Artificial Intelligence: truths and myths



In the last few years, we have been experiencing the rise of a new set of methods, algorithms and applications under the label of Artificial Intelligence, partly due to great advances in computer technologies. However, the concept is not new: according to the most widespread references, its origin dates back to the middle of the last century (McCarthy, Minsky, Rochester, & Shannon, n.d.).

In the 1950s it was already proposed that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it”. Since then, the concept has been put into practice in very different ways, giving rise to a great variety of systems that, in one way or another, can be considered Artificial Intelligence. It is this generality in the definition, together with the fact that the technology is clearly at a hype moment, that mainly explains why almost any computer system is now promoted as “AI-based”.

In this article we will try to clarify the concepts and foundations of AI, so that you can form a realistic opinion about it and not just get carried away by buzzwords.

First step: classification and definition

First things first, it is important to establish a classification of Artificial Intelligence systems that allows us to differentiate them according to their specificity. Although many classifications are possible, at this point we will focus on perhaps the most general one, which defines the following classes of systems, ordered from lower to higher complexity:

  • Weak AI / Narrow AI: any Artificial Intelligence system focused on a single, specific task, in which it can exceed human capability.
  • General AI / Strong AI: Artificial Intelligence systems capable of demonstrating human-level performance across the range of cognitive tasks that people carry out in their daily activities.
  • Artificial Superintelligence: a General Artificial Intelligence whose level exceeds human capacity in cognitive tasks.

In light of these definitions, it is clear that current systems fit under the heading of Weak AI, and technology still has some way to go before it can reach the next level. However, despite its name, today’s Weak AI has the ability to solve complex problems with greater precision and efficiency than a human being, which is a great advantage in many situations.

Going into a little more technical detail, Weak AI systems can be further subdivided into:

  • Symbolic AI: the system executes instructions and follows rules defined by a human expert. The so-called expert systems, based on ontologies, rules or formal logic, fit within this category (Puppe, 1993).
  • Machine Learning: this type of system is capable of acquiring the ability to perform a task without being explicitly instructed through a specific sequence of steps. Put more colloquially, the system is told (more or less directly, depending on the family of algorithms in question) what is expected of it, but not how to achieve it, as opposed to the previous case. This category includes systems based on regression, Deep Learning, clustering, etc. The sketch below illustrates the contrast.
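As a concrete illustration, here is a minimal Python sketch of both families on a purely hypothetical fraud-flagging task with toy data: the symbolic approach encodes the expert’s rule by hand, while the machine learning approach infers a rule from labelled examples.

```python
from sklearn.tree import DecisionTreeClassifier

# Symbolic AI: a human expert writes the decision logic as an explicit rule.
def is_suspicious_rule_based(amount, is_foreign, hour):
    """Hand-coded expert rule: we tell the system *how* to decide."""
    return amount > 10_000 or (is_foreign and hour < 6)

# Machine Learning: we only provide examples of what we expect (labels);
# the algorithm infers the decision logic from the data itself.
X = [[12000, 0, 14], [50, 0, 10], [300, 1, 3], [80, 0, 22]]  # amount, foreign?, hour
y = [1, 0, 1, 0]                                             # 1 = suspicious

model = DecisionTreeClassifier(random_state=0).fit(X, y)
print(model.predict([[15000, 1, 2]]))  # a learned rule, not a hand-coded one
```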

Second step: reliable and trustworthy systems

The greatest focus right now is on machine learning systems, especially those based on Deep Learning, because of their greater ability to represent complex problems. However, they also have limitations: the large amounts of data and computing resources they require to function properly, and the fact that the resulting models cannot easily be interpreted by a human being (Došilović, Brčić, & Hlupić, 2018). This second point, secondary as it might seem in terms of problem solving, is of vital importance for obtaining systems that are secure and reliable, especially in environments that are critical in terms of physical safety (autonomous vehicles or industrial robotics, for example) or logical safety (fintech, electronic voting, etc.).
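To make the interpretability problem tangible, here is a minimal sketch, assuming scikit-learn and one of its bundled datasets: a small neural network is already opaque to direct inspection, so a model-agnostic technique from the explainable AI toolbox (permutation importance) is used to recover at least a rough ranking of which inputs matter.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# A small neural network: its raw weights give a human almost no insight.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0))
model.fit(X_train, y_train)

# Post-hoc explanation: shuffle each feature and measure how much the score
# drops, a model-agnostic proxy for how much the model relies on it.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Techniques like this mitigate, but do not eliminate, the opacity of deep models, which is why interpretability remains an active research topic.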

Both machine learning and symbolic AI are strategies that can offer very good results. With the technology available today (remember, what we know as Weak AI), it is possible to solve specific problems with a high degree of accuracy, provided three main factors are in place:

  1. The availability of data to configure, train and validate the AI system.
  2. The appropriate specialization, that is to say, a team of people with skills ranging from purely mathematical data analysis to the design and production of complex computer systems.
  3. The right tools to implement these systems.

There are many options available both in the state of the art and in the market regarding the tools needed to implement Artificial Intelligence systems. The main difficulty is knowing how to properly choose those that best fit the needs and requirements of each problem, especially if the final objective is that the solution proposed can evolve and adapt to other problems and situations in the future.

The availability of adequate data is another of the critical points for the correct implementation of this type of solution, especially in cases where we are talking about Machine Learning and, above all, with supervised approaches. Not only is it necessary to have a high volume of data to work with, but it is also necessary that the data be balanced (for example, in the case of a classification system, that all classes have a more or less balanced number of members); that it does not include much noise (erroneous or mislabeled data); and that it is sufficiently representative of the variability of the problem to be solved.
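The first of those checks, class balance, is easy to automate; here is a minimal sketch on a hypothetical set of labels (noise and representativeness usually require manual review and domain knowledge instead).

```python
from collections import Counter

# Hypothetical labels for a classification problem.
labels = ["cat", "dog", "cat", "cat", "cat", "dog", "cat", "bird"]

# Class balance: all classes should have a comparable number of members.
counts = Counter(labels)
print(counts)  # Counter({'cat': 5, 'dog': 2, 'bird': 1}) -> clearly imbalanced

imbalance_ratio = max(counts.values()) / min(counts.values())
if imbalance_ratio > 3:  # threshold is illustrative, not a universal rule
    print(f"Warning: imbalance ratio {imbalance_ratio:.1f}; consider resampling,"
          " class weights, or collecting more data for the minority classes")
```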

For large data sets, cleaning and adaptation prior to exploitation by artificial intelligence algorithms is a problem in itself, and not always an easy one to solve. Recently, to counteract the problems arising from data availability, approaches are beginning to be considered that allow the use of a smaller number of samples (few-shot learning), models that learn from their own unlabeled data (self-supervised learning), or models that learn progressively, obtaining better results as they incorporate more information (continual learning). However, some of these strategies are still in the early stages of research and require an even higher level of expertise to implement.

The specialization of the human team in charge of creating the solutions based on Artificial Intelligence is the last of the keys to success in this task. Despite the wide availability of tools, information and training materials for the deployment of AI-based systems, it is necessary to make a clear distinction between the application of algorithms as a “black box” (currently accessible to anyone with basic computer skills thanks to the multitude of resources on the web) and the understanding, adaptation and specialization of these algorithms to solve specific problems with high efficiency and accuracy requirements.

Third step: AI strategy in the company

Companies can choose different ways to solve their business problems using Artificial Intelligence systems. Below we describe the strategies they can follow, ordered by their degree of novelty with respect to what is available on the market:

Disruptive contributions to the state of the art

In this position are those who make improvements to the fundamental foundations of AI (interpretability, generalization, efficiency in the use of data…). This is a position that guarantees very high differentiation with respect to other actors, but at the cost of a level of complexity that generally keeps the solutions created far from application in practical use cases.

Only the most specialized research centers and universities are found here, as well as the major international contributors such as Google, Facebook, Baidu, etc.

Incremental contributions to the state of the art

This position includes those who modify known schemes to improve their performance or to solve specific problems in a new domain. Despite contributing less to the fundamental foundations of AI than the previous group, their contributions allow them to differentiate themselves and have great value in specific fields of application, at the cost of requiring more development time than actors further down the chain.

This is typically where most of today’s academic community and leading technology centres in the field are located, such as DeepMind, OpenAI, the Alan Turing Institute for Data Science and AI, etc.

Implementation of state-of-the-art results

This strategy is chosen by those who mainly analyse academic results and seek to apply them to specific problems in their field. The value provided is far from the fundamental foundations of AI and focuses on the resolution of practical problems, with a shorter implementation time than in the previous cases. However, it still requires a high level of knowledge of AI algorithms and tools to find the optimal solution in each case, as well as mastery of the characteristics of the application sector.

This is the typical positioning of applied R&D centres.

In-house training of open source frameworks and algorithms

At this point are those who make appropriate use of existing results and tools, training or configuring AI algorithms with domain-specific datasets.

Despite having a lower degree of innovation than the previous cases and depending on external results to offer solutions, their value is in the speed with which they are able to bring AI systems to reality, which can be very relevant in well-known problems. The vast majority of technology consulting and solutions integration companies are at this point.
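A minimal sketch of this positioning, assuming scikit-learn as the open source tool and synthetic data standing in for a company’s real domain-specific dataset: the algorithm is taken as-is, and the work consists of training, configuring and evaluating it for the problem at hand.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in so the sketch runs end to end; a real project would
# load the company's own domain-specific dataset here.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Off-the-shelf open source algorithm, configured rather than invented:
# the value added lies in how quickly it reaches production for the domain.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```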

Use of models trained by third parties

Finally, where the global nature of the problems encountered or the availability of data of interest to researchers upstream in the chain allow it, examples of direct application of pre-trained models for defined use cases can be found.

This is perhaps the least common case, given that a series of very specific conditions must be met for it to be applied, but on the other hand it has the enormous advantage of being the option with the least technical risk and the shortest start-up time.
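A minimal sketch of this last case, assuming the open source Hugging Face transformers library: a model trained entirely by third parties is applied directly to our own inputs, with no training data and no training step on our side.

```python
from transformers import pipeline  # pip install transformers

# Download and use a model pre-trained by a third party: the only work
# left is wiring it into the use case and validating that it fits.
classifier = pipeline("sentiment-analysis")
print(classifier("The new release finally fixed the latency problems."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

The trade-off is exactly the one described above: minimal technical risk and start-up time, but only when the pre-trained model’s task and data closely match our own.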

Fourth step: #FutureBeginsToday

If there is one thing that cannot be disputed, it is that we live in an era of splendor for Artificial Intelligence technologies and that, in one way or another, they are going to bring about a disruptive change in the way many of today’s problems are solved.

From the point of view of the problem owners (the companies), which do not necessarily have to be technological, the boom in this type of technology becomes more of an obstacle than an advantage when it comes to finding partners to work with in responding to those problems, since it is terribly complicated to differentiate between sound, valuable proposals and others that, although probably well intentioned, lack the required soundness and technological specialization.

It is common for a company to want to incorporate Deep Learning technologies to solve a specific problem, incurring very high implementation and computing costs, without it being really necessary. The essential first step is an exhaustive preliminary analysis to determine what the problem is and what the company actually needs. In our experience, in some cases it is possible to implement solutions that provide better results (and are more economical) using traditional Machine Learning techniques. By this we do not mean that the most advanced algorithms are never necessary, but that the most important thing is to find the optimal tool for a particular problem, and that this is a task that goes far beyond the hype and requires deep knowledge of the technology.
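A minimal sketch of that kind of preliminary comparison, assuming scikit-learn and one of its bundled datasets: a cheap classical baseline is evaluated under the same protocol as a more complex candidate before committing to the heavier option.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_digits(return_X_y=True)

# Classical baseline first: if it already meets the accuracy requirement,
# a deeper model may add cost without adding value.
candidates = {
    "logistic regression": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "neural network": make_pipeline(StandardScaler(), MLPClassifier(max_iter=1000, random_state=0)),
}
for name, model in candidates.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} mean accuracy")
```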

Our recommendation to incorporate AI technologies in any of the processes of a business is to always count on the help of specialists who can guide us both in the definition of the problem and in the design and application of solutions from a neutral, critical and informed point of view.

References

Došilović, F. K., Brčić, M., & Hlupić, N. (2018). Explainable artificial intelligence: A survey. 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), 0210–0215. https://doi.org/10.23919/MIPRO.2018.8400040

McCarthy, J., Minsky, M. L., Rochester, N., & Shannon, C. E. (n.d.). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. Retrieved from https://aaai.org/ojs/index.php/aimagazine/article/view/1904/1802

Puppe, F. (1993). Characterization and History of Expert Systems. In F. Puppe (Ed.), Systematic Introduction to Expert Systems: Knowledge Representations and Problem-Solving Methods (pp. 3–8). https://doi.org/10.1007/978-3-642-77971-8_1


Authors: David Chaves Diéguez, Head of Technology, and David Jiménez Cabello, Technical Manager in Biometrics in the area of Multimodal Information at Gradiant

