What is artificial intelligence?

Artificial intelligence is ubiquitous. Whether in language assistants, chatbots, semantic text analysis, streaming services, smart factories or autonomous vehicles – AI will change the way we shape our professional and private lives, how we do business and how we live together as a society. Policymakers, too, declare AI to be a fundamental requirement for our future prosperity.

And although more and more people are using artificial intelligence, few know exactly what it is. This is hardly surprising: defining artificial intelligence in a clear-cut way is a difficult undertaking. Just as human intelligence cannot be unambiguously described – a distinction is made between cognitive, emotional and social intelligence, for example – there is no universal definition of artificial intelligence used consistently by all parties. The following paragraphs nevertheless attempt to create some clarity and transparency.

AI: definition and history

Historically, the term goes back to the US computer scientist John McCarthy, who in 1956 invited researchers from various disciplines to a workshop entitled “Dartmouth Summer Research Project on Artificial Intelligence”. The meeting’s guiding premise was: “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Under this premise, the foundation was laid as early as 1956 for what would later become the field of artificial intelligence.

AI: simulation and automation of cognitive abilities

Today, numerous encyclopedia entries define artificial intelligence as a branch of computer science that deals with the machine imitation of human intelligence. The English Oxford Living Dictionary, for example, describes AI as follows: “The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”

Meanwhile, AI experts in research and practice also agree on a similarly abstract working definition: artificial intelligence is the automation and/or simulation of cognitive abilities, including visual perception, speech recognition and generation, reasoning, decision making and action, as well as a general adaptability to changing environments.

The performance of these simulated and/or automated cognitive abilities varies in intensity. While they are still quite rudimentary in language assistants such as Alexa and Siri, they already far exceed human abilities in some areas – in medicine, for example, where systems evaluate millions of MRI scans.

Strong AI vs. weak AI

At a very abstract level, the development of artificial intelligence can be classified into two categories: weak and strong AI. Weak AI (also known as narrow AI) comprises the majority of all development activity and enables an efficient simulation of specific individual human capabilities. Strong AI, which would possess the same or even greater intellectual abilities than humans, is currently still very far from becoming reality.

Strong AI

A strong AI would not only be able to act purely reactively, but would also be creative, flexible, able to make decisions under uncertainty and motivated by its own initiative – and therefore able to act proactively and according to plan. According to expert opinion, however, such an AI neither currently exists nor is its existence foreseeable.

In science and philosophy, it is highly controversial whether and when a strong AI can be developed at all. One of the most contested issues is whether an AI will ever have empathy, self-reflection and consciousness – qualities that (to date) have been at the core of being human. Statements that announce or promise the existence of such a strong or general AI (also: AGI or artificial general intelligence) should therefore be met with scepticism. Exaggerated expectations of AI, often framed as superintelligence or singularity and fuelling exaggerated fears of robotic domination, merely lead to a populist debate. They are anything but conducive to transparent discourse.

Weak AI

Weak AI, on the other hand, focuses on solving individual application problems, with the developed systems being capable of self-optimisation, i.e. learning. To this end, attempts are made to simulate and automate specific aspects of human intelligence. Most currently-existing commercial AI applications are weak AI systems. Weak AI systems are currently used in the following concrete fields of application, among others:

  • Digital speech and text processing (natural language processing): AI systems that can fully or partially automatically understand or generate the content and context of texts and language. In this way, for example, football or election reports can be written automatically, texts can be translated and chatbots or language assistants can communicate.
  • Robotics & autonomous machines: smart and autonomously-navigating (transport) machines such as drones, cars and rail vehicles that can adapt independently to new environmental situations and learn in real time.
  • Pattern recognition in large data sets: control and optimisation of infrastructures (e.g. in the flow of road traffic or on the power grid); identification of cases of fraud, money laundering or terrorist financing in the financial industry; predictive policing in the fight against crime; AI-based diagnostic systems in the healthcare sector (e.g. evaluation of radiological image data) etc.

Artificial intelligence and pattern recognition

Pattern recognition plays a special role in the broad application field of artificial intelligence: on the one hand, numerous current advances in AI can be attributed to advances in pattern recognition; on the other, different fields of application (e.g. image, text and speech recognition) all make use of pattern recognition, at least in part.

The aim is to extract meaningful and relevant information from large, unstructured amounts of data by automatically capturing regularities, repetitions or similarities. The basis is the ability to classify: characteristics must be identified that are shared within a category but do not occur outside it. In this way, faces can be recognised on digital photos, songs can be identified or traffic signs can be filtered out of a flood of image data. The systematic recognition of patterns is also of great relevance in speech and text recognition.
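
To make the idea of classification concrete, here is a minimal sketch that trains a nearest-neighbour classifier on invented two-number features. Real pattern recognition would operate on far higher-dimensional data such as raw image pixels; the feature names and values are assumptions for illustration only.

```python
# Minimal pattern-recognition sketch: learn which feature combinations
# belong to which category. The toy features [redness, edge_density]
# and all values are invented for illustration.
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical training samples: [redness, edge_density] -> category
X_train = [[0.9, 0.8], [0.8, 0.7], [0.9, 0.9],   # traffic signs
           [0.1, 0.2], [0.2, 0.1], [0.1, 0.3]]   # background scenery
y_train = ["sign", "sign", "sign", "other", "other", "other"]

clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)

# A new, unseen sample is assigned to the category whose known
# examples it most resembles -- the essence of classification.
print(clf.predict([[0.85, 0.75]]))  # -> ['sign']
```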

Artificial intelligence and language

One of the most challenging and simultaneously most exciting application areas of artificial intelligence is the machine processing of natural language – better known as natural language processing (NLP). As an interdisciplinary field at the intersection of linguistics and artificial intelligence, its aim is to develop algorithms that break down elements of human language and process them mechanically. This means that NLP can translate everything that people express verbally or in writing into digitally-readable information. The process also works in the opposite direction: data can be turned into speech or text. These two directions mark the two sub-disciplines into which NLP can be divided: natural language understanding (NLU) and natural language generation (NLG, also known as automatic language and text generation).

Natural language generation

While the translation of natural language or texts into data is a typical form of natural language understanding, the opposite holds for natural language generation. In NLU, natural text is usually processed into data; in NLG processes, natural text is created from data. In all areas where structured data is generated – in e-commerce, for example, in the financial world or in reporting on sports, weather or elections – NLG programs can create reader-friendly texts from data in seconds. In this way, NLG systems free copywriters and editors from monotonous, routine work. The time saved can then be invested in more creative or conceptual work.
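
As a minimal illustration of the data-to-text direction, the following sketch fills a fixed sentence template from a structured record. The record fields and wording are invented; production NLG systems are considerably more sophisticated.

```python
# Minimal template-based NLG sketch: structured match data in, a
# readable report sentence out. Fields and phrasing are invented.
def generate_match_report(match: dict) -> str:
    home, away = match["home"], match["away"]
    hg, ag = match["home_goals"], match["away_goals"]
    if hg > ag:
        outcome = f"{home} beat {away} {hg}-{ag}"
    elif ag > hg:
        outcome = f"{away} won {ag}-{hg} away at {home}"
    else:
        outcome = f"{home} and {away} drew {hg}-{ag}"
    return f"On matchday {match['matchday']}, {outcome}."

# Structured input as it might come from a sports data feed.
print(generate_match_report(
    {"home": "Riverside FC", "away": "Hilltop United",
     "home_goals": 2, "away_goals": 1, "matchday": 7}))
# -> On matchday 7, Riverside FC beat Hilltop United 2-1.
```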

Natural language understanding

Natural language understanding, on the other hand, has the goal of “understanding” a natural-language text and generating structured data from it. The generic term NLU can be applied to a variety of computer applications, ranging from small, relatively simple tasks such as short commands to robots, to highly-complex tasks such as the complete understanding of newspaper articles.
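
Here is a drastically simplified sketch of the text-to-data direction, using a single hand-written rule for one invented command format. Real NLU systems learn such mappings statistically rather than relying on one regular expression.

```python
# Minimal rule-based NLU sketch: turn a short natural-language command
# into structured data. One invented command pattern, for illustration.
import re

COMMAND = re.compile(r"turn (on|off) the (\w+) (\w+)")

def understand(utterance: str) -> dict | None:
    """Map e.g. 'turn on the kitchen light' to structured data."""
    match = COMMAND.match(utterance.lower())
    if match is None:
        return None  # utterance not covered by this toy grammar
    action, location, device = match.groups()
    return {"action": f"turn_{action}", "device": device,
            "location": location}

print(understand("Turn on the kitchen light"))
# -> {'action': 'turn_on', 'device': 'light', 'location': 'kitchen'}
```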

Differences and similarities: AI, machine learning, deep learning

The terms machine learning and deep learning are closely related to the term artificial intelligence, and the words are often used synonymously in public discussion. A brief classification of the terms below should make it easier to handle the different terminologies transparently.

While artificial intelligence serves as a generic term for all areas of research and development that deal with the simulation and automation of cognitive abilities, as described above, machine learning and deep learning are better understood as subfields of AI. Machine learning, in particular, is often used interchangeably with AI, but is in fact only a branch of it – even though the vast majority of current advances in AI applications relate to machine learning. It therefore seems all the more helpful to take a closer look at the term machine learning first.

Machine Learning

Machine learning (also called ML) is a specific category of algorithms that use statistics to find patterns in large amounts of data, known as big data. These algorithms then use the patterns found in historical (and ideally representative) data to make predictions about certain events – such as which series a user might like on Netflix or what exactly is meant by a specific voice input to Alexa. In machine learning, algorithms are therefore able to learn patterns from large data sets and independently find the solution to a particular problem without each individual case having been explicitly programmed beforehand. With the help of machine learning, systems are thus able to generate knowledge from experience. In this sense, the American computer scientist and AI pioneer Arthur L. Samuel described machine learning as early as 1959 as giving systems the “ability to learn without having been explicitly programmed”.
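
The following drastically simplified sketch illustrates "learning from experience": the preference for each genre is derived from (invented) historical viewing data rather than being hard-coded, and is then used to predict interest in new series. Real systems use statistical models over millions of interactions.

```python
# Minimal "learn from historical data, then predict" sketch. The
# viewing history is invented; nothing about genres is hard-coded.
from collections import Counter

# Hypothetical history: (genre, did the user finish the series?)
history = [("crime", True), ("crime", True), ("comedy", False),
           ("crime", True), ("documentary", False), ("comedy", False)]

# "Training": per genre, the fraction of series the user finished.
watched = Counter(genre for genre, _ in history)
finished = Counter(genre for genre, done in history if done)
preference = {g: finished[g] / watched[g] for g in watched}

def predict_interest(genre: str) -> float:
    """Estimated probability that the user will like this genre."""
    return preference.get(genre, 0.5)  # neutral guess for unseen genres

print(predict_interest("crime"))   # 1.0 -- learned, not programmed
print(predict_interest("comedy"))  # 0.0
```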

Extracting relevant data from big data and making predictions

In practice, this means the following: in streaming services, for example, algorithms learn – without having been programmed beforehand in any way with regard to which series genres even exist – that certain types of series are watched by certain classes of users. In contrast to rule-based systems, machine learning therefore does not strictly require concrete if-then rules to be implemented and then applied to a data set (for classification purposes, for example) for each new case that occurs. Rather, machine learning uses the existing data set to independently extract and summarise relevant data and thereby make predictions.

ML algorithms can therefore be used to optimise or (partially) automate processes that would otherwise have to be done manually, such as text or image recognition. Machine learning drives many of the services we use today – recommendation systems like those of Netflix, YouTube and Spotify, search engines like Google and Baidu, social media feeds like those of Facebook and Twitter, and language assistants like Siri and Alexa. In all these cases, each platform collects as much data as possible about its users – which genres they like to watch, which links they click on, which songs they prefer to listen to – and uses machine learning to estimate as accurately as possible what users want to see or hear.

Deep Learning

Deep learning is a subcategory of machine learning and is therefore also to be understood as a branch of artificial intelligence. While ML is a kind of self-adaptive algorithm that improves through experience or historical data, deep learning significantly enhances the process of machine learning and allows systems to train themselves. The technique used for this is called a neural network: a kind of mathematical model whose structure is loosely based on the workings of the human brain.

Neural networks and black boxes

Neural networks contain numerous layers of computing nodes (loosely analogous to human neurons) that interact in an orchestrated way to search through data and deliver a final result. As the contents of these layers become increasingly abstract and less comprehensible, they are also referred to as hidden layers. Through the interaction of several of these layers, “new” information can be formed between the layers, representing a kind of abstract representation of the original information or input signals. Even developers are therefore not able, or only to a limited extent, to comprehend what the networks actually learn or how they arrived at a certain result. This is referred to as the black-box character of AI systems. Finally, machine and deep learning can be differentiated by three types of learning: supervised, unsupervised and reinforcement learning.
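
Before turning to these three types of learning, the following minimal numpy sketch makes the layered structure concrete: a single forward pass through one hidden layer. The weights are random placeholders; a real network would learn them from data (typically via backpropagation), and deep networks stack many such layers.

```python
# Minimal feedforward sketch: input -> hidden layer -> output scores.
# Weights are random stand-ins; a trained network would learn them.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    """Common activation: pass positives through, zero out negatives."""
    return np.maximum(0.0, x)

# Shape: 4 input features -> 8 hidden nodes -> 2 output scores.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

x = rng.normal(size=4)       # one input sample
hidden = relu(x @ W1 + b1)   # the "hidden layer": an abstract
                             # re-representation of the input
output = hidden @ W2 + b2    # final scores, e.g. one per class

print(hidden)  # values a developer can print but rarely interpret
print(output)
```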

Supervised learning

In supervised learning, the data to be analysed is classified beforehand in order to tell the ML system what patterns to look for. This is how the automatic classification of images is learned, for example: first, images are labelled manually with regard to certain variables (e.g. whether a facial expression is sad, cheerful or neutral); once thousands of such examples have been created, an algorithm can then categorise new image data automatically.
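
A minimal supervised-learning sketch with scikit-learn, assuming invented two-number features in place of real image data (which would be far higher-dimensional): manually labelled examples first, then a classifier that categorises unseen samples.

```python
# Minimal supervised learning: pre-labelled examples train a classifier
# that then categorises new data. Features and values are invented
# stand-ins for real image features.
from sklearn.linear_model import LogisticRegression

# Hypothetical labels: [mouth_curvature, eyebrow_angle] -> expression
X_train = [[0.9, 0.1], [0.8, 0.2], [-0.7, -0.6], [-0.9, -0.4],
           [0.0, 0.0], [0.1, -0.1]]
y_train = ["cheerful", "cheerful", "sad", "sad", "neutral", "neutral"]

clf = LogisticRegression()
clf.fit(X_train, y_train)  # the supervised "training phase"

# After training, unseen samples are categorised automatically.
print(clf.predict([[0.85, 0.15], [-0.8, -0.5]]))  # e.g. ['cheerful' 'sad']
```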

Unsupervised learning

In unsupervised learning (also: unattended learning), the data to be analysed carries no previously-assigned labels. The algorithm therefore does not have to be provided with exact targets in an upstream training phase. Instead, the ML system itself searches for whatever patterns it can find. Unsupervised learning methods are therefore preferred for the exploration of large data sets. However, unsupervised techniques (except in the area of cybersecurity) are currently still rather uncommon in practice.
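
The following minimal sketch shows the unsupervised setting with scikit-learn's k-means clustering: no labels are supplied, and the algorithm groups similar samples on its own. The data (invented hours watched per genre) is an assumption for illustration.

```python
# Minimal unsupervised sketch: k-means receives no labels or targets,
# it simply groups similar samples. Data is invented.
from sklearn.cluster import KMeans

# Hypothetical viewing profiles: [hours of crime, hours of comedy]
X = [[9, 1], [8, 2], [7, 1],   # mostly-crime viewers
     [1, 9], [2, 8], [1, 7]]   # mostly-comedy viewers

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(kmeans.labels_)  # two groups found without any pre-labelling,
                       # e.g. [1 1 1 0 0 0]
```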

Reinforcement learning

The method in which an algorithm learns through reward and punishment is described as reinforcement learning. A reinforcement algorithm thus learns by pure trial and error whether a goal is achieved (reward) or missed (punishment). Reinforcement learning is used, for example, in the training of chess programs: when playing against other (simulated) chess programs, a system can very quickly learn whether a certain behaviour has led to the desired goal, victory (reward), or not (punishment). Reinforcement learning is also the training foundation of Google’s AlphaGo, the program that has defeated the best human players in the complex game of Go.
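
Here is a minimal trial-and-error sketch of the reward/punishment principle, far simpler than a chess program or AlphaGo but built on the same signal: act, observe whether the action succeeded, and prefer what worked. The action names and win probabilities are invented.

```python
# Minimal reinforcement sketch: an epsilon-greedy agent learns by trial
# and error which of two (invented) strategies wins more often.
import random

random.seed(0)
win_prob = {"aggressive": 0.7, "defensive": 0.4}  # hidden from the agent
value = {"aggressive": 0.0, "defensive": 0.0}     # learned estimates
plays = {"aggressive": 0, "defensive": 0}

for _ in range(1000):
    # Mostly exploit the best-known strategy, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(list(value))
    else:
        action = max(value, key=value.get)
    reward = 1.0 if random.random() < win_prob[action] else 0.0  # win/loss
    plays[action] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    value[action] += (reward - value[action]) / plays[action]

print(value)  # the "aggressive" estimate should approach 0.7
```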

Limits and possibilities: what can artificial intelligence (not) do?

Quite different definitions of artificial intelligence sometimes circulate not only in media discourse but also in expert circles. Unclear ideas and definitions of what AI is and is not, and of what it can and cannot do, contribute to uncertainty rather than acceptance in society. They lead to a debate that is often polarised and driven by unrealistic ideas. An explanation of the limits and possibilities of artificial intelligence is therefore of the utmost relevance. Only in this way can the impact of AI on society, economy, culture and science be realistically assessed.

All in all, the use of artificial intelligence is associated with high hopes: AI-based medical (cancer) diagnostics, for example, promise great progress in the health sector. And in road traffic, a reduction in accidents or traffic jams could lead to a lower number of road deaths on the one hand and a lower environmental impact on the other. The way we work also seems to be facing disruptive changes: AI could relieve workers of dangerous and monotonous tasks.

On the other hand, technology-critical sceptics with dystopian prognoses of the future warn against the use of AI and the supposed resulting rise to power of a superintelligence or singularity. Even Stephen Hawking and tech visionary Elon Musk have warned of the threat of AI. It should be noted, however, that such fears bear little relation to the (weak) AI systems that exist so far.

A balanced debate in which the (possible) advantages and disadvantages of the development and implementation of AI can be discussed transparently and in an enlightened way is indispensable. Ultimately, the aim of all AI systems should be to create added social, cultural and economic value and thus contribute to human well-being. Artificial intelligence should intelligently support people in everyday life and, where applicable, at work, taking on unpleasant or dangerous tasks without making people superfluous. The result: more time and resources to devote to creative or emotionally and socially-valuable tasks that give people pleasure and create meaning and added value in society, economy and culture.
