Over the past 50 to 60 years, artificial intelligence has traveled from the wildest dreams of science-fiction writers to the utilitarian tools we use every day. Although the first efforts in the field of neural networks date back to the 1940s-1960s, the most significant technological boom has occurred in the last several years. Andrey Kirsanov, CTO of Brainy Solutions, explains how and why companies are developing AI today.

Artificial intelligence, like many other modern technologies, is surrounded by an aura of mystery in the eyes of the general public. This is partly the legacy of science fiction: by the term “AI”, people usually imagine a computerized mind, completely analogous to the human brain but created artificially by scientists and programmers. These views, however, are not entirely correct.

People expect AI to be a system that can do anything and everything. But it is important to understand that current artificial intelligence involves only the automation of complex, yet very specific, processes. Neural networks operate according to well-defined rules and, in most cases, require huge volumes of training data.

That is why neural networks began to develop rapidly only with the widespread adoption of broadband connectivity and cloud computing, since isolated devices (such as desktop computers, smartphones, and security cameras) generally lack the computing power to run complex neural networks locally. Even with the massive computational resources of the cloud, AI applications remain significantly more resource-intensive than “ordinary” non-AI code, so any neural network is generally adapted to one specific task. Furthermore, a neural network has no will of its own and cannot go beyond the task at hand to conjure up plans of taking over the world.

Below, we describe a few common trends in AI: applications where neural networks can deliver tangible benefits in addressing common consumer and business challenges.

Trend # 1: Voice Assistants

Voice assistants are among the most promising technologies today. In essence, a voice assistant is a new, more advanced type of interface: the next stage of development beyond a regular website or mobile app.

Many companies are now looking to adopt such AI-based corporate assistants. According to forecasts, within the next five years every person will have a personal voice assistant; it will become as much a part of our lives as smartphones are now.

You will, without hesitation, ask something like: “Alexa/Google/Siri, where can I buy headphones for my phone?” The assistant will convert your voice into text, process the request and send it, together with your geolocation, to the database of, say, the nearest electronics store. It will find the product of interest among the store’s catalogue and in-stock items and confirm the store’s opening hours. Next, it will tell you that the headphones you are looking for are in the store, or offer to place an order for delivery. The money will be charged automatically to a linked payment method, assuming advance permission for such seamless transactions has been given.
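The request flow above can be sketched in a few lines of code. This is a hypothetical illustration, not the API of any real assistant: the catalogue data, function names, and responses are all invented, and `transcribe` is a stand-in for a real cloud speech-to-text service.

```python
# A toy in-stock catalogue for a hypothetical nearby electronics store.
CATALOGUE = {
    "headphones": {"price": 49.99, "in_stock": True},
    "charger": {"price": 19.99, "in_stock": False},
}

def transcribe(audio: str) -> str:
    """Stand-in for the speech-to-text step (real assistants call a cloud ASR service)."""
    return audio.lower().strip()

def handle_request(audio: str, user_location: str) -> str:
    """Match the transcribed query against the catalogue and answer."""
    query = transcribe(audio)
    for product, info in CATALOGUE.items():
        if product in query:
            if info["in_stock"]:
                return (f"Yes, {product} are in stock near {user_location} "
                        f"for ${info['price']:.2f}.")
            return f"{product} are out of stock; shall I order them for delivery?"
    return "Sorry, I couldn't find that product."

print(handle_request("Where can I buy headphones for my phone?", "downtown"))
```

A real assistant replaces each of these steps with a trained model or a store API call, but the pipeline shape (transcribe, match, respond) is the same.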

Trend # 2: Computer Vision

The second technology increasingly entering our everyday life is computer vision. It is becoming more widespread and is now used almost everywhere.

Received a traffic-camera fine? That is computer vision. Unlocked your smartphone with face recognition? The same. If a doctor wants to take a closer look at your X-ray, they may also be using computer vision. This is indeed a huge industry!

Virtually all facial recognition technologies used in security systems are a product of machine learning. After appropriate training, artificial intelligence can perform a wide range of tasks: from searching for particular people in a crowd of passers-by to monitoring the working hours of employees at a plant.

All of these technologies are based on machine learning: not just learning by example, but the machine’s ability to find patterns on its own. For example, to train a facial recognition application, we might take five million faces, show them to an AI-powered system, and teach it to look for patterns in those images.

Machine learning has broad applicability. For example, computer vision can be used to help diagnose lung cancer. A neural network is shown an array of normal MRI images and an array of images with abnormalities, and is thereby taught to recognize the signs of such anomalies. It then becomes possible to show the network any MRI image and ask whether there are abnormalities the doctor should pay attention to.
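As a minimal sketch of this learn-by-example idea, here is a nearest-centroid classifier: it computes one “prototype” per class from labelled examples and assigns a new sample to the closest prototype. The two-number feature vectors are invented toy data, not real medical images; real diagnostic systems use deep convolutional networks rather than this simple rule, but the principle of generalizing from labelled examples is the same.

```python
def centroid(samples):
    """Average the feature vectors of one class into a single prototype."""
    n = len(samples)
    return [sum(x[i] for x in samples) / n for i in range(len(samples[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# "Training": labelled examples of each class (toy 2-D features, invented).
normal = [[0.10, 0.20], [0.20, 0.10], [0.15, 0.15]]
abnormal = [[0.90, 0.80], [0.80, 0.90], [0.85, 0.85]]
prototypes = {"normal": centroid(normal), "abnormal": centroid(abnormal)}

def classify(features):
    """Assign a new sample to the class whose prototype is closest."""
    return min(prototypes, key=lambda label: distance(features, prototypes[label]))

print(classify([0.88, 0.82]))  # → abnormal
```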

Machine learning encompasses a number of approaches, one of which is reinforcement learning. This is how navigation programs for autonomous unmanned aerial vehicles (UAVs) are trained. A virtual model of an area with various obstacles is created: trees, houses, power lines, and so on. A virtual drone is launched into this environment; whenever it hits an obstacle, it receives penalty points, which it learns to avoid. In this way it accumulates the experience necessary for orientation in real space and trouble-free piloting of the drone.
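The training loop described above can be sketched with tabular Q-learning on a toy grid. The grid, rewards, and hyperparameters here are all invented for illustration; real UAV training uses far richer simulators and deep networks, but the penalty-driven feedback loop is the same.

```python
import random

# Toy reinforcement learning: a "drone" on a 3x3 grid learns to reach the
# goal while avoiding an obstacle cell (all values invented for illustration).
random.seed(0)

SIZE, START, GOAL, OBSTACLE = 3, (0, 0), (2, 2), (1, 1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    r, c = state[0] + action[0], state[1] + action[1]
    if not (0 <= r < SIZE and 0 <= c < SIZE) or (r, c) == OBSTACLE:
        return state, -5.0          # penalty points: wall or obstacle hit
    if (r, c) == GOAL:
        return (r, c), 10.0         # reward: target reached
    return (r, c), -0.1             # small cost per move

Q = {((r, c), a): 0.0 for r in range(SIZE) for c in range(SIZE) for a in range(4)}
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for _ in range(500):                # training episodes
    s = START
    while s != GOAL:
        if random.random() < EPS:   # sometimes explore a random action
            a = random.randrange(4)
        else:                       # otherwise act greedily on current Q
            a = max(range(4), key=lambda x: Q[(s, x)])
        s2, reward = step(s, ACTIONS[a])
        Q[(s, a)] += ALPHA * (reward + GAMMA * max(Q[(s2, b)] for b in range(4)) - Q[(s, a)])
        s = s2

# Follow the learned (greedy) policy from the start position.
s, path = START, [START]
while s != GOAL and len(path) < 10:
    a = max(range(4), key=lambda x: Q[(s, x)])
    s, _ = step(s, ACTIONS[a])
    path.append(s)
print(path)
```

After training, the greedy path steers around the obstacle cell to the goal, exactly the behaviour the penalty points were designed to produce.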

Trend # 3: “DeepFakes” – Image Generation

One of the trends in machine learning is “DeepFake” technology, often based on generative adversarial networks (GANs). Given an array of training data, a neural network can complete a specific picture or even turn a schematic sketch into a photorealistic image. The most striking examples of this trend are applications that can “age” a person by a certain number of years or “change” their gender.

As a rule, two neural networks are involved in these algorithms: a generator and a discriminator. The generator’s task is to produce an image that resembles a real one, which it does on the basis of previously seen data. The discriminator evaluates the generated data, compares it with real samples, and decides whether to accept or reject it.
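The generator/discriminator interplay can be illustrated with an extremely simplified numeric analogy. Here the “images” are single numbers, the discriminator is a fixed accept/reject rule (in a real GAN it is itself a learning network, and both sides train by gradient descent), and every rejection nudges the generator toward the real data distribution. All parameters are invented.

```python
import random

# Numeric analogy of a GAN loop: the "real data" is a bell curve around 5.0;
# the generator starts far away and is nudged closer on every rejection.
random.seed(1)

real_samples = [random.gauss(5.0, 1.0) for _ in range(1000)]
real_mean = sum(real_samples) / len(real_samples)

def discriminator(x, threshold=0.5):
    """Fixed stand-in for the discriminator: accept only realistic-looking samples."""
    return abs(x - real_mean) < threshold

gen_mean = 0.0  # the generator's "model" of the data, initially wrong
for _ in range(200):
    fake = random.gauss(gen_mean, 1.0)   # the generator proposes a sample
    if not discriminator(fake):
        # Rejected: move the generator a small step towards the real data.
        gen_mean += 0.05 * (real_mean - gen_mean)

print(gen_mean, real_mean)  # the two means end up close together
```

In a real GAN the same feedback shapes millions of network weights instead of one number, which is what lets the generator produce photorealistic images rather than bell-curve samples.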

As a result, we can get a fairly high-quality, believable image that is not copied from any single photograph but synthesized from patterns learned across many of them. “DeepFake” networks are also capable of combining pictures or transferring the style of one to another: you can, for example, get a portrait of a person composed of flowers and tree leaves.

Meanwhile, artificial intelligence can work with more than just images; machine learning is applicable to all types of data. It can, for example, synthesize the voice of a specific person or generate videos and texts. In the case of texts, a neural network can even copy the style of a particular author when composing prose or poetry.

In summary, even these three examples show that new AI technologies are generating a wide array of business use cases, with widespread economic and social ramifications as they develop.