The term "neural networks" is one of the most fashionable today, and it is often abused by marketers promoting their technology startups. Yet it is undeniable that the subject of this article is one of the most important phenomena of our time, without which modern life is impossible to imagine.
Which is more powerful, the human brain or the computer? For the vast majority the answer to this question seems obvious, and, arguably, correct. But if you look at the work scientists have done over the past decades, you will notice that many of them have tried to bring the operating principles of the computer closer to the way humans think, and not without success. How is that possible? With the help of neural networks: computer systems composed of hundreds, thousands, or millions of artificial brain cells that can learn and act on a principle very similar to how the brain works.
The human brain and the computer are often compared, and we have to admit they have a lot in common. A typical brain consists of about 100 billion microscopic cells called neurons. Each neuron consists of a cell body with connections branching off from it: many dendrites (the cell's input channels, which carry information to the cell body) and one axon (the cell's output channel, which carries information away). The computer's equivalent of a neuron is a nanoscale device called the transistor. Modern microprocessors used in computers and mobile devices contain more than two billion transistors.
Here, perhaps, the similarities between computers and human brains end and the differences begin. And it is not that a computer is a cold metal box filled with binary numbers while the brain is something warm, alive, and filled with thoughts, feelings, and memories. The real difference is that a computer consists of relatively simple serial connections, while the neurons in the brain are joined in complex parallel connections, with each neuron linked to roughly 10,000 of its neighbors.
On one side we have hundreds of millions of transistors combined in a simple, logical system; on the other, a hundred times more neurons interacting through complex, interwoven relationships. As a result, the process of human thinking is, at least at this stage, extremely difficult to study and even harder to recreate artificially. And that is exactly what the scientists Warren McCulloch and Walter Pitts were trying to do when, in the 1940s, they first formulated the concept of an artificial neural network.
Computers are designed to store large amounts of raw information that acquires sense and logic only if you enter exact instructions for processing it in advance. The human brain is slow and often needs several months to sort out something complicated. But unlike computers, we can spontaneously assemble information into intricate patterns; hence the roots of the creativity of Beethoven or Shakespeare: creating original patterns, forming unusual relationships, and perceiving things in a way that casts them in a new and unexpected light.
Wouldn't it be great if one day computers could manage information just as well? In fact, we are already moving toward this intriguing future, and very quickly. This has been made possible by several major advances in recent years: a significant increase in computer performance, the emergence of artificial neural networks and machine learning, and the rise of the Internet, with its huge volumes of data that serve as training material for artificial intelligence.
The basic idea of artificial neural networks is to copy the complex reciprocal connections between brain cells in an artificial brain, so that the machine can learn to recognize patterns and make decisions the way a human does. The terrific thing is that a neural network does not need to be programmed explicitly: it is designed for self-learning.
However, one can hardly say that a neural network is an exact artificial copy of the brain. It is important to note that such a network is first of all a computer simulation: these networks are created by programming conventional computers, built in the traditional way from ordinary transistors combined into logic circuits. They merely operate as though they were composed of billions of tiny brain cells working in parallel. A computer simulation is only a collection of algebraic variables and the mathematical equations that tie them together.
A typical artificial neural network consists of tens, hundreds, thousands, or even millions of artificial neurons, called units, arranged in layers, with each unit connected to the units in the layers on either side of it. Some of them are input units, designed to receive information from the outside world. These are connected to hidden units, which process the received data and make up most of the artificial brain. Finally, the output units deliver the processed information.
Each connection between units is characterized by a number called a weight, which can be positive (when one unit excites another) or negative (when one unit suppresses or inhibits another). The higher the weight, the more influence one unit has on another. This mirrors the way living brain cells influence each other.
Information flows through a neural network in two ways. When the network is being trained, or is operating normally after training, patterns of information are fed into it through the input units and eventually reach the output units. This common design is called a feedforward network. Not every unit fires, however. Each unit receives signals from the units feeding into it, and those signals are multiplied by the weights of the connections they travel along. Every unit adds up all the inputs it receives, and if the sum exceeds a certain threshold, the unit fires and excites the units it is connected to.
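The feedforward mechanism described above can be sketched in a few lines of code. This is a minimal illustration, assuming a made-up network with two input units, two hidden units, one output unit, and invented weight and threshold values; it is not any particular real system.

```python
# A minimal sketch of a feedforward pass with a threshold ("step") activation.
# Network shape and all numbers below are illustrative assumptions.

def step(total, threshold=0.5):
    """A unit 'fires' (outputs 1) only if its summed input exceeds the threshold."""
    return 1.0 if total > threshold else 0.0

def feedforward(inputs, w_hidden, w_output):
    # Each hidden unit sums its weighted inputs and fires if the sum is over threshold.
    hidden = [step(sum(x * w for x, w in zip(inputs, weights)))
              for weights in w_hidden]
    # The output unit does the same with the hidden units' signals.
    return step(sum(h * w for h, w in zip(hidden, w_output)))

# Hypothetical weights: positive weights excite, negative weights inhibit.
w_hidden = [[0.8, 0.4],    # weights into hidden unit 1
            [-0.6, 0.9]]   # weights into hidden unit 2
w_output = [1.0, 1.0]      # weights from the hidden units into the output unit

print(feedforward([1.0, 0.0], w_hidden, w_output))  # the output unit fires: 1.0
```

Note how the second hidden unit's negative weight means the first input inhibits it, exactly the excite/inhibit distinction the weights encode.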
For a neural network to learn, there must be feedback involved, just as children need to be told constantly what they are doing right and wrong. In fact, we use feedback all the time. Think back to how you learned to bowl: when you picked up the ball and rolled it down the lane, your brain monitored the ball's speed and trajectory. On the next throw you remembered how you rolled the ball last time and adjusted your movements to get a better result. In other words, you used feedback to compare the outcome with the one you wanted and to correct your actions accordingly.
Neural networks learn in a similar way, typically through a feedback process called backpropagation ("back-propagation of error"). It compares the output the network actually produced with the output it was supposed to produce, and uses the difference between them to modify the weights of the connections between units, working back from the output units through the hidden units toward the input units. Over time, backpropagation trains the network, steadily shrinking the difference between the actual and the desired output.
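The core of this error-driven learning can be shown on the simplest possible case: a single sigmoid unit adjusting its weights after each example. This is a deliberately tiny sketch of the idea, not full multi-layer backpropagation, and the task (learning the logical OR function), learning rate, and epoch count are all illustrative assumptions.

```python
import math

# Error-driven weight updates on a single sigmoid unit (the "delta rule",
# the one-unit special case of backpropagation). All numbers are illustrative.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Training data: learn the OR function (output 1 if either input is 1).
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.5  # learning rate: how big a step to take against the error

for epoch in range(2000):
    for inputs, target in data:
        out = sigmoid(sum(x * w for x, w in zip(inputs, weights)) + bias)
        # Error gradient of a sigmoid unit: (out - target) * out * (1 - out)
        delta = (out - target) * out * (1 - out)
        # Nudge each weight in the direction that reduces the error.
        weights = [w - rate * delta * x for w, x in zip(weights, inputs)]
        bias -= rate * delta

for inputs, target in data:
    out = sigmoid(sum(x * w for x, w in zip(inputs, weights)) + bias)
    print(inputs, round(out))
```

After training, the rounded outputs match the targets: the repeated comparison of actual versus desired output has pulled the weights into place. A real backpropagation pass applies this same correction layer by layer, from the output back toward the inputs.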
Once the network has been trained on enough examples, it reaches a stage where you can present it with an entirely new set of inputs it has never seen before and watch how it responds. For example, suppose you have shown the network a huge number of photos of chairs and tables, explaining the difference between these pieces of furniture as thoroughly as you can. You then show it a picture of a couch and wait for the result. Depending on how well you have trained it, the network will try to assign what it sees to the category "chair" or "table" based on its experience. The same process happens in the mind of a small child who sees an object for the first time and tries to match it against the concepts already familiar to him.
If you think about it, this one example is enough to see that artificial neural networks, as they evolve, can have many practical applications. For instance, aircraft manufacturers have already tested self-learning autopilot systems that are not programmed in advance but make decisions based on real-time signals from the instrument panel, steering the aircraft according to the information they receive.
Or take the banking sector. Imagine that you run a bank where thousands of credit card transactions happen every minute. A neural network can be used successfully to flag activity that looks fraudulent. As inputs you could use answers to questions such as: 1) Was the cardholder present in person? 2) Was the card's PIN used? 3) Have five or more transactions been made with the card in the last ten minutes? 4) Is the card being used in the country where it was issued?
Given enough such signals, the system can automatically mark a transaction as suspicious, after which bank staff can decide about...
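The four yes/no answers above map naturally onto binary inputs of a trained unit. The sketch below is purely hypothetical: the weights and the threshold are invented for illustration and are not taken from any real bank's fraud system, where such values would be learned from transaction data.

```python
# A hypothetical fraud check built from the four yes/no questions in the text.
# The weights and threshold are invented for illustration only.

def fraud_score(card_present, pin_used, burst_of_transactions, foreign_country):
    # Negative weights make a signal reassuring; positive weights make it suspicious.
    weights = {
        "card_present": -0.4,          # cardholder physically present: reassuring
        "pin_used": -0.3,              # correct PIN entered: reassuring
        "burst_of_transactions": 0.6,  # 5+ transactions in 10 minutes: suspicious
        "foreign_country": 0.5,        # card used outside the issuing country: suspicious
    }
    return (weights["card_present"] * card_present
            + weights["pin_used"] * pin_used
            + weights["burst_of_transactions"] * burst_of_transactions
            + weights["foreign_country"] * foreign_country)

def flag_transaction(*answers, threshold=0.5):
    """Mark the transaction as suspicious if the score crosses the threshold."""
    return fraud_score(*answers) > threshold

# Card absent, no PIN, many rapid transactions, used abroad: score 1.1 -> flagged.
print(flag_transaction(0, 0, 1, 1))   # True
# Cardholder present with PIN, normal activity: score -0.7 -> not flagged.
print(flag_transaction(1, 1, 0, 0))   # False
```

In a real deployment these weights would not be hand-picked: they would be learned, via the same feedback process described earlier, from a history of transactions labeled genuine or fraudulent.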