Since ancient times, scientists such as Pascal and Leibniz have dreamed of machines able to see, understand the world, and interact with it. Writers and directors, including Jules Verne, Mary Shelley, George Lucas, and Steven Spielberg, imagined bold visions of such intelligent machines. In this episode we talk about machine learning, which has already managed to show that, somewhere deep inside, computers and robots are not so different from people.
What is machine learning? In a nutshell, it is the scientific discipline that tries to answer the question: "How can we program systems to learn automatically and to improve with new experience?" Learning in this context means not the acquisition of new knowledge, but the recognition of complex patterns and the making of intelligent decisions based on available data. The main difficulty stems from the fact that the set of all possible decisions, given all possible inputs, is too complex to describe explicitly. To solve this problem, machine learning develops algorithms that extract knowledge from specific data and experience, building on statistical and computational principles.
The history of machine learning itself is fascinating and spans more than 70 years. In 1946 the first computer system, ENIAC, was developed. In those days the word "computer" referred to a person who performed calculations on paper, so ENIAC was called a computing machine. It was operated manually: an operator had to wire the machine's components together to carry out each computation. At the time it seemed logical and achievable to endow such a machine with a human-like approach to learning and thinking.
In 1950, the British mathematician Alan Turing proposed a way to measure the performance of a learning machine. The "Turing test" is based on the following idea: we can conclude that a machine truly learns only if, while conversing with it, we cannot tell it apart from a human being. None of the systems of that era could pass the Turing test, but the high bar it set spurred inventors to create some very interesting machines.
In 1952, Arthur Samuel of IBM wrote a computer program that played checkers, setting himself the goal of giving it enough skill to challenge the world champion. Samuel's machine-learning program was a great success and even helped professional checkers players improve their game.
Another important milestone was ELIZA, developed in the early 1960s by Joseph Weizenbaum. ELIZA simulated a psychotherapist, using tricks such as word substitution and canned responses triggered by certain keywords. Encountering ELIZA for the first time, some people mistook it for a living person.
The illusion of real conversation was stronger when the user restricted the dialogue to talking about themselves and their lives. Although ELIZA was far from perfect, it became an early prototype of modern digital assistants such as Siri and Cortana. Another notable achievement was MYCIN, developed in the early 1970s at Stanford University by a team led by Ted Shortliffe. Through a chain of questions and answers, the system helped a medical professional reach a correct diagnosis and choose the most appropriate treatment. MYCIN is often called the world's first expert system.
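ELIZA's trick of keyword matching, word substitution, and canned responses can be sketched in a few lines. The rules below are invented for illustration, not Weizenbaum's actual script:

```python
import re

# A toy ELIZA-style responder: scan the input for a keyword pattern and reply
# with a canned template, echoing back part of what the user said.
# These three rules are illustrative inventions, not ELIZA's real rule set.
RULES = [
    (re.compile(r"\bi am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bi feel (.+)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.I), "Your {0} seems important to you."),
]
FALLBACK = "Please, go on."

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(respond("I am worried about my exams"))
# → Why do you say you are worried about my exams?
print(respond("Hello there"))
# → Please, go on.
```

As the article notes, the illusion breaks down quickly: the program has no model of meaning, only surface pattern matching, which is why it worked best when the user kept talking about themselves.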
Alongside expert systems, other approaches to machine learning emerged. In 1957 the American neurophysiologist Frank Rosenblatt developed the perceptron, a computational model of how the brain perceives information; it was later implemented in the electronic machine Mark I and became one of the first neural network models. On 23 June 1960, Cornell University demonstrated the first neurocomputer, the Mark I, which was able to recognize some letters of the English alphabet.
To "teach" the perceptron to classify images, a special iterative trial-and-error training procedure was developed, the error-correction method, which resembles the way humans learn. Moreover, when recognizing a particular letter, the perceptron could pick out the characteristic features of that letter, those occurring statistically most often, and ignore minor variations in individual examples. In this way the perceptron could generalize letters written in different handwriting into a single consolidated image.
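The error-correction rule can be shown in miniature: when the perceptron misclassifies an example, its weights are nudged toward the correct answer. The tiny dataset below (a logical AND on two inputs, standing in for letter features) is an illustrative choice, not the Mark I's actual task:

```python
# A minimal sketch of Rosenblatt's error-correction training rule.
# On each mistake, the weights move in the direction of the correct label.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            predicted = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            error = target - predicted          # 0 when correct, ±1 when wrong
            w[0] += lr * error * x[0]           # the error-correction update
            w[1] += lr * error * x[1]
            b += lr * error
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Toy "features": output 1 only when both inputs are present (logical AND).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x) for x, _ in data])  # → [0, 0, 0, 1]
```

The same trial-and-error loop, scaled up to 400 photocell inputs, is what let the Mark I generalize across handwriting styles.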
However, not everyone shared the conviction that neural networks were the right approach to machine learning. After the prominent scientist Marvin Minsky publicly criticized the concept, research shifted mainly to building machines programmed for specific tasks, which doomed the field to a stagnation that lasted more than a decade.
In the early nineties machine learning again became a very popular topic, thanks to the intersection of computer science and statistics. This synergy gave rise to a new way of thinking in artificial intelligence: the probabilistic approach, which bases its conclusions on volumes of data rather than on the hand-crafted rules of expert systems. Many of today's successful applications of machine learning grew out of ideas from that period.
An important aspect of machine learning is the phenomenon of Big Data. In the nineties it became obvious that the more statistical information a computing system is fed, the more likely it is to build up an accurate understanding of the data.
Thanks to the emergence of the Internet and the falling cost of storage devices, scientists now have at their disposal huge amounts of data that researchers fifty years ago could not have dreamed of. The volume of data grows exponentially. For example, biology today has about one exabyte of genome data, which equals 10^18 bytes. It is expected that by 2024 a new generation of radio telescopes will generate that much information every day. To process such huge volumes, a new scientific discipline was created, devoted to big data and its rapid search, analysis, and ranking.
One of the biggest successes of recent years is the collaboration between the scientist Geoffrey Hinton and the founder of ImageNet, Fei-Fei Li, who together made significant progress in developing deep learning. Using multilayer neural networks and the millions of images collected in ImageNet, the researchers got computers to perceive information not on the basis of logic, as adults do, but on the basis of sensory data, like a child discovering the world. As the scientists envisioned it, deep learning should make it possible to move away from supervised learning and give machines the ability to learn on their own, without human instruction.
An example confirming the validity of this approach was an experiment Google carried out in 2012, shortly after it engaged Geoffrey Hinton. The experiment involved 1,000 servers with about 16 thousand processor cores. During the tests a neural network analyzed 10 million frames from random YouTube videos and learned to identify images of cats in them with a high degree of accuracy. The experiment, conducted within the Google Brain project, proved that Hinton's approach to machine learning is sound and has very impressive commercial potential. For example, machine learning from large numbers of images is what makes Google's self-driving car project possible today.
Google's cat-recognition experiment
The impact of machine learning is felt most strongly when it is integrated with other artificial intelligence methods in ways never tried before. For example, the DeepMind project, also at Google, demonstrated amazing results by combining deep learning with a technique called reinforcement learning. The company created AlphaGo, a system that in March 2016 beat a champion at the Chinese board game Go. Unlike IBM's Deep Blue, the computer that won a chess match against Garry Kasparov in 1997, AlphaGo was not programmed with decision trees or hand-crafted equations for evaluating the board position. The system first studied the game by watching professionals play; based on those observations, it then played millions of games against itself, analyzing the results and building its own strategy.
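Reinforcement learning itself can be illustrated with something far simpler than AlphaGo: tabular Q-learning, where an agent improves a table of action values from the rewards of its own play. The five-cell corridor world below is an invented toy example, not anything DeepMind used:

```python
import random

# A minimal sketch of reinforcement learning (tabular Q-learning): the agent
# starts at cell 0 of a 5-cell corridor and earns a reward of 1 for reaching
# cell 4. It learns purely from its own trial-and-error play.
N_STATES, ACTIONS = 5, (-1, +1)          # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2    # learning rate, discount, exploration

random.seed(0)
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(200):
    state = 0
    while state != N_STATES - 1:
        # epsilon-greedy: mostly exploit the table, sometimes explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        # Q-learning update: move the estimate toward reward + discounted future
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the greedy policy marches straight toward the goal.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # → [1, 1, 1, 1]
```

AlphaGo replaced this small table with deep neural networks, but the underlying idea is the same: refine a strategy from the outcomes of one's own games.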
Today, machine learning is an active part of our lives and touches each of us, even if we do not notice it. It powers the product recommendations in online stores and the spam filtering in email. And when we call a company's customer support, we sometimes can hardly tell whether we are speaking with a live person or with a digital assistant that recognizes speech and answers questions with an understanding of context.
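The spam filtering mentioned above is a classic example of the probabilistic approach. A minimal sketch of a naive Bayes filter, trained on invented toy messages, might look like this:

```python
import math
from collections import Counter

# A minimal naive Bayes spam filter: estimate, per class, how likely each word
# is, then pick the class with the higher log-probability for a new message.
# The four training messages are invented toys.
train = [
    ("win cash prize now", "spam"),
    ("cheap pills win win", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch with the team", "ham"),
]

word_counts = {"spam": Counter(), "ham": Counter()}
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.split())

vocab = set(word_counts["spam"]) | set(word_counts["ham"])

def classify(text):
    scores = {}
    for label in ("spam", "ham"):
        total = sum(word_counts[label].values())
        # log prior + sum of log likelihoods, with add-one smoothing
        score = math.log(label_counts[label] / len(train))
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("win a cash prize"))     # → spam
print(classify("monday team meeting"))  # → ham
```

Real email filters use far larger vocabularies and more refined models, but the principle, letting word statistics in the data decide, is the same one the probabilistic turn of the nineties introduced.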
In addition to its obvious advantages, this phenomenon is also a major source of concern. The issue is not even that a machine learning without human supervision might one day want to destroy us. The negative impact of artificial inte...