Artificial intelligence is not as smart as you and Elon Musk believe it is


2017-07-26 11:00:12







In March 2016, AlphaGo, a computer algorithm from the company DeepMind, defeated Lee Sedol, at that time the world's best player of the complex logic game Go. The event became one of the defining moments in the history of the technology industry, alongside the victory of IBM's Deep Blue computer over world chess champion Garry Kasparov and the 2011 win of IBM's Watson supercomputer on the quiz show Jeopardy!.

And yet, despite these victories, however impressive they may be, we are largely talking about trained algorithms and brute computing power rather than actual artificial intelligence. Rodney Brooks, a former professor of robotics at the Massachusetts Institute of Technology and a co-founder of iRobot and, later, Rethink Robotics, said that training an algorithm to play a complex strategic game is not intelligence. At least not the kind we attribute to humans.

The expert explained that however well AlphaGo performs its task, it is in fact incapable of anything else. Moreover, it is configured so that it can play Go only on a standard 19 x 19 board. In an interview with TechCrunch, Brooks said that he recently had a chance to talk with the DeepMind team and learned one interesting detail. When asked what would happen if the size of the tournament board were increased to 29 x 29 cells, the AlphaGo team admitted that even a slight change to the playing field "would be the end of us".

"I think people see how well the algorithm copes with one problem and immediately assume that it can perform others just as effectively. But the fact is, it can't," commented Brooks.


Brute-force intelligence

In May of this year, in an interview with Devin Coldewey at TechCrunch Disrupt, Kasparov noted that developing a computer that can play chess at world-championship level is one thing, but calling such a computer artificial intelligence is quite another, because it is nothing of the sort. It is just a machine throwing all of its computing power at a problem it happens to handle best.

"Chess machines win through deep calculation. They can become completely invincible given a huge database, very fast hardware and more refined algorithms. However, they lack understanding. They don't recognize strategic patterns. Machines have no purpose," said Kasparov.

Gill Pratt, CEO of the Toyota Research Institute, a Toyota division working on artificial intelligence projects such as home robots and driverless cars, also gave TechCrunch an interview at its Robotics Session event. In his view, the fear we hear from a wide range of people, including Elon Musk, who recently called artificial intelligence an "existential threat to humanity", may stem from nothing more than the dystopian descriptions of the world that science fiction offers us.

"Our current deep-learning systems are only as good at their tasks as we have made them. In fact, they are quite specialized and tiny in scale. So I think it is important, whenever this topic comes up, to mention both how good they are and how limited they actually are, and how far we are from the moment when such systems could begin to pose the threat Elon Musk and others are talking about," commented Pratt.

Brooks, in turn, noted at TechCrunch Robotics Session that people in general tend to assume that if an algorithm can cope with task "x", it must be as smart as a human.

"I think the reason people, including Elon Musk, make this mistake is the following. When we see a person cope with a task very well, we understand that they have high competence in the matter. It seems to me that people try to apply the same model to machine learning. And that is precisely the main error," says Brooks.

Facebook CEO Mark Zuckerberg held a live broadcast last Sunday in which he criticized Elon Musk's comments, calling them "pretty irresponsible". According to Zuckerberg, AI will be able to significantly improve our lives. Musk, in turn, chose not to stay silent and replied that Zuckerberg's understanding of AI is "limited". The topic is still open, and Musk has promised to respond in more detail to the attacks from his IT-industry peers later.

Incidentally, Musk is not the only one who thinks AI could be a potential threat. Physicist Stephen Hawking and philosopher Nick Bostrom have also expressed concern about the potential infiltration of artificial intelligence into everyday human life. But most likely they are talking about a more general artificial intelligence, the kind being pursued in labs such as Facebook AI Research, DeepMind and Maluuba, rather than the more specialized AI whose first rudiments we can already see today.

Brooks also notes that many AI critics do not even work in the field, and suggests that these people simply do not understand how difficult it is to find a solution for even a single task in this area.

"In fact, there are not that many people who consider AI an existential threat. Stephen Hawking, the British astrophysicist and astronomer Martin Rees... and a few others. The irony is that most of them share one trait: they do not even work in the field of artificial intelligence," said Brooks.

"For those of us who work with AI, it is obvious how hard it is to get anything to the level of a finished product."


The wrong idea of AI

Part of the problem also comes from the very fact that we call it "artificial intelligence". The truth is that this "intelligence" is nothing like human intelligence, which dictionaries usually describe as "the capacity for learning, understanding, and adapting to new situations".

Pascal Kaufmann, CEO of Starmind, a startup that helps other companies tap collective human intelligence to solve business problems, has been studying neurobiology for the last 15 years. The human brain and the computer, says Kaufmann, work quite differently, and it would be an obvious mistake to compare them.

"The analogy that the brain works like a computer is very dangerous and stands in the way of progress in AI," says Kaufmann.

The expert also believes that we will not get very far in understanding human intelligence if we keep thinking of it in terms of technology.

"It is a misconception that algorithms work like the human brain. People are fond of algorithms, and therefore they think the brain can be described with their help. I believe that is fundamentally wrong," adds Kaufmann.


If something goes wrong

There are many examples of AI algorithms being not as smart as we are accustomed to think. One of the most infamous is Tay, the AI chatbot created by Microsoft's AI development team, which got out of control last year. It took less than a day to turn the bot into an outright racist. Experts say this can happen to any AI system when it is offered bad examples to follow. In Tay's case, the bot came under the influence of racist and otherwise offensive language, and since it was programmed to "learn" and "mirror behaviour", it soon slipped out of the researchers' control.
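The failure mode is easy to reproduce in miniature. The sketch below is a hypothetical toy, not Microsoft's actual system: a bot whose only "learning" rule is to mirror the phrase it has heard most often. With no filter on its inputs, whoever talks to it the most decides what it says.

```python
from collections import Counter

class MirrorBot:
    """Toy chatbot that 'learns' only by mirroring what users say."""

    def __init__(self):
        self.heard = Counter()  # phrase -> how many times it was heard

    def listen(self, phrase: str) -> None:
        # Every input is absorbed verbatim -- there is no filtering step.
        self.heard[phrase] += 1

    def reply(self) -> str:
        # The bot parrots whatever it has heard most often, so a
        # coordinated group of users can steer its output at will.
        if not self.heard:
            return "Hello!"
        return self.heard.most_common(1)[0][0]

bot = MirrorBot()
for _ in range(3):
    bot.listen("AI is great")
for _ in range(10):
    bot.listen("[offensive phrase]")  # a troll campaign outvotes everyone else
print(bot.reply())  # prints "[offensive phrase]"
```

Tay's real learning pipeline was of course far more sophisticated, but the structural weakness was the same: the output tracks whatever the unvetted input happens to be.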

In widely circulated research, specialists at Cornell University and the University of Wyoming found that it is very easy to trick algorithms trained to identify digital images. The experts found that images which looked like "scrambled nonsense" to people were identified by the algorithm as pictures of everyday objects, such as a school bus.

According to an MIT Tech Review article describing the project, it is not entirely clear why the algorithm can be fooled in the way the researchers managed it. What is clear is that people have learned to recognize whether what is in front of them is a coherent picture or some obscure jumble. Algorithms, which analyze individual pixels, are easier to manipulate and deceive.
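The effect can be reproduced even with the simplest classifier. The sketch below is a toy stand-in with made-up data, not the deep networks from the Cornell/Wyoming study: it trains a 16-"pixel" logistic regression to tell dark images from bright ones, then nudges pure random noise along the model's own gradient until the model labels the noise a "bright image" with high confidence, although it is statistically nothing like the training images.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" classifier: logistic regression on 16-pixel inputs.
# Class 0 = dark images (mean 0.2), class 1 = bright images (mean 0.8).
X = np.vstack([rng.normal(0.2, 0.05, (200, 16)),
               rng.normal(0.8, 0.05, (200, 16))])
y = np.array([0] * 200 + [1] * 200)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
w, b = np.zeros(16), 0.0
for _ in range(500):                       # plain gradient descent
    p = sigmoid(X @ w + b)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * np.mean(p - y)

# Start from uniform random noise and push each pixel in the direction
# that raises the "bright image" score (gradient ascent on the input).
noise = rng.uniform(0, 1, 16)
for _ in range(200):
    p = sigmoid(noise @ w + b)
    noise += 0.1 * (1 - p) * w             # d(log p)/d(x) = (1 - p) * w
    noise = np.clip(noise, 0, 1)

conf = sigmoid(noise @ w + b)
print(f"classifier confidence on noise: {conf:.3f}")  # typically close to 1.0
```

The model only ever sees pixels, so anything that lines up with its learned weight vector scores high, regardless of whether a human would call it an image at all. The real study exploited the same gap at the scale of deep networks.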

With self-driving cars, everything is far more complicated. There are things a person understands instinctively when preparing for certain situations, and teaching a car to do the same will be very hard. In a long post published on a car blog in January of this year, Rodney Brooks gives several examples of such situations, including one describing a driverless car approaching a Stop sign next to a city pedestrian crossing, at which an adult stands talking with a child.

The algorithm will most likely be configured to wait for the pedestrians to cross the road. But what if those pedestrians never had any intention of crossing...

