As a person dedicated to artificial intelligence research, I often encounter the view that many people are afraid of AI and of what it might become. Given the history of humanity, and given what the entertainment industry feeds us, that fear is not really surprising: a cybernetic uprising that forces us to live hidden away while the machines, "Matrix"-style, turn the rest of us into human batteries.

And yet, when I look at the evolutionary computer models I use to develop AI, it is hard for me to believe that the harmless, innocent digital creatures on my screen could one day turn into the monsters of a futuristic dystopia. Could I really become a "destroyer of worlds," as Oppenheimer ruefully described himself after leading the program that created the nuclear bomb?

Perhaps I would accept that dubious honor, and perhaps the critics of my work are right after all. Maybe it is time for me to stop avoiding the question: what do I, as an expert in the field, actually fear about artificial intelligence?
The HAL 9000 computer, dreamed up by science fiction writer Arthur C. Clarke and brought to life by director Stanley Kubrick in the film "2001: A Space Odyssey," is a great example of a system that failed because of unforeseen circumstances.
In many complex systems, such as the Titanic, NASA's Space Shuttle, and the Chernobyl nuclear power plant, engineers had to combine many components. The architects of these systems may have known very well how each element worked on its own, but they did not understand well enough how all of those components would behave together.

The result was systems that were never fully understood even by their creators, and that failed accordingly. In each case, a ship sank, two shuttles exploded, and much of Europe and parts of Asia faced radioactive contamination: a set of relatively small problems that happened to coincide and together produced a catastrophe.
I can easily imagine how we, the creators of AI, could arrive at a similar result. We take the latest findings from cognitive science, translate them into computer algorithms, and bolt them onto existing systems. We are trying to engineer AI without fully understanding our own intelligence and consciousness.
Systems such as IBM's Watson or Google's AlphaGo are artificial neural networks with impressive computing power that can cope with genuinely hard problems. For now, though, the worst consequence of an error in their operation is losing the quiz show "Jeopardy!" or missing a chance to beat one of the world's best players at the board game Go.

These consequences are not global. In fact, the worst that can happen to a person here is losing some money on a bet.
Nevertheless, AI architectures are becoming more complex and computer processes faster. The capabilities of AI will only grow with time, and that will lead us to hand AI more and more responsibility, even as the risk of unintended consequences rises.

We know very well that "to err is human," so it will simply be impossible for us to create a system that is safe in every respect.
What worries me most is the unpredictability of the consequences in the AI I develop myself, using an approach called neuroevolution. I create virtual environments and populate them with digital creatures, giving their "brains" tasks of increasing difficulty to solve.

Over time these creatures get better at solving their tasks; they evolve. Those that handle a task best are selected for reproduction, and a new generation is built from them. Over many generations, these digital creatures develop cognitive abilities.
Right now, for example, we are taking the first steps, evolving machines that can perform simple navigation tasks, make simple decisions, or remember a few bits of information. But soon we will evolve machines that can carry out more complex tasks and show a much better general level of intelligence. Our ultimate goal is to create human-level intelligence.

Along the way, we will try to detect and fix errors and problems. With each new generation, the machines will handle the errors seen in previous generations better. That raises the chances that we will identify unintended consequences in simulation and eliminate them before they can ever be realized in the real world.
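The loop described above (evaluate a population, select the fittest, reproduce with small mutations) can be sketched in a few dozen lines of Python. This is only an illustration, not the actual research code: the toy task (learning XOR), the tiny network, and the evolutionary parameters are all invented for the example.

```python
# Minimal neuroevolution sketch: evolve the weights of tiny fixed-topology
# neural networks ("brains") by truncation selection plus Gaussian mutation.
import math
import random

random.seed(42)  # make the run repeatable

INPUTS, HIDDEN = 2, 4
N_HID_W = (INPUTS + 1) * HIDDEN          # hidden weights (+1 for a bias input)
N_WEIGHTS = N_HID_W + HIDDEN + 1         # plus output weights and output bias

# A toy stand-in for "simple decisions": the XOR function.
CASES = [((0.0, 0.0), 0.0), ((0.0, 1.0), 1.0),
         ((1.0, 0.0), 1.0), ((1.0, 1.0), 0.0)]

def forward(weights, x):
    """Run one creature's neural 'brain' on an input pair."""
    xb = list(x) + [1.0]                 # append constant bias input
    hidden = [math.tanh(sum(weights[h * (INPUTS + 1) + i] * xb[i]
                            for i in range(INPUTS + 1)))
              for h in range(HIDDEN)]
    out = sum(weights[N_HID_W + h] * v for h, v in enumerate(hidden + [1.0]))
    out = max(-60.0, min(60.0, out))     # keep exp() in a safe range
    return 1.0 / (1.0 + math.exp(-out))  # sigmoid output in (0, 1)

def fitness(weights):
    """Higher is better: negative squared error over all task cases."""
    return -sum((forward(weights, x) - y) ** 2 for x, y in CASES)

def evolve(pop_size=40, generations=200, mutation_scale=0.3):
    # Start from random brains.
    pop = [[random.gauss(0, 1) for _ in range(N_WEIGHTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]    # selection: the best half survives
        children = [[w + random.gauss(0, mutation_scale)  # mutated offspring
                     for w in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children         # next generation
    return max(pop, key=fitness)

best = evolve()
print("best fitness (0 is perfect):", round(fitness(best), 4))
```

Over the generations the best fitness climbs toward zero as the population discovers weight settings that approximate XOR; in the real research the environments, tasks, and selection schemes are of course far richer than this.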
The evolutionary method opens up another possibility: endowing artificial intelligence with ethics. It is likely that ethical and moral traits such as trustworthiness and altruism are themselves products of our evolution, and a factor in its continuation.

We could set up artificial environments and give machines abilities that let them demonstrate kindness, honesty, and empathy. That could be one way to make sure we raise obedient servants rather than ruthless killer robots. Still, while neuroevolution may reduce the level of unintended consequences in AI behavior, it cannot prevent the misuse of artificial intelligence.
As a scientist, I am bound by an obligation to the truth: to report what I discover in my experiments, whether I like the results or not. My job is not to decide what I like and what I don't; it only matters that I can publish my work.

Being a scientist does not mean abandoning my humanity. At some level I have to stay in touch with my hopes and fears. As a morally and politically motivated person, I have to consider the potential implications of my work and its possible effect on society.
As scientists, and as members of society, we have still not arrived at a clear idea of what we want from AI or what it should ultimately become. That is partly, of course, because we do not yet fully understand its potential. But we still need to decide, clearly, what we want from truly advanced artificial intelligence.
One of the areas people pay the most attention to in conversations about AI is employment. Robots already do hard physical work for us, such as assembling and welding car body parts. But one day robots will be assigned cognitive tasks, work that was once considered a uniquely human ability. Self-driving cars will replace taxi drivers; self-flying aircraft will no longer need pilots.

Instead of seeking help in emergency rooms forever staffed by exhausted doctors, patients could be examined and diagnosed by expert systems with immediate access to all of medical knowledge, and operated on by robots immune to fatigue, with a perfectly "steady hand."

Legal advice could come from a comprehensive legal database; investment advice from market-forecasting expert systems. Perhaps one day all human work will be done by machines. Even my own job could be done faster by a large number of machines tirelessly researching how to make machines smarter.
In our current society, automation is already pushing people out of their jobs, making the wealthy owners of the machines richer and everyone else poorer. This is not a scientific problem, though; it is a political and socioeconomic one, and society itself must solve it.

My research will not change that, but my political principles, together with the rest of humanity, may help create circumstances in which AI becomes broadly beneficial instead of widening the gap between the one-percent global elite and the rest of us even further.
That brings us to the last fear, the one embodied by the insane HAL 9000, the Terminator, and every other evil superintelligence. If AI keeps improving until it surpasses human intelligence, will a superintelligent system (or a set of such systems) regard humans as useless material? How will we justify our existence in the face of a superintelligence capable of doing and creating things no human can? Will we be able to avoid the fate of being wiped off the face of the Earth by the machines we helped create?
So the most important question in those circumstances becomes: why would an artificial superintelligence need us at all?
If it came to that, I would probably say that I am a good person who even contributed to the creation of the superintelligence now standing before me. I would appeal to its compassion and empathy, asking it to leave me, a compassionate and empathetic being, alive. I would also add...