Science fiction likes to portray robots as autonomous machines, able to make their own decisions and even display personality. Yet we have not let go of the idea that robots belong to us as property and have none of the rights people normally enjoy. But if a machine can think, decide, and act on its own, if it can be harmed or held responsible for its actions, should we stop treating it as property and start treating it as a person with rights?
And what if a robot suddenly became fully conscious? Would it have the same rights and legal protections we do, or at least something similar?
These and other questions are already being discussed by the European Parliament's Committee on Legal Affairs. Last year it released a draft report and a proposal to create a set of civil-law rules for robotics, regulating their production, use, autonomy, and impact on society.
Of the proposed legal solutions, the most interesting was a proposal to create a legal status of "electronic person" for the most sophisticated robots.
The report acknowledged that improvements in autonomous and cognitive robots make them more than mere tools, and that the usual rules of liability, such as contractual and tort liability, are insufficient to deal with them.
For example, the current EU directive on liability for damage caused by robots covers only foreseeable damage arising from manufacturing defects; in such cases the manufacturer is liable. But when robots can learn and adapt to their environment in unpredictable ways, it becomes much harder for a manufacturer to anticipate problems that could cause harm.
The report also raised the question of how sufficiently complex robots should be regarded: as ordinary people, as legal persons (such as corporations), as animals, or as objects. Rather than trying to cram them into an existing category, it proposed creating a new category of "electronic person" as more appropriate.
The report did not advocate immediate legislative action, however. Instead, it proposed updating the law once robots become more complex and acquire behavioural subtleties. If that happens, one of its recommendations is to reduce the liability of a robot's "creators" in proportion to the robot's autonomy, and to introduce compulsory insurance.
But why go so far as to create a new category of "electronic persons"? After all, computers are still a long way from human-level intelligence, if they ever reach it at all.
Robots, or more precisely the software they run on, are becoming ever more complex. Autonomous (or "emergent") machines are increasingly common. Disputes have already arisen over the legal capacities of autonomous devices. Can they perform surgery? Can you sue a robot surgeon?
While responsibility rests on the manufacturer's shoulders, this is not a particularly difficult problem. But what if the manufacturer cannot easily be identified, as in the case of open-source software? Whom do you sue when the software's creators number in the millions worldwide?
Artificial intelligence is also starting to live up to its name. Alan Turing, the father of modern computing, proposed a test under which a computer could be considered intelligent if it could fool a person into believing it was a living being. Machines are already close to passing this test.
The list of robot successes is already quite long: computers compose soundtracks for videos that are indistinguishable from music written by people, solve CAPTCHAs, write texts, and beat the world's best poker players.
Eventually, robots may catch up with humans in cognitive ability, and even become strikingly human-like, for example if they can "feel" pain. If this progress continues, self-aware robots will cease to be the stuff of fiction.
The EU report was one of the first on these issues, but other countries are also taking an active part. Yue-Xuan Wen of Peking University has said that Japan and South Korea expect us to be coexisting with robots by 2030. Japan's Ministry of Economy, Trade and Industry has drawn up a series of guidelines for businesses on safety with respect to next-generation robots.
If we do decide to give robots legal status, what should it be? If they behaved like humans, we could treat them as legal subjects rather than objects, or place them somewhere in between. Subjects have rights and obligations, which gives them legal "personhood". They need not be human individuals; a corporation is not a person, yet it is recognised as a legal subject. Legal objects, on the other hand, have no rights or duties, although they may have economic value.
Granting rights and duties to an inanimate object or a program independent of its creators may seem... strange. But we already see corporations treated as fictitious legal persons with rights and responsibilities.
Perhaps the approach to robots should be similar? If a robot (or program) is sophisticated enough to meet certain requirements, it could be granted a status like that of a corporation. This would allow it to earn money, pay taxes, own assets, and file lawsuits independently of its creators. The robot's creators would then be analogous to a corporation's directors.
Robots would in that case be treated as legal subjects, but unlike corporations they would have physical bodies. An "electronic person" could thus be a combination of subject and object of law.
We cannot yet create humanoid robots (androids) that are indistinguishable from biological humans, but that doesn't mean we aren't trying.
The newspaper Frankfurter Allgemeine Zeitung has for many years run photo shoots with various celebrities and businesspeople, who sit in an armchair, legs crossed, reading the latest issue of the publication.