Why ethics is the most serious problem facing artificial intelligence


2017-03-09 15:00:11





Artificial intelligence is already everywhere, and it is here to stay. Many aspects of our lives are touched by it to one degree or another: it influences which books we buy, which flights we book, how our résumés fare, whether the bank grants us a loan, and which cancer treatment a patient receives. Many of these applications have not yet reached our side of the ocean, but they will.

All of these things, and many others, are now largely determined by complex software systems. AI's success over the past few years has been astonishing: it makes our lives better in many, many cases. Its rise was inevitable. Huge sums have been invested in AI startups, and many established tech companies, including giants such as Amazon, Facebook, and Microsoft, have opened new research laboratories. It is no exaggeration to say that software today means AI.

Some predict that the advent of AI will be as big an event as the advent of the Internet, or an even bigger one. The BBC asked experts what this rapidly changing world full of brilliant machines holds for us humans. Tellingly, almost all of their answers concerned ethical issues.

Peter Norvig, director of research at Google and a pioneer of machine learning, believes that data-driven AI raises a particularly important question: how do we ensure that these new systems improve society as a whole, and not just those who control them? "Artificial intelligence has proved very effective at practical tasks, from labeling images to understanding speech and written natural language to detecting diseases," he says. "The challenge now is to make sure everyone benefits from this technology."

The big problem is that the complexity of the software often makes it impossible to determine exactly why an AI does what it does. Because modern AI is built on the hugely successful technique of machine learning, you simply cannot open the hood and see how it works, so we have to take it on trust. The challenge is to come up with new ways of monitoring and auditing the many areas in which AI now plays a major role.

Jonathan Zittrain, a professor of Internet law at Harvard Law School, believes there is a danger that the growing complexity of computer systems will prevent us from giving them the scrutiny and oversight they need. "I am concerned about the erosion of human autonomy as our systems, aided by technology, become ever more complex and tightly interconnected," he says. "If we set a system up and forget about it, we may come to regret how it evolves and that no one weighed its ethical aspects earlier."

Other experts echo this concern. "How are we going to certify these systems as safe?" asks Missy Cummings, director of the Humans and Autonomy Laboratory at Duke University in North Carolina, one of the US Navy's first female fighter pilots and now an expert on drones.

AI will need oversight, but it is not yet clear how to provide it. "At the moment we have no generally accepted approaches," says Cummings. "And without an industry standard for testing such systems, it will be difficult to deploy these technologies widely."

But in a rapidly changing world, regulators often find themselves playing catch-up. In important areas such as criminal justice and health care, companies are already exploiting the efficiency of artificial intelligence to make decisions about parole or to diagnose disease. By handing decisions over to machines, we risk losing control: who will verify that the system is right in every case?

danah boyd, a senior researcher at Microsoft Research, says serious questions remain about the values being written into these systems, and about who is ultimately responsible for them. "Regulators, civil society, and social theorists increasingly want to see these technologies be fair and ethical, but those concepts are fuzzy at best," she says.

One area fraught with ethical problems is employment and the job market in general. AI is enabling robots to perform ever more complex work and to displace large numbers of people. China's Foxconn Technology Group, a supplier to Apple and Samsung, has announced plans to replace 60,000 factory workers with robots, and Ford's plant in Cologne, Germany, has put robots to work alongside people.

Moreover, if rising automation has a major impact on employment, it could harm people's mental health. "If you think about what gives people meaning in life, it's three things: meaningful relationships, passionate interests, and meaningful work," says Ezekiel Emanuel, a bioethicist and former health care adviser to Barack Obama. "Meaningful work is a very important part of someone's identity." He notes that in regions where jobs have been lost to factory closures, the risk of suicide, substance abuse, and depression has risen.

As a result, the need for ethicists is growing. "Companies will follow their market incentives; that is not a bad thing, but we cannot rely on them to behave ethically just because," says Kate Darling, a specialist in law and ethics at the Massachusetts Institute of Technology. "We have seen this happen every time a new technology has come along and we have tried to decide what to do with it."

Darling notes that many big-name companies, Google among them, have already established ethics boards to oversee the development and deployment of their AI, and she argues such boards should become more common. "We don't want to stifle innovation, but we do need structures of this kind," she says.

Details of who sits on Google's ethics board and what it actually does remain vague. But last September, Facebook, Google, and Amazon launched a consortium to develop solutions to the pitfalls surrounding AI safety and privacy. OpenAI is a similar organization, dedicated to developing and promoting open-source AI for the benefit of all. "It is important that machine learning be researched openly and spread through open publications and open-source code, so that we can all share in the rewards," says Norvig.

If we want to develop industry and ethical standards and to understand clearly what is at stake, we need to bring ethicists, technologists, and corporate leaders together. The point is not simply to replace people with robots, but to help people.

