People don't trust AI. How can we fix that?

Date:

2018-01-16 17:00:09


Artificial intelligence can already predict the future. Police use it to generate maps of when and where crimes are likely to occur. Doctors use it to predict when a patient is at risk of a stroke or heart attack. Scientists are even trying to give AI imagination so that it can anticipate unexpected events.

Many decisions in our lives require good forecasts, and AI agents almost always make them better than people do. Yet despite all these technological advances, we still lack confidence in the predictions that artificial intelligence produces. People are not used to relying on AI and prefer to trust human experts, even when those experts are wrong.

If we want artificial intelligence to benefit people, we need to learn to trust it. And for that, we need to understand why people so persistently refuse to trust AI.


Trusting Dr. Robot

IBM's attempt to put its supercomputing program Watson for Oncology in the hands of oncologists fell flat. The AI promised to deliver high-quality treatment recommendations for 12 types of cancer, which account for 80% of cases worldwide. To date, more than 14,000 patients have received recommendations based on its calculations.

But when doctors first encountered Watson, they found themselves in a rather difficult situation. On the one hand, when Watson's treatment recommendations coincided with their own views, doctors saw little value in the AI's advice. The supercomputer was simply telling them what they already knew, and its recommendations did not change the actual treatment. They may have given doctors peace of mind and confidence in their own decisions, but IBM has not shown that Watson actually improves cancer survival rates.

On the other hand, when Watson's recommendations contradicted the experts', doctors concluded that Watson was incompetent. And the machine could not explain why its proposed treatment should work, because its machine-learning algorithms were too complex for people to understand. This bred even greater distrust, and many doctors simply ignored the AI's recommendations, relying instead on their own experience.

As a result, IBM Watson's main medical partner, MD Anderson Cancer Center, recently announced it was dropping the program. A Danish hospital also reported abandoning the program after finding that its oncologists disagreed with Watson in two cases out of three.

The problem with Watson for Oncology was that doctors simply did not trust it. Trust between people often depends on our understanding of how the other person thinks, and experience strengthens confidence in their judgment. That creates a psychological sense of security. AI, by contrast, is still relatively new and confusing to people. It makes decisions through complex analysis, identifying potentially hidden patterns and weak signals in large amounts of data.

Even when it can be explained in technical terms, the decision-making process of AI is usually too complex for most people to understand. Interacting with something we don't understand can cause anxiety and create a sense of losing control. Many people simply don't know how AI works, because it all happens somewhere behind the screen, in the background.

For this reason, people notice AI's mistakes all the more sharply: recall the Google algorithm that classified black people as gorillas; Microsoft's chatbot, which turned into a Nazi in less than a day; the Tesla operating on autopilot that was involved in a fatal accident. These unfortunate examples received disproportionate media attention, reinforcing the narrative that we cannot rely on technology. Machine learning is not 100% reliable, partly because the people who design it are not.


A divided society?

Feelings about artificial intelligence run deep in human nature. Scientists recently conducted an experiment in which they surveyed people who had watched (science-fiction) films about artificial intelligence, asking them about automation in everyday life. It turned out that regardless of whether the film portrayed AI positively or negatively, merely watching a cinematic vision of our technological future polarized the participants' attitudes. Optimists became even more optimistic, while skeptics closed off even further.

This suggests that people's prejudice toward AI stems from a deeply rooted tendency known as confirmation bias: the inclination to seek out or interpret information in a way that confirms pre-existing beliefs. With AI constantly in the media, this can foster a deeply divided society, split between those who use AI and those who reject it. The predominant group could gain a serious advantage, or suffer a serious handicap.


Three ways out of the AI trust crisis

Fortunately, we have some ideas about how to address the trust problem. Simply having hands-on experience with AI can significantly improve people's attitude toward the technology. There is also evidence that the more often you use a given technology (the Internet, for example), the more you trust it.

Another solution is to open the "black box" of machine-learning algorithms and make their operation more transparent. Companies such as Google, Airbnb and Twitter already publish transparency reports on government requests and disclosures. A similar practice for AI systems would help people gain a proper understanding of how algorithms make decisions.

Studies show that involving people in the AI's decision-making process increases trust and allows the AI to learn from human experience. One study found that people who were given the opportunity to slightly modify an algorithm felt greater satisfaction with the results of its work, apparently because of a sense of superiority and influence over the future outcome.

We don't need to understand the complex inner workings of AI systems. But if people are given even a little information about, and control over, how these systems are implemented, they will have more confidence in them and more desire to welcome AI into everyday life.

