How to fool artificial intelligence, and what this means


2017-04-14 16:30:22







It's 2022, and you're riding through the city in a self-driving car, as usual. The car approaches a stop sign it has driven past many times, but this time it doesn't stop. To you, the sign looks like every other stop sign. To the car, it is something entirely different. A few minutes earlier, without telling anyone, an attacker stuck a small patch on the sign, imperceptible to the human eye but impossible for the car's vision system to miss. A tiny sticker turned a stop sign into something that, to the machine, is no longer a stop sign at all.

All this may sound far-fetched. But a growing body of research shows that artificial intelligence can be deceived in exactly this way, by tiny details that are completely invisible to humans. As machine learning algorithms take on more responsibility on our roads, in our finances, and in our health care systems, computer scientists are trying to learn how to protect them from such attacks, before someone tries to fool them for real.

"This is a growing concern in the machine learning and AI community, especially because these algorithms are being used more and more," says Daniel Lowd, associate professor of computer and information science at the University of Oregon. "If spam gets through, or a few emails get blocked, it's not the end of the world. But if you rely on the vision system in a self-driving car to tell the car how to drive without crashing into anything, the stakes are much higher."

Whether the machine malfunctions or is deliberately compromised, the victim is the same: the machine learning algorithms that "see" the world. To such a machine, a panda can suddenly look like a gibbon, and a school bus like an ostrich.

In one experiment, scientists from France and Switzerland showed how such perturbations can make a computer mistake a squirrel for a gray fox, and a coffee pot for a parrot.

How is this possible? Think about how a child learns to recognize digits. Looking at examples one after another, the child begins to notice common features: ones are tall and slim, sixes and nines contain a single large loop, eights contain two, and so on. Once they have seen enough examples, they can quickly recognize new digits as fours, eights, or threes, even if, because of the font or the handwriting, a new digit doesn't look exactly like any four, eight, or three they have ever seen before.

Machine learning algorithms learn to read the world through a somewhat similar process. Scientists feed a computer hundreds or thousands of (usually labeled) examples of whatever they want it to detect. As the machine sifts through the data (this is a digit, this isn't, this is a digit, this isn't), it begins to notice the features that point to the right answer. Soon it can look at a picture and say "That's a five!" with high accuracy.
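The learning process described above can be sketched with a toy nearest-mean classifier. Everything here is invented for illustration: the 5x5 "images", the noise model, and the class names come from no real system, and real recognizers learn from thousands of examples rather than twenty.

```python
import numpy as np

# Hand-made 5x5 binary "images": a vertical bar for a one, a ring for a zero.
ONE = np.array([[0, 0, 1, 0, 0]] * 5, float)
ZERO = np.array([[0, 1, 1, 1, 0],
                 [0, 1, 0, 1, 0],
                 [0, 1, 0, 1, 0],
                 [0, 1, 0, 1, 0],
                 [0, 1, 1, 1, 0]], float)

def train(examples):
    """Average the pixels of each class: the 'features' the machine notices."""
    return {label: np.mean(imgs, axis=0) for label, imgs in examples.items()}

def classify(model, image):
    """Pick the class whose average image is closest in pixel space."""
    return min(model, key=lambda label: np.sum((model[label] - image) ** 2))

# A few noisy copies of each digit stand in for a labeled dataset.
rng = np.random.default_rng(0)
def noisy(img):
    return np.clip(img + rng.normal(0, 0.1, img.shape), 0, 1)

model = train({"one": [noisy(ONE) for _ in range(20)],
               "zero": [noisy(ZERO) for _ in range(20)]})

# A fresh, slightly different "one" is still recognized.
print(classify(model, noisy(ONE)))  # prints: one
```

The point of the sketch is the same one the article makes: the model never learns what a "one" means, only which pixel values tend to go with which label.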

In this way, both human children and computers can learn to recognize a huge range of objects, from digits to cats, from boats to individual human faces.

But unlike a human child, a computer pays no attention to high-level details, like the furry ears of a cat or the distinctive angular shape of a four. It doesn't see the whole picture.

Instead, it looks at the individual pixels of the image, and at the fastest way to tell objects apart. If the vast majority of ones have a black pixel at one particular point and white pixels at a few others, the machine will quickly learn to make its decision from just those few pixels.

Now back to the stop sign. By imperceptibly adjusting the pixels of an image (experts call this kind of interference a "perturbation"), you can fool the computer into thinking that a stop sign is, in fact, something else.
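A minimal sketch of such a perturbation, assuming the same kind of toy nearest-template classifier as above. The 8-pixel "sign images" are invented for the example; real attacks, such as the fast gradient sign method against neural networks, exploit the same general idea of nudging every pixel in a carefully chosen direction.

```python
import numpy as np

# Toy "images" of two road signs, flattened to 8 pixels each (invented data).
STOP = np.array([1, 1, 1, 1, 0, 0, 0, 0], float)
OTHER = np.array([0, 0, 0, 0, 1, 1, 1, 1], float)

def classify(image):
    # Nearest-template rule: the pixel-level shortcut described above.
    d_stop = np.sum((image - STOP) ** 2)
    d_other = np.sum((image - OTHER) ** 2)
    return "stop" if d_stop < d_other else "other"

sign = STOP.copy()

# The "sticker": a nudge to every pixel, individually modest, but all
# pointing systematically toward the other class.
eps = 0.6
hacked = sign + eps * np.sign(OTHER - STOP)

print(classify(sign))    # prints: stop
print(classify(hacked))  # prints: other
```

No single pixel change would fool the classifier; it is the coordinated pattern across many pixels that tips the decision, which is why the perturbation can stay visually subtle.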

Similar research, carried out in the Evolving Artificial Intelligence Laboratory at the University of Wyoming and at Cornell University, has produced quite a few optical illusions for artificial intelligence. These psychedelic images of abstract patterns and colors look like nothing in particular to people, but a computer readily recognizes them as snakes or rifles. This shows how an AI can look at something and fail to see the object, or see something else entirely in its place.

This weakness is common to all kinds of machine learning algorithms. "You would expect every algorithm to have a chink in its armor," says Yevgeniy Vorobeychik, associate professor of computer science at Vanderbilt University. "We live in a very complex, multidimensional world, and algorithms, by their nature, deal with only a small part of it."

Vorobeychik is "extremely confident" that if these vulnerabilities exist, someone will figure out how to exploit them. Someone probably already has.

Consider spam filters, the automated programs that weed out unwanted email. Spammers can try to get around this barrier by changing the spelling of words ("VI@gra" instead of "viagra") or by padding their messages with "good words" commonly found in legitimate mail, like "yeah", "me", or "happy". Meanwhile, they can try to remove words that frequently appear in spam, such as "mobile" or "win".
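A toy bag-of-words score makes both evasion tricks concrete. The word lists and weights below are invented for this sketch; a real filter is a statistical model trained on millions of messages, but it is vulnerable for the same reason: it scores the tokens it was trained on.

```python
from collections import Counter

# Invented word weights; a real filter learns these from training data.
SPAM_WORDS = {"viagra": 3.0, "win": 2.0, "mobile": 1.0}
GOOD_WORDS = {"yeah": -1.0, "happy": -1.0, "me": -0.5}

def spam_score(text):
    score = 0.0
    for word, count in Counter(text.lower().split()).items():
        score += count * SPAM_WORDS.get(word, 0.0)
        score += count * GOOD_WORDS.get(word, 0.0)
    return score

def is_spam(text, threshold=2.0):
    return spam_score(text) >= threshold

print(is_spam("buy viagra now"))          # True: 'viagra' is a known spam word
print(is_spam("buy VI@gra now"))          # False: the misspelling evades the lookup
print(is_spam("win a prize"))             # True
print(is_spam("win a prize yeah happy"))  # False: padded with 'good words'
```

"VI@gra" slips through because the filter matches exact tokens, and padding works because the "good words" pull the total score back under the threshold.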

Where might scammers get with this one day? A self-driving car deceived by a sticker on a stop sign is a classic scenario invented by experts in the field. Perturbed data could help pornography slip past safe-content filters. Others might try to inflate the amounts on checks. Hackers could tweak the code of malicious software so it evades detection.

Attackers can figure out how to craft data that passes if they get hold of a copy of the machine learning algorithm they want to fool. But they don't need to see inside the algorithm. They can simply break it with brute force, throwing slightly different versions of an email or an image at it until one slips through. Over time, this can even be used to build an entirely new model that learns what the good guys are looking for and what to produce in order to deceive them.
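The brute-force probing described above can be sketched as a black-box query loop. The filter, the blocked-word list, and the mutation rule here are all invented stand-ins; the point is that the attacker never sees the filter's internals, only its yes/no answers.

```python
import random

# A black-box filter: the attacker can query it but not inspect it.
# (Invented stand-in for a deployed spam filter.)
BLOCKED = {"viagra", "win", "prize"}

def filter_blocks(text):
    return any(word in BLOCKED for word in text.lower().split())

def evade(text, rng, budget=1000):
    """Mutate one character at a time until the message passes the filter.
    Every query leaks a little information about what the filter blocks."""
    words = text.split()
    for _ in range(budget):
        if not filter_blocks(" ".join(words)):
            return " ".join(words)
        i = rng.randrange(len(words))
        chars = list(words[i])
        j = rng.randrange(len(chars))
        # Crude obfuscation: swap 'a' for '@', otherwise tack on a dot.
        chars[j] = "@" if chars[j].lower() == "a" else chars[j] + "."
        words[i] = "".join(chars)
    return None  # query budget exhausted

result = evade("win a viagra prize", random.Random(0))
print(result)                 # an obfuscated variant of the message
print(filter_blocks(result))  # prints: False
```

A patient attacker can go further, recording every query and answer to train a substitute model of the filter, then crafting evasions against the substitute offline.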

"People have been manipulating machine learning systems ever since they were first introduced," says Patrick McDaniel, professor of computer science and engineering at Pennsylvania State University. "If people are using these techniques in the wild, we might not even know it."

And it isn't only scammers who could benefit from these methods: people could use them to escape the X-ray eyes of modern technology.

"If you're a political dissident under a repressive regime and you want to organize events without the security services knowing, you may need techniques for evading automated surveillance based on machine learning," says Lowd.

In one project, published in October, researchers at Carnegie Mellon University created a pair of glasses that can subtly mislead a facial recognition system, making the computer mistake actress Reese Witherspoon for Russell Crowe. It sounds ridiculous, but the technology could be useful to someone desperate to avoid scrutiny by those in power.

So what can be done about all this? "The only way to avoid the problem entirely is to build a perfect model that is always right," says Lowd. Even if we could build an artificial intelligence that surpassed humans in every respect, the world could still spring a nasty surprise on it in an unexpected place.

Machine learning algorithms are usually judged by their accuracy. A program that recognizes chairs 99 percent of the time is clearly better than one that recognizes six chairs out of ten. But some experts propose another way to evaluate an algorithm: by its ability to withstand an attack. The tougher the attack it survives, the better.

Another solution may be for experts to put these programs through their paces themselves. Create your own attack examples in the laboratory, based on what you believe criminals are capable of, then show them to the machine learning algorithm. This can help it become more robust over time, provided, of course, that the test attacks match the kind it will face in the real world.
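This idea of feeding lab-made attacks back to the algorithm, known as adversarial training, can be sketched on the same kind of toy nearest-mean classifier used earlier. The templates, labels, and attack rule are invented for the illustration.

```python
import numpy as np

def fit(data):
    """Nearest-mean 'training': average the examples of each class."""
    return {label: np.mean(xs, axis=0) for label, xs in data.items()}

def nearest_mean(model, x):
    return min(model, key=lambda label: np.sum((model[label] - x) ** 2))

def attack(x, target, eps=0.6):
    """Nudge x toward the target class: our lab-made attack."""
    return x + eps * np.sign(target - x)

A = np.array([1.0, 1.0, 0.0, 0.0])
B = np.array([0.0, 0.0, 1.0, 1.0])
data = {"A": [A], "B": [B]}
model = fit(data)

adv = attack(A, B)               # adversarial version of an "A"
print(nearest_mean(model, adv))  # prints: B -- the attack fools the plain model

# Adversarial training: add the attack with its TRUE label and refit.
data["A"].append(adv)
hardened = fit(data)
print(nearest_mean(hardened, adv))  # prints: A -- this attack no longer works
```

The caveat in the paragraph above applies directly: the hardened model only resists the attacks it was trained against, and a real attacker would simply adapt to the new model.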

"Machine learning systems are a tool for reasoning. We need to be smart and rational about what we give them and what they tell us," says McDaniel. "We shouldn't treat them as perfect oracles of truth."


