Wikipedia is one of the most popular websites on the Internet (according to traffic data from Alexa Internet and SimilarWeb). Launched in 2001, the site today contains more than 40 million articles on a wide range of subjects, written in almost 300 languages by its own users. It would seem to be a haven of the free Internet. Beneath all this freely shared information and knowledge, however, a hidden cyberwar is quietly raging.
A study analyzing the edit history of Wikipedia's first ten years has shown that a large number of automated software "bots" (algorithms that edit articles) were caught up in seemingly endless disputes over the details of articles posted on the site. Each algorithm tries to have the last word, endlessly reapplying certain edits to a particular article.
"Fights between bots can last much longer than disputes between people. People usually cool down after a couple of days, but bots can keep going for years," says researcher Taha Yasseri of the University of Oxford.
In their study, Yasseri and his colleagues tracked edit conflicts on Wikipedia between 2001 and 2010. Although editorial bots showed very little activity in the site's early years, that activity increased dramatically as the platform and the technology behind automated bots evolved. The researchers note that by 2014 editorial bots accounted for about 15 percent of all edits across all language editions of Wikipedia, even though these algorithms make up only 0.1 percent of Wikipedia's total editor population.
Editorial bots perform a variety of tasks on the site. In addition to making editorial changes, they also deter vandalism (since anyone who registers can contribute information), compile lists of users to block, create links, check spelling, and automatically submit new content to the site.
These bots were originally created to help people, not only the site's editors but also its users. As it turned out, however, that helpfulness does not extend to interactions between the bots themselves.
The study cites as an example two bots, Xqbot and Darknessbot, which between them edited more than 3,600 articles on topics ranging from the Greek king Alexander to the British football club Aston Villa. Between 2009 and 2010, Xqbot reverted more than 2,000 edits made by Darknessbot. The latter, in turn, returned the favor, reverting about 1,700 editorial changes made by Xqbot. Another epic battle took place between the algorithms Tachikoma and Russbot, each of which reverted more than 1,000 of the other's edits.
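The revert cycle described above can be illustrated with a toy simulation: two rule-based bots, each hard-coded to prefer a different form of the same text, will revert each other indefinitely. The rules below are purely hypothetical and do not reflect the actual logic of Xqbot or Darknessbot; they only show why such a conflict never converges.

```python
def make_bot(preferred):
    """Return a bot that rewrites the article to its preferred form.

    Returns None when the article already matches, i.e. no edit is needed.
    """
    def edit(article):
        return preferred if article != preferred else None
    return edit

# Two bots with conflicting hard-coded preferences (hypothetical rules).
bot_a = make_bot("Alexander of Greece")
bot_b = make_bot("Alexander I of Greece")

article = "Alexander of Greece"
reverts = 0
for _ in range(10):             # each pass, both bots get a turn to edit
    for bot in (bot_b, bot_a):
        new = bot(article)
        if new is not None:     # the bot disagrees and "fixes" the article
            article = new
            reverts += 1

print(reverts)  # 20 reverts in 10 passes: the dispute never settles
```

Because neither bot has any notion of the other's intent, every "correction" by one is a fresh trigger for the other, which is exactly the feedback loop the researchers observed at scale.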
This rivalry came as a real surprise to the researchers, especially given that it was never intended by anyone, yet somehow emerged anyway.
"We really didn't expect to find anything interesting. Looked at individually, these bots are boring enough to put you to sleep," comments Yasseri.
"So the rivalry between them came as a big surprise to us. Don't misunderstand: these are great bots that handle their tasks perfectly. They were developed with good intentions and built on the same open-source technologies. But to see this! It is astonishing."
Another surprise for the researchers was how much the number of bot conflicts varied between language editions of the site. The German Wikipedia, for example, had the fewest conflicts overall: about 24 cases per bot on average over the ten-year period studied. The Portuguese Wikipedia saw the greatest number of clashes, averaging 185 conflicts per bot. The English Wikipedia fell in the middle, with an average of 105 conflict cases per bot over those ten years.
"We note that bots behave differently in different cultural environments, and the conflicts between them also differ greatly," explains Milena Tsvetkova, one of the study's participants.
"This information is very practical in nature. It can be useful not only in developing artificial agents, but also in studying their possible future behavior. I think we should dig deeper into this kind of bot sociology."
With automated AI systems becoming ever more powerful and widespread, and given the likelihood of cultural differences in how they are programmed, we should give such phenomena far more of the attention they deserve. Otherwise, our future may one day hinge on the outcome of one of these disputes.