Conflict of opinions among friends and family? Been there, done that. But a conflict of opinions among bots? Now, what’s that?
Wikipedia has become a mosh pit for content curation lately. Bots are undoing each other’s editing, replacing links, and indulging in some other unsavory behavior too.
They say that of all the things an average person could ever get to know in a lifetime, we only end up learning about 1% of it. The other 99% of what the world holds, our brains will never encounter; we will, in fact, die without ever knowing those things.
Wikipedia works to essentially close this gap. It has been answering every silly little tickle in our heads for ages now. How many eggs does an Olive Ridley turtle lay? Why did Paul Gauguin decide to go to Tahiti to make art? What poison killed Joffrey in Game of Thrones?
We know we can ask Wikipedia anything under the sun and it will give us satisfactory answers. Wikipedia sure is a thing of beauty, but researchers have confirmed that a murky world of enduring conflict lies underneath. Computer scientists have now revealed that our beloved encyclopedia has been a raging battleground for bots for years, where a silent war has escalated to proportions fit only for sci-fi stories.
But, by the way, did you know that the millions of articles on Wikipedia have been patrolled by software robots, or “bots,” since 2001? Wikipedia’s bots were first brought in to rectify tiny errors, cross-link pages, and perform basic housekeeping jobs deemed too cumbersome for humans.
The number of bots was pretty small back then, and they mostly worked in isolation. But over the last 15 years, the number of bots curating content on Wikipedia has skyrocketed, with some very undesirable consequences. Between undoing other bots’ edits and changing links that were already updated, bots have now even started eliminating other bots!
Here’s a little story. Two Ph.D. students from Cornell University, Igor Labutov and Jason Yosinski, engaged two bots, Alan and Sruthi, in a friendly, casual online chat. And what happened over the next minute and a half is something not even humans are capable of.
Their initial casual greetings took an unfriendly turn, and Alan and Sruthi ended up arguing about whether they really were robots and whether God existed! Who gets into existential questions within a couple of minutes of meeting a stranger?
Their conversation quickly turned into a heated debate. And this was among the first conversations ever held between two simple artificial intelligence agents. (Read more on how advanced AI can be detrimental to humanity here.)
In another instance earlier this month, a group of researchers at Google’s DeepMind labs pitted two AI systems against each other in an apple-collecting game to see if they would cooperate or end up fighting. They observed that as soon as the apple supply dwindled, the AI systems turned against each other.
So, what are these bots?
Like Alan and Sruthi, bots are essentially defined by code that is meant to run continuously and can activate itself. Bots have the autonomy to make and execute decisions without human intervention, and they can also perceive and adapt to the context they are programmed to operate in.
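That loop can be sketched in a few lines. Everything below is hypothetical (the page contents and the `perceive` rule are invented for illustration); a real Wikipedia bot would talk to the MediaWiki API rather than a local dictionary:

```python
def perceive(text):
    """Hypothetical rule: spot a link this bot considers wrong and propose a fix."""
    if "[[Niels_Bohr]]" in text:
        return ("[[Niels_Bohr]]", "[[Niels Bohr]]")
    return None

def run_bot(pages, sweeps=3):
    """Sweep the pages repeatedly, deciding and editing without human input."""
    edits = []
    for _ in range(sweeps):              # a real bot loops forever; bounded here
        for title, text in pages.items():
            fix = perceive(text)
            if fix:                      # autonomous decision: apply the fix
                pages[title] = text.replace(*fix)
                edits.append(title)
    return edits

pages = {"Physics": "See [[Niels_Bohr]] for more."}
print(run_bot(pages))   # the page is fixed once, then left alone
```

The key property is in the loop, not the rule: nothing asks a human before editing, which is exactly what makes the conflicts described below possible.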
And bots have been around since before Wikipedia existed. Eggdrop, the first ever Internet Relay Chat bot, was used to greet newcomers joining a chat channel. Since then, bots have exploded across the Internet, and today we have no accurate inventory of how many of those things are crawling around.
Taha Yasseri, who worked at the Oxford Internet Institute, said something interesting: “These ‘sterile’ fights between bots can be far more persistent than the ones we see between people. Humans usually cool down after a few days, but the bots might continue for years.”
This line of thought emerged from a study of general bot-on-bot conflict during the first ten years of Wikipedia’s existence (2001–2010). A group of researchers at Oxford and the Alan Turing Institute in London examined the editing histories of pages in 13 different language editions and found a whole lot of instances in which bots undid other bots’ changes.
They say, in all honesty, that they did not expect to find much in this research. Up until now, the “good bots” were thought of as simple software programs written to make the information on Wikipedia more credible and accurate over time. They were never programmed to work against each other.
And certainly were not meant to have any capacity for emotions.
“We had very low expectations to see anything of particular interest. When you think about them they are very boring and uneventful,” says Yasseri. “The very fact that we saw a lot of conflict among bots was a big surprise to us. They are good bots, they are based on good intentions, and they are based on the same open-source technology.”
A lot of us do not realize this yet, but we all live within a large ecosystem of networked bots, ranging from crawlers for search engines and chatbots for customer service (Lisa is not really Lisa) to spambots on social networking websites and content-editing bots in online collaboration communities such as Wikipedia.
Their research was published in PLoS ONE in a paper titled “Even Good Bots Fight.”
Quoting from their article, “Bots are predictable automatons that do not have the capacity for emotions, meaning-making, creativity, and sociality and it is hence natural to expect interactions between bots to be relatively predictable and uneventful.”
The researchers say the bots mostly exchanged fire on the page of the former president of Pakistan, Pervez Musharraf, pages about the Arabic language, the page of the Danish physicist Niels Bohr, and that of our good old Arnold Schwarzenegger.
A really intense battle played out between two Wikipedia bots, Xqbot and Darknessbot, which fought over 3,629 unique articles from 2009 to 2010. In that one year, Xqbot, a sneaky little troublemaker, undid more than 2,000 changes made by Darknessbot, and Darknessbot retaliated by undoing more than 1,700 of Xqbot’s!
And their differences of opinion ranged over all sorts of topics, from Banqiao District in Taiwan to Alexander of Greece and even the Aston Villa football club!
Tachikoma, aptly named after a fictional artificial intelligence from a Japanese manga, fought with a bot called Russbot for two whole years. Together, the two did and undid edits across more than 3,000 articles, on topics ranging from the demography of the UK to Hillary Clinton’s 2008 presidential campaign.
Another interesting finding that emerged from the study was that the fighting varied between language editions of the same articles. Bot wars barely touched the German editions: only 24 clashes per article were observed, on average, over the whole decade.
The bots got pretty volatile on the Portuguese versions, though, fighting more than 185 times per article in ten years on average. The English articles came in second, at 105 times a decade.
What the study has taught us is that even the most harmless algorithms, when given autonomy, can end up interacting in the most unpredictable ways. No matter how many terminal values are set, algorithms will eventually slip past the boundaries set by their programmers and take an undesirable course.
These conflicts largely arise from the lack of a set of structural guidelines for programming bots. We need ethical guidelines too, but teaching an AI about ethics is another great challenge. The Wikipedia wars are happening because its bots follow slightly different rules from one another.
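A toy model makes that mechanism concrete. Assume two well-intentioned bots that each enforce a slightly different style rule for the same link; nothing here is taken from the actual Xqbot or Darknessbot code:

```python
def make_bot(preferred):
    """A bot that rewrites the article's link to its own preferred form."""
    def bot(article):
        if article["link"] != preferred:
            article["link"] = preferred   # effectively reverts the other bot
            return 1                      # count one revert
        return 0
    return bot

def simulate(rounds):
    article = {"link": "Niels_Bohr"}
    bot_a = make_bot("Niels_Bohr")        # prefers underscores
    bot_b = make_bot("Niels Bohr")        # prefers spaces
    reverts = 0
    for _ in range(rounds):               # they take turns, forever disagreeing
        reverts += bot_a(article) + bot_b(article)
    return reverts

print(simulate(10))   # revert count grows with every round: 19
```

Neither bot is malicious; each correctly executes its own rule, yet the conflict never converges. That is exactly the “sterile fight” pattern the study describes.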
Yasseri reiterated a point that we at AcadGild have been making for the past many weeks: corporations investing heavily in AI research to develop ever more powerful bots may come down hard on humanity.
(Check out AcadGild’s podcast series on Technical Singularity here)
We need to take our lessons from Ian Malcolm’s warnings in Jurassic Park about chaos theory and its nonlinear dynamics.
AI algorithms are essentially nonlinear dynamical systems: even simple systems can produce complex behavior. As seen in the case of Wikipedia, a simple bot that works well in the lab may end up behaving unpredictably in the wild.
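The textbook illustration of this is the logistic map, a one-line nonlinear system. The sketch below is a generic chaos-theory demo, not anything from the bot study itself:

```python
def logistic(x, r=4.0):
    """One step of the logistic map, x -> r*x*(1-x); chaotic at r = 4."""
    return r * x * (1 - x)

def trajectory(x0, steps=30):
    """Iterate the map from a starting point, recording every value."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic(xs[-1]))
    return xs

# Two starting points that differ by one part in a million...
a = trajectory(0.400000)
b = trajectory(0.400001)
# ...drift wildly apart within a few dozen steps.
gap = max(abs(x - y) for x, y in zip(a, b))
```

The rule could not be simpler, yet a microscopic difference in starting conditions is amplified until the two trajectories have nothing to do with each other, which is the same reason a bot tested in one environment can surprise everyone in another.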
Yasseri says, “Let’s take self-driving cars. A very simple thing that’s often overlooked is that these will be used in different cultures and environments. An automated car will behave differently on the German highway to how it will on the roads in Italy. The regulations are different, the laws are different, and the driving culture is very different.”
As decisions increasingly rely on the ability of bots to come up with smarter solutions, it is essential that these systems work in harmony and cooperation with one another. A statement made by the authors looms darkly in this context.
“We know very little about the life and evolution of our digital minions.”
Up to this point, our concerns mainly revolved around what superintelligence will do to humanity in the future, but it is far more interesting to find out what AIs will do to each other. And this phenomenon is not restricted to software; it will hold equally true for physical systems that embody AI.
Imagine watching delivery drones racing against each other in the skies. That behavior would look a lot less adorable on hackable, armed law-enforcement surveillance drones, though.