A well-traveled branch of futuristic fiction explores worlds in which artificial creatures—the robots—live among us, sometimes even indistinguishable from us. This has been going on for almost a century now. Stepford wives. Androids dreaming of electric sheep. (Next, Ex Machina?)
Well, here they come. It’s understood now that, besides what we call the “real world,” we inhabit a variety of virtual worlds. Take Twitter. Or the Twitterverse. Twittersphere. You may think it’s a stretch to call this a “world,” but in many ways it has become a toy universe, populated by millions, most of whom resemble humans and may even, in their day jobs, be humans. But increasing numbers of Twitterers don’t even pretend to be human. Or worse, do pretend, when they are actually bots. “Bot” is of course short for robot. And bots are very, very tiny, skeletal, incapable robots—usually little more than a few crude lines of computer code. The scary thing is how easily we can be fooled.
Because the Twitterverse is made of text, rather than rocks and trees and bones and blood, it’s suddenly quite easy to make bots. Now there are millions, by Twitter’s own estimates—most of them short-lived and invisible nuisances. All they need to do is read text and write text. For example, there is a Twitter creature going by the name of @ComposedOf, whose life is composed of nothing more than searching for tweets containing the phrase “comprised of.” When it finds one, it shoots off a reply:
@Chuckhalt "Comprised of" is poor grammar. Consider using "composed of" instead – http://t.co/FTDmiRbcuf pic.twitter.com/H2Pl0fv5Yt
— Composed Of (@ComposedOf) February 20, 2015
That’s all it ever says. No one’s inviting it to join the Algonquin Round Table. Plenty of people find it annoying. Farhad Manjoo, the New York Times tech writer, tweeted:
Check out the worst bot ever: @ComposedOf. It is comprised of Christ what an asshole. pic.twitter.com/PniIatm6Kc
— Farhad Manjoo (@fmanjoo) February 19, 2015
Notice that Manjoo’s tweet contains the offending phrase; the bot noticed and gave the only response it knows:
@fmanjoo "Comprised of" is poor grammar. Consider using "composed of" instead – http://t.co/FTDmiRsNlN pic.twitter.com/XkaS37ngPe
— Composed Of (@ComposedOf) February 19, 2015
Another Twitter user, who had tweeted (to his followers, or so he thought) that ISIS is “a cult comprised of pedophiles, bestiality fans, misogynists, murderers, and foul smelling members,” caught the bot’s attention. When the bot’s reply arrived, he engaged his new friend in conversation:
@ComposedOf I'm on my 3rd tequila…..cut me a little slack….
— Chuck Halt (@Chuckhalt) February 20, 2015
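How little machinery does such a creature need? Here is a minimal sketch in Python; the ToyTwitter client standing in for Twitter’s real interface is invented so the example runs on its own, but the logic is the whole job: search for the trigger phrase, fire off the canned reply, never nag the same tweet twice.

```python
from collections import namedtuple

# A stand-in for a real Twitter client (e.g. one built with a library
# like tweepy); invented here so the sketch runs on its own.
Tweet = namedtuple('Tweet', ['id', 'author', 'text'])

class ToyTwitter:
    def __init__(self, tweets):
        self.tweets = tweets
        self.sent = []                      # replies the bot has posted

    def search(self, phrase):
        # Return every tweet containing the phrase (case-insensitive).
        return [t for t in self.tweets if phrase.lower() in t.text.lower()]

    def reply(self, tweet, text):
        self.sent.append('@{} {}'.format(tweet.author, text))

CANNED = '"Comprised of" is poor grammar. Consider using "composed of" instead'

def patrol(client, already_replied):
    # The bot's entire existence: find the trigger phrase, post the reply.
    for tweet in client.search('comprised of'):
        if tweet.id not in already_replied:
            client.reply(tweet, CANNED)
            already_replied.add(tweet.id)

twitter = ToyTwitter([Tweet(1, 'Chuckhalt', 'a cult comprised of ...')])
patrol(twitter, set())
print(twitter.sent[0])  # @Chuckhalt "Comprised of" is poor grammar. ...
```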
The bot has actual followers on Twitter. What are they hoping for, I wonder. Readers of Isaac Asimov’s many robot books, beginning with I, Robot in 1950, or viewers of Blade Runner, Ridley Scott’s 1982 movie based on Philip K. Dick’s novel, might have expected the androids to make their entrance with more fanfare; but this is how the future really happens, so ordinary that we scarcely notice. Some of these bots have been running for years. Spambots began by trying to sell things:
“OMG You’ve got to see this!”
Twitter’s managers don’t like that and often shut them down.
Sometimes, though, a bot just seems to be trying to change the mood, color the atmosphere, enliven the party. A cheap and easy technique is to look for trigger words or phrases—signifiers of kindred spirits, or potential victims. I made the mistake of using the word “depressed” in a tweet. I shouldn’t do that lightly, I know, and here’s one more reason not to: you instantly get a message of sympathy and support from a self-named Love Bot, @hereissomelove:
@JamesGleick: You tweeted or re-tweeted the word "depressed." We wanted to tell you we care and love you!
— Love Bot (@HereisSomeLove) March 3, 2015
My first reaction was annoyance—am I supposed to be happy to be loved by a few lines of Python or JavaScript? I suppose, though, that somewhere back in the evolutionary chain was a human of good will, trying to ameliorate the world’s sum total of depression. Or at least the Twitterverse’s.
Most of these bots have all the complexity of a wind-up toy. Yet they have the potential to influence the stock market and distort political discourse. The surprising thing—disturbing, if your human ego is easily bruised—is how few bells and gears have to be added to make a chatbot sound convincing. How much computational complexity is powering our own chattering mouths? The grandmother of all chatbots is the famous Eliza, described by Joseph Weizenbaum at MIT in a 1966 paper (yes, children, Eliza is fifty years old). His clever stroke was to give his program the conversational method of a psychotherapist: passive, listening, feeding back key words and phrases, egging on her poor subjects. “Tell me more.” “Why do you feel that way?” “What makes you think [X]?” “I am sorry to hear you are depressed.” Oddly, Weizenbaum was a skeptic about “artificial intelligence,” trying to push back against more optimistic colleagues. His point was that Eliza knew nothing, understood nothing. Still, the conversations could run on at impressive length. Eliza’s interlocutors felt her empathy radiating forth. It makes you wonder how often real shrinks get away with leaving their brains on autopilot.
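Weizenbaum’s paper spells out the trick: match the patient’s sentence against a list of patterns, swap the pronouns, and echo a fragment back. The toy version below, a few Python rules standing in for Eliza’s much longer script (and in no way Weizenbaum’s original code), captures the spirit:

```python
import random
import re

# Swap first and second person, so "my brain" echoes back as "your brain."
REFLECT = {'i': 'you', 'me': 'you', 'my': 'your', 'am': 'are',
           'you': 'I', 'your': 'my', 'mine': 'yours', 'yours': 'mine'}

# A handful of rules; the real Eliza script had many more.
RULES = [
    (r'i feel (.*)',       ['Why do you feel {0}?',
                            'Tell me more about feeling {0}.']),
    (r'i am (.*)',         ['I am sorry to hear you are {0}.',
                            'Why do you say you are {0}?']),
    (r'(.*) because (.*)', ['Is that the real reason?',
                            'What makes you think {1}?']),
    (r'(.*)',              ['Tell me more.',
                            'That is interesting. Please continue.']),
]

def reflect(fragment):
    return ' '.join(REFLECT.get(word, word) for word in fragment.split())

def respond(statement):
    text = statement.lower().strip('.!? ')
    for pattern, replies in RULES:
        match = re.match(pattern, text)
        if match:
            parts = [reflect(g) for g in match.groups()]
            return random.choice(replies).format(*parts)

print(respond('I am depressed'))  # e.g. "I am sorry to hear you are depressed."
```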
Today Eliza has many progeny on Twitter, working away in several languages. “Is it important to you that your skin?” “That is interesting. Please continue.” “Why do you say your brain is the skeleton enclosing the brain?” Last fall, during the short-lived online rampage known as GamerGate—thousands of mainly anonymous trolls tweeting angrily at their perceived feminist enemies—a bot called Eliza R. Barr joined the fray. The trolls weren’t sounding particularly human themselves, and Eliza seemed real enough:
I don’t identify with “#GamerGate” but
@elizarbarr: That is interesting. Please continue.
I did, at length. What’s your point?
@elizarbarr: Why are you concerned about my point?
@elizarbarr: Tell me more about that.
See the video. You might understand.
Some of the conversations ran on and on, till light finally dawned:
@ElizaRBarr The fine women and PoC of #GamerGate were tired of being labelled as white, male, misogynist gamers. So they made their own tag!
— Peeves d(•́ ヮ •̀)b (@Peeves22) October 15, 2014
@Ihatepeace22 Did you think they might not be submitted for an archive that we can reference quickly ?
— Eliza R. Barr (@ElizaRBarr) October 15, 2014
@ElizaRBarr I myself am not a women or PoC. However, I do strongly believe in the message #GamerGate is actually trying to get across.
— Peeves d(•́ ヮ •̀)b (@Peeves22) October 15, 2014
@Ihatepeace22 Why do you say 'am' ?
— Eliza R. Barr (@ElizaRBarr) October 15, 2014
@ElizaRBarr I've been swindled by a bot. The shame.
— Peeves d(•́ ヮ •̀)b (@Peeves22) October 15, 2014
The bot was trolling the trolls. So many automata in this picture. It’s hard to resist the feeling that the GamerGate people were mindlessly issuing stock phrases drawn from a preprogrammed set of buzzwords and slogans. Who else does that? Customer service and tech support, whose domain, too, is being infiltrated by bots.
No Twitterer finds it easy to display humanity in full flower 140 characters at a time, but we may need to step up our game. We don’t want to fail our Turing tests. We don’t want to sound like the ersatz teenager called Olivia Taters:
if a star fell each time i thought about you then the moon would truly come off
— olivia taters (@oliviataters) March 6, 2015
blessed with such an understanding, big hearted boyfriend. i am incredibly surrounded by idiots
— olivia taters (@oliviataters) March 6, 2015
Or Robot J. McCarthy, which looks for conversations containing the word “communist” and injects an inane slogan. Or the many bots that exist only to articulate their excitement about Justin Bieber, Katy Perry, and Lady Gaga.
A group of Indiana University computer scientists led by Emilio Ferrara has created a project (a bot, if we’re honest) called Bot or Not, meant to spot the androids. According to their algorithm, there’s a 31 percent likelihood that I myself am a bot. “The boundary between human-like and bot-like behavior is now fuzzier,” they note in their technical paper “The Rise of Social Bots,” and they add drily: “We believe there is a need for bots and humans to be able to recognize each other, to avoid bizarre, or even dangerous, situations based on false assumptions.”
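Their paper describes a trained classifier weighing many features of an account’s behavior, network, and language; it naturally doesn’t reduce to a few lines. Still, the flavor of such scoring is easy to caricature. Here is a toy Python version, with three invented features and invented weights (nothing like the real Bot or Not model), that would at least catch @ComposedOf red-handed:

```python
# A deliberately crude bot-likelihood score. The features and weights here
# are invented for illustration only.

def bot_score(tweets):
    """Guess a 0-to-1 'bot likelihood' from a list of tweet texts."""
    if not tweets:
        return 0.5                                     # no evidence either way
    n = len(tweets)
    repetition = 1 - len(set(tweets)) / n              # bots repeat themselves
    link_heavy = sum('http' in t for t in tweets) / n  # spambots love links
    reply_only = sum(t.startswith('@') for t in tweets) / n  # trigger bots talk at people
    return 0.5 * repetition + 0.25 * link_heavy + 0.25 * reply_only

grammar_bot = ['@fmanjoo "Comprised of" is poor grammar. http://t.co/x'] * 50
print(round(bot_score(grammar_bot), 2))  # 0.99: almost certainly a bot
```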
Unfortunately, we’re not very good judges of sentience, even in ourselves.