They’re already taking our jobs and fighting our wars. Now, it turns out, computers are learning to impersonate humans in conversation.
At an event held at the Royal Society in London on Saturday, a computer program posing as a 13-year-old boy convinced one third of the people who interacted with it via a keyboard chat that it was, in fact, human. This is, of course, terrible news for humanity.
The event this weekend, organized by the University of Reading, was billed as the first successful effort to pass the “Turing test,” articulated by artificial intelligence and computing pioneer Alan Turing. In 1950, Turing suggested that a machine could be considered “intelligent” if one in three humans were unable to distinguish it from another human in conversation.
On Saturday, the program designed by a team including Princeton-based computer scientist Vladimir Veselov convinced one in three human partners that it was actually the fictional 13-year-old Eugene Goostman of Odessa, Ukraine. Veselov, who has come close to beating the test in years past, told The Independent, “We spent a lot of time developing a character with a believable personality.”
The test required the computer to engage in a five-minute text-based conversation with a number of different people. There were no restrictions on the content of the conversations or the kind of questions the participants could ask.
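For readers who want to see the arithmetic, the pass criterion boils down to a simple tally of how many judges were fooled. The sketch below is purely illustrative (the judge counts are hypothetical, not the event's raw data), and it is not the organizers' actual scoring code:

```python
# Purely illustrative sketch of the pass criterion described above:
# a program "passes" if at least one in three judges, after a
# five-minute text chat, believes they were talking to a human.
# The judge counts used here are hypothetical examples.

def fooled_enough(judges_fooled, judges_total, threshold=1/3):
    """Return True if the fraction of fooled judges meets the threshold."""
    return judges_fooled / judges_total >= threshold

print(fooled_enough(10, 30))  # 33% of judges fooled -> True
print(fooled_enough(5, 30))   # 17% of judges fooled -> False
```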
AI researchers have struggled to beat the test for decades, with gradually increasing success. The milestone just passed is obviously a major victory for the team that built the program. But it will also likely be a major wake-up call for computer scientists, ethicists, and others grappling with the ever-more-real advent of “intelligent machines.”
This is not to say that the world is in particular danger from a computer program that can convince one out of three people that it is actually a Ukrainian teenager. The problem is that, as this technology progresses, it’s a practical certainty that computers will be programmed to be increasingly persuasive in their efforts to impersonate – well, persons.
It’s not likely we’ll ever see a computer that’s able to consistently convince a woman that she is talking with her husband, or trick a father into thinking he’s having an extended conversation with his child. But it’s a much lower bar to convince people that they’re talking with anonymous credit card account managers at a call center in Omaha, or representatives of the federal government calling from Washington about a tax return.
Saturday’s results suggest that we’re not far from seeing another significant threat to both online privacy and financial security. Professor Kevin Warwick of the University of Reading told The Telegraph that the Eugene Goostman impersonation has “implications for society” and would serve as a “wake-up call to cybercrime.”
Of course, the real danger, as anyone with even a passing acquaintance with science fiction will immediately recognize, is an evil mastermind bent on world domination who equips an army of these things with computer programs approximating human intelligence.
So, before things get out of hand, let me just get this out there now:
I, for one, welcome our new robot overlords.