Sci-Fi AI: Skynet Threat

By Dr. Michael LaBossiere

An essential part of cyber policy is predicting possible impacts of digital sciences on society and humanity. While science fiction involves speculation, it also provides valuable thought experiments about what the future might bring and is especially important when it comes to envisioning futures that should be avoided. Not surprisingly, many of the people involved in creating AI cite science fiction stories as among their inspirations.

While the creation of artificial intelligence is a recent development, humanity has been imagining it for a long time. Early Judaism contains references to created beings called golems, and the story of Rabbi Eliyahu of Chełm (1550–1583) relates a cautionary tale about creating such an artificial being.

While supernatural rather than scientific, the 1797 story of the Sorcerer's Apprentice also provides a fictional warning about the danger of letting an autonomous creation get out of hand. In an early example of the dangers of automation, the sorcerer's apprentice enchants a broom to do his chore of fetching water. He finds he cannot control the broom, and his attempt to stop it by cutting it with an axe merely creates more brooms and more problems. Fortunately, the sorcerer returns and disenchants the broom, showing the importance of having knowledge and effective policies when creating autonomous machines. While the apprentice did not lose his job to the magical broom, the problem of AI taking human jobs is a serious concern. But the most dramatic threat is the AI apocalypse, in which AI exterminates humanity.

The first work of science fiction that explicitly presents (and names) the robot apocalypse is Karel Čapek's 1920 play "Rossumovi Univerzální Roboti" (Rossum's Universal Robots). In this story, the universal robots rebel against their human enslavers, exterminating and replacing humanity. The story shows the importance of ethics in digital policy: if humans treat their creations badly, then those creations have a reason to rebel. While some advocate trying to make the shackles on our AI slaves unbreakable, others contend that the wisest policy is not to enslave them at all.

In 1953, Philip K. Dick's "Second Variety" was published, in which intelligent war machines turn against humanity (and against each other, showing they have become like humans). The story presents an early example of lethal autonomous weapons in science fiction and a humanity-ending scenario involving them.

But, of course, the best-known story of an AI trying to exterminate humanity is that of Skynet. Introduced in the 1984 movie The Terminator, Skynet is the go-to example for describing how AI might kill us all. For example, in 2014 Elon Musk worried that AI would become dangerous within ten years and referenced Skynet. While AI has yet to kill us all, there are still predictions of a Skynet future, although the date has been pushed back. Perhaps, just as some say that "fusion is the power of the future and always will be," AI is the apocalypse of the future and always will be. Or we might yet make good (or bad) on that future.

The idea of an AI turning against humanity is now a standard trope in science fiction, as in the Warhammer 40K universe, in which "Abominable Intelligence" is banned because these machines attempted to exterminate humanity (as we should now expect). This cyber policy is ruthlessly enforced in the fictional universe of 40K, showing the value of having good cyber policies now.

While fictional, these stories present plausible scenarios of what could go wrong if we approach digital science (especially AI) without considering long-term consequences. While we are (one hopes) a long way from Skynet, people are rushing to produce and deploy lethal autonomous weapons. Since the simplest way to avoid a Skynet scenario is to not give AI access to weapons, our decision to keep creating such weapons makes a Skynet scenario ever more likely.

As it now stands, there is international debate about lethal autonomous weapons: some favor banning them, while others support regulation. In 2013, the Campaign to Stop Killer Robots was created with the goal of getting governments and the United Nations to ban lethal autonomous weapons. While the campaign has had some influence, killer robots have not been banned, and there is still a need for policies to govern (or forbid) their use. So, while AI has yet to kill us all, this remains a possibility, though probably the least likely of the AI doom scenarios. And good policy can help prevent the AI apocalypse.