Now that I’ve got your attention, I want to question your motivation for clicking this article – are you deathly afraid of artificially intelligent, human-sized, parkour-enabled robots? Did you want to see what my argument is only to tell me to steer clear of movies and calm the $#*% down? Or maybe it’s curiosity – what’s this guy on about?
Well… before I start, I want you to check out this video by Boston Dynamics, a US robotics company, demonstrating the latest addition to their ‘Atlas’ robot;
This video left me TERRIFIED in a tight-chested, boot-trembling kind of way. But why? I’m a cyber security student; I’ve been around tech, code and robots my entire life, and yet seeing one in action left me incredibly anxious.
Turns out I’m not alone; a quick trawl through social media confirmed my suspicion;
This led to an interesting conversation among friends, starting with;
WHERE DOES THE FEAR COME FROM?
If you showed a 5-year-old the video of Atlas, I’d wager they’d love it. It’s awesome, right? Well it is! What Boston Dynamics have done with robotics and artificial intelligence is exceptional and a tremendously exciting feat for mankind. So why am I afraid of it? Is it because I know better? I’ve seen these things cause harm… where did I see that again?
Oh that’s right… ‘Hollywood’. Films and series like Terminator, Black Mirror and ‘I, Robot’, where humanoid or otherwise artificially intelligent robots are used as the ‘enforcers’, scare the pants off us – they’re designed to. And while it’s easy to brush off any concern caused by these films, it’s just as easy to let them shape your perception of the technology. Watching Atlas doing a backflip gave me flashbacks of hacked humanoids diving off a truck at action star Will Smith;
I’ll be the first to argue against the idea that media shapes human behaviour when it comes to video games and violence, but I struggled internally to hold to my own principles in this case because of how actively afraid Atlas made me.
A study by Anthony Leiserowitz of Yale University in 2004 titled ‘Before and After the Day After Tomorrow: A US Study of Climate Change Risk Perception’ (as the title would suggest) was aimed at understanding how the public perceived the risk of a natural disaster due to climate change having seen a film about it.
Leiserowitz wrote: “The study found that, on the whole, those who had seen the movie responded in a statistically significantly different (manner) than those who had not seen the movie. Respondents who had seen the movie were more concerned about global warming than non‐watchers, even after controlling for other demographic predictors.”
Now, if a film about natural disasters caused by global warming can affect how the general population perceives the threat, I’d argue the same principle can be applied to the threat of a humanoid uprising.
A second study by Sayantani Satpathi of the University of Oklahoma supported Leiserowitz’s research, stating that “When considering the impact of films it is important to consider the possibility that movies may have the capacity to socially amplify the viewer’s perceptions of risk outside the theater, particularly if people are getting information from time spent in the theater.”
While Satpathi’s study wasn’t particularly conclusive or damning, it posed the question and confirmed the possibility that these films and pop culture in general have the ability to sway our opinions on risk perception.
SO HOW SHOULD I FEEL THEN?
Contemplate for a minute the amount of good that robots such as Atlas could do in our world. Think about the firefighter that has to run into the burning building, hoping to make it out alive. Consider the lifeguard that has to put their life at risk in order to save someone drowning by the rocks.
These are two of two hundred scenarios where a humanoid or other physically capable robot could save lives without risking any. The question then raised during this conversation was: “What about job loss? What do the firefighters/lifeguards/divers/first responders do now?” It’s a genuine concern, but one that, to me, is outweighed by the possibility of saving lives. We’re talking about HUMAN LIFE.
Forget me; ask anyone, time and time again, whether they’d trade a job for a life – there’s a serious morality issue surrounding anyone who argues against it. But nonetheless, I’m not here to advocate for the immediate deployment of AI robots, rather for giving them a chance.
At a base level, our only reason to fear this technology is that the entertainment industry has scared us into it. I’d argue the way you should feel is… oddly… welcoming. In this case, the reward far outweighs the risk.
Inevitably, however, when this topic comes up, the next question is “Where does it stop? Are we going to see these robots as police officers, deciding who’s committing a crime?”, which leads me to…
SENTIENCE VS CONTROLLED LEARNING
Boston Dynamics is using artificial intelligence in their robots so that they can interpret their environment and navigate the most effective route to a destination. You can actually witness this tech inside their ‘SpotMini’ machine;
And while the SpotMini is thinking for itself, it’s incredibly limited in what that thinking, and the subsequent learning, actually covers. We’re not talking about learning human morality; we’re not even remotely close to machines classifying actions as harmless or criminal, as Hollywood suggests. This tech is significantly more along the lines of recovery missions in collapsed buildings, getting kittens out of sewers and finding lost hikers out in the fierce wilderness.
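To give a sense of how unscary “navigating the most effective route” really is: Boston Dynamics hasn’t published SpotMini’s planner, so this is purely a toy illustration of the general idea, not their actual method. A breadth-first search finding the shortest path across a grid with obstacles is the textbook version of this kind of thinking:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """Breadth-first search for the shortest route on a grid.
    grid: list of strings where '#' is an obstacle and '.' is free space.
    Returns the path as a list of (row, col) cells, or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        (r, c), path = queue.popleft()
        if (r, c) == goal:
            return path
        # Try the four neighbouring cells (down, up, right, left).
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != '#' and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), path + [(nr, nc)]))
    return None  # no route exists

# A wall in the middle column forces the route around the bottom.
grid = [
    ".#.",
    ".#.",
    "...",
]
route = shortest_path(grid, (0, 0), (0, 2))
```

That’s the level of “intelligence” involved: finding a way around obstacles, not judging people.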
So ultimately, yes – at face value these robots are terrifying; I understand that. I felt it as well. However, if you think about the life-saving prospects this technology offers, you should be significantly more optimistic than you might have been at first.
Studying Cyber Security and working for Macquarie Media Limited, John is a huge nerd with a passion for video games and computers.
You will often find him in the streets advocating for the benefits of gaming or just generally nerding out.
Feel free to email with any questions or comments: [email protected]