“A seemingly indestructible android is sent from 2029 to 1984 to assassinate a waitress, whose unborn son will lead humanity in a war against the machines, while a soldier from that war is sent to protect her at all costs” (Terminator 1984, IMDb)
Artificial Intelligence is not a new concept. Its storytelling origins go back well before the release of “Terminator”. However, it took a few decades for people to realize its true potential and for the technology to become accessible.
Alan Turing, the British mathematician and WWII code-breaker whose life was depicted in “The Imitation Game”, was among the first to propose the idea of machines that think, in 1950. He devised the Turing test, which is still used today to gauge a machine’s ability to think like a human. In 1959, American scientist Marvin Minsky picked up the journey and co-founded the AI laboratory at the Massachusetts Institute of Technology.
Why is AI suddenly accessible?
According to Accenture, 85% of executives plan to invest extensively in AI in the next three years. There are three major drivers accelerating accessibility to AI.
- Internet of Things: The proliferation of connected devices has resulted in an explosion of data, which is required to train AI systems.
- Big Data: Today, companies can store all structured and unstructured data collected from various sources in a central repository, where it can be analyzed.
- Computing Power: The infrastructure required to collect, process and analyze large amounts of data is readily available.
The reality is that AI is behind many things we encounter in our daily lives. For instance, Facebook is leveraging its vast amounts of data to train its AI to take down inappropriate content and to recognize images with no human intervention. Similarly, Google is training its AI to allow computers to see, listen and speak like humans.
Many industry leaders and scientists, including Stephen Hawking, Elon Musk and Steve Wozniak, have warned about the disastrous effects of autonomous robots. The fact is that we cannot “uninvent” a technology that is becoming part of our lives.
Every industry is already incorporating AI into its offerings to compete more effectively in the digital economy. The arms industry is no exception. As with nuclear weapons, every superpower will have to weaponize AI while hoping it never has to use it.
Machines vs. Hackers: The Role of Cybersecurity
The warfare of the future will consist of a new generation of soldiers known as “hackers” combating highly sophisticated “machines”.
Unlike human soldiers, smart machines are vulnerable to hacking, and taking over an adversary’s combat unit is a far more efficient strategy than mobilizing and risking resources to destroy it. As a result, governments will have to develop offensive and defensive cybersecurity measures to conduct the wars of the future effectively.
For instance, to counter the automation of key battlefield processes, hackers may create traps known as “honeypots” to trick adversaries, or launch denial-of-service attacks to shut down the machines. On the other side, machines will continuously evolve to counter these measures, learning how to recognize honeypots and filter out floods of messages.
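To make the honeypot idea concrete, here is a minimal sketch of one: a decoy service that accepts connections, presents a fake banner, and logs whatever an automated probe sends, so defenders can study the attacker’s behavior. The fake FTP banner, port choice and logging format are illustrative assumptions, not a production design.

```python
# Minimal honeypot sketch: a decoy TCP service that logs what probes send.
# The fake "220 FTP" banner is an illustrative lure, not a real FTP server.
import socket
import threading

def run_honeypot(host="127.0.0.1", port=0, max_conns=1):
    """Listen on a decoy port; record each probe's source IP and first bytes."""
    log = []  # collected (source_ip, payload) pairs
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))  # port=0 lets the OS pick a free port
    srv.listen()
    actual_port = srv.getsockname()[1]

    def serve():
        for _ in range(max_conns):
            conn, addr = srv.accept()
            with conn:
                conn.sendall(b"220 FTP server ready\r\n")  # decoy banner
                data = conn.recv(1024)                     # capture the probe
                log.append((addr[0], data))
        srv.close()

    t = threading.Thread(target=serve, daemon=True)
    t.start()
    return actual_port, log, t

# Simulate an automated probe hitting the decoy:
port, log, t = run_honeypot()
with socket.create_connection(("127.0.0.1", port)) as c:
    banner = c.recv(64)            # probe reads the fake banner
    c.sendall(b"USER admin\r\n")   # probe attempts a login
t.join(timeout=2)
```

Everything the probe sends ends up in `log`, which is exactly the point: the attacker reveals its tooling while interacting with a worthless target.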
The real danger arises with the emergence of robots that are completely autonomous: robots that are not managed by humans and therefore require no connection to the outside world or the internet. Such robots are technically far less vulnerable to hacking.
The Road to Terminator
Although AI offers limitless benefits, it also has the potential to go wrong. The arms race among nations will gradually push technology manufacturers to develop fully automated machines that need minimal input from the outside world, making them less vulnerable to hacking. Therefore, in my opinion, Terminator-like scenarios are very likely to happen if legislators and humanitarian organizations do not engage technology vendors to regulate an industry that can pose a threat to the human race.
What do you think?