Blog entry by: Mausam.
Recently a large number of researchers from across the world signed a petition calling for a halt to the research and development of intelligent weapons. In addition to prominent AI researchers like Stuart Russell (Berkeley) and Eric Horvitz (Microsoft Research), several other famous supporters and detractors of AI, such as Stephen Hawking, Noam Chomsky, and Elon Musk, are among the almost 20,000 signatories. I signed it too.
The 2015 International Joint Conference on Artificial Intelligence (IJCAI) dedicated a full panel to the issue. Several press releases and newspaper articles discussed it. Since then, AI technology has become the subject of an even larger debate. (It has always been one of a tech-debater’s first loves!) Is all of AI bad? Why are scientists building such weapons? Why do people do AI research at all? Will AI one day become synonymous with weapons, the way nuclear energy almost has today? Will robots one day control or kill all of mankind?
The questions are complex and the arguments many. People in favor of AI weapons make some reasonable points. Many give the bomb analogy – bombs are great when used as dynamite to build roads, but bad when used to kill people. The central argument is that technology by itself is value-neutral and should not be ascribed an inherent value; its usage or application, however, is value-laden. This kind of argument typically absolves research and researchers, transferring the blame entirely onto policymakers, who we all know are the most just and objective people who ever lived…not! (For a funny but highly discomforting account of U.S. policies on drone operations, see this John Oliver video on drones. There are people who are scared of blue skies, because currently drone strikes happen only in clear weather; but, of course, it’s only a matter of time.)
Others argue that intelligent weapons will have the ability to track and kill just the right persons, reducing mass destruction – that can only be good news. I personally find a lot of merit in this argument – while killing is mostly bad, killing innocents is sinful. Most of us are comfortable killing those who don’t value human life, as long as we ensure that the innocent aren’t hurt. Finally, a few others, usually policymakers and nationalists, love to harp on arguments about defending the country against terrorists and neighbors. I may not like the rhetoric, but is their assessment misplaced? Indeed, neither gender crimes nor religious terrorism nor national vendettas seem to have declined of late. Everyone wants a sense of safety and security. So, then, why not? But I believe it is important to look at these questions from a long-term and holistic perspective. There will be interim consequences of intelligent weapons, and then there will be long-term ones.
In the interim, the country making such weapons will definitely have an edge. It will have technology that could prove to be the difference between success and failure, victory and defeat, life and death. But it will also be a technology that lowers the barrier to waging war. Why do we avoid wars? Because not only do they kill the enemy, they kill us too. This dissuades both countries, even the stronger one. What happens after intelligent weapons? They would mean little or no destruction for the countries that wield them. That would allow, in fact encourage, impulsive action without negative consequences. With the fear of losing their own soldiers gone, is it not possible that governments could be tempted into retaliation over small provocations? This could lead to many more wars.
As a citizen of a developing country, this bleak future scares me deeply. What if we all had to live with the same fears that people worried about drone strikes live with today? What if most countries lost the ability to defend themselves because a few developed countries built ruthless weapons that didn’t need citizens to man the warfront? What if those countries then set off on a traveling-imperialist problem instead of a traveling-salesman one, and the last few hundred years of history repeated themselves? Of course, this account is somewhat hyperbolic, but the dangers are real. Lowering the entry barrier for waging war cannot be good for anyone!
But even if this future, in which a small number of nations own such technology, is short-lived, there are issues. Intelligence is much more a software phenomenon than a hardware one. The weapons will slowly become readily available, since piracy of software technology is not easy to control. Slowly, everyone will start owning such weapons – not just countries but also individuals, since these will be cheap, easy-to-obtain pieces of equipment that pose little danger to their owners. Imagine U.S. gun control laws gone wilder! The good will have them, the bad will have them, and so will the ugly. These will not be like the super-expensive nuclear weapons, which are carefully guarded by all countries (for fun, watch this eye-opening video on U.S. security of their nuclear weapons). The barrier to using such weapons will have been lowered for everyone.
Just the thought of such a world makes me shudder. It is hard to visualise the exact outcome of such an arms race. But it seems that the far-fetched science fiction of a coordinated army of robots taking over and killing all humans could become a reality. The main difference, however, is that it won’t be caused by the intentions of robots; it would be caused by people using these weapons as slaves to further their individual interests.
Artificial intelligence has continually given us technology of amazing value to people. Machine learning has led to novel scientific discoveries, aided the design of drugs, rejected fraudulent financial transactions, and much more. Intelligent web search has put hoards of information at our fingertips. Social networks have brought long-lost friends together. Other subfields of AI have proven mathematical theorems, enhanced border security, and assisted in interplanetary travel. The virtues of AI technology have been many and far-reaching. But its application to automated weaponry threatens to destroy it all. The onus is upon us to take corrective measures and make the world a better place to live in.
(This article was written after discussions with Dan Weld and Subbarao Kambhampati, and after watching a presentation by an IIT Delhi student, Akash Tanwar. It reflects only the current views of the writer.)