Dream or Nightmare? – The Real-Life Implications of Artificial Intelligence

They can run faster than Usain Bolt, carry more baggage than any human or animal ever could, they don’t need to eat, sleep, drink or breathe, and, most importantly, they aim at their enemies with more accuracy than a human could ever achieve. Given the rapid progress in hardware and Artificial Intelligence (AI), combat robots could replace their human counterparts within a few years. This not only affects security policy and security thinking, but also has implications for human rights and their protection.

Hollywood, along with many military strategists, has been dreaming about autonomous military robots for decades. This dream is being brought into reality by companies like Boston Dynamics. On their YouTube channel, one can watch videos of highly sophisticated man-made cheetahs, mules and even human-shaped robots moving through rough terrain. The Boston Dynamics robots run so well that some of the animal-shaped machines have already been recruited by the US military: in combat exercises, pony-sized four-legged robots carry heavy equipment for US Marines, following the soldiers autonomously and unburdening them on the march.

A robotic packhorse for military equipment is just the tip of the iceberg. Things become more frightening if one takes a look at so-called LARs (lethal autonomous robotics). These robots do not merely support military operations by carrying equipment or providing intelligence; they carry out military actions themselves. The first fully autonomous weapon system was developed by Samsung Techwin – an affiliate of the company that manufactures millions of smartphones and TVs each year – and was deployed on the South Korean side of the Demilitarized Zone between North and South Korea. The system is capable of detecting threats, using voice recognition among other means, and reacting to them autonomously at a rate of 1,000 rounds per minute.

What distinguishes the newest generation of combat robots from weapon systems already in use, such as human-controlled drones, is their ability to decide for themselves whether or not to attack, rather than waiting for a human to give the command. Guns that can select and engage their targets autonomously – Hollywood’s dream brought into reality sounds more like a nightmare to many people. According to Greg Allen of the Washington, DC-based Center for a New American Security, the development of AI could change the conduct of war as much as the invention of the nuclear bomb did in the last century.

As artificial intelligence is about to massively change the way wars are fought, opposition to the military use of intelligent robots is growing. The most notable criticism comes from the developers of robots and AI themselves, who have insight into what their technology could be capable of if it falls into the wrong hands. Thus, this past summer, a group of 116 leading AI developers and scientists, including Stephen Hawking and SpaceX founder Elon Musk, signed a letter calling on the UN to ban the development of weapons controlled by artificial intelligence. Critics argue that deploying intelligent combat robots destabilizes peace, as it lowers the threshold for using military means to resolve external conflicts. If leaders no longer have to bear public responsibility for fallen soldiers but only for a pile of electronic waste, they may be more likely to resort to military force in the future.

The UN recognizes the threats posed by armed robots and has reacted by setting up the Center for Artificial Intelligence and Robotics in The Hague. The Center will research the implications that the deployment of intelligent autonomous robots will have for the future of peace and security.

There is a lot of work ahead for the UN’s researchers, as the questions raised by the emergence of Artificial Intelligence span political, legal and ethical levels. What ethical status do human rights violations committed by computers have? Who can be tried for machine-made human rights violations – the developers and scientists, the people deploying the machines, or no one at all? Can we trust machines to make decisions when life and death are at stake?

It is the task of the international community to develop codes of conduct of war for the twenty-first century and to ensure that these questions remain hypothetical – answered by Hollywood, not by policymakers.
