Kargu-2 drone

There has been increasing concern in recent times about robot weapons becoming a real-life threat, and a very serious one at that. So it was only a matter of time before we heard of such a weapon being used on a real battlefield. This appears to have happened last year in Libya, with the appearance of what is being called a 'flying killer robot'.

This is an autonomous weapon made in Turkey, the STM Kargu-2 drone. Zachary Kallenborn, an expert on non-conventional weapons, has described it in these words: "The Kargu is a loitering drone that can use machine learning based object classification to select and engage targets, with swarming capabilities in development to allow 20 drones to work together." A recent report of UN experts on Libya has suggested that this weapon may have been used to hunt down and remotely engage retreating soldiers loyal to the Libyan General Khalifa Haftar. As Kallenborn says, this is a new chapter in autonomous weapons, one in which they are used to fight and kill human beings based on artificial intelligence. It is also quite likely that the first killing of a human being by a robot weapon took place here.

There has been increasing concern among many well-informed persons as well as experts regarding robot weapons, or AI weapons, but unfortunately this concern has not led to real-life action to check this very serious threat. Scary but fact-based warnings have been issued, only to be followed by increasing investments by big powers to strengthen their preparations for developing a wide range of robot weapons.

One of the arguments given against checking the military use of robot weapons (also called lethal autonomous weapons, or LAWs) is that civilian and military uses of robots, particularly in the context of scientific research and innovation, can be closely related. The message is that as civilian research on robots advances, there will be accompanying implications for military use that cannot be ignored by any leading military power. However, the dangers of robot weapons are so serious that any false justifications peddled on their behalf have very low credibility. Moreover, there are many warnings about harmful civilian uses as well, such as those that could create very large-scale unemployment.

In August 2017, as many as 116 specialists from 26 countries, including some of the world's leading robotics and artificial intelligence pioneers, called on the United Nations to ban the development and use of killer robots. They wrote, "Once developed, lethal autonomous weapons will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways."

"We do not have long to act," the letter warned. "Once this Pandora's box is opened, it will be hard to close."

The Economist (January 27, 2017) noted in its special report titled 'The Future of War', "At least the world knows what it is like to live in the shadow of nuclear weapons. There are much bigger question marks over how the rapid advances in artificial intelligence (AI) and deep learning will affect the way wars are fought, and perhaps even the way people think of war. The big concern is that these technologies may create autonomous weapon systems that can make choices about killing humans independently of those who created or deployed them."

This special report distinguished between three types of AI or robot weapons: (i) in the loop (with a human constantly monitoring the operation and remaining in charge of critical decisions), (ii) on the loop (with a human supervising the machines, able to intervene at any stage of the mission), and (iii) out of the loop (with the machine carrying out the mission without any human intervention once launched).

Fully autonomous robot weapons (the third category) are obviously the most dangerous. A letter warning against the coming race to build such weapons was signed in 2015 by over 1,000 AI experts. An international campaign called the 'Campaign to Stop Killer Robots' works continuously towards this and related objectives. Elon Musk has pinpointed competition for AI superiority at the national level as the "most likely cause of World War 3."

Several countries are surging ahead with rapid advances in robot weapons. In 2014 the Pentagon announced its 'Third Offset Strategy', with its special emphasis on robotics, autonomous systems and 'big data'. This is supposed to help the USA maintain its military superiority. In July 2017 China presented its 'Next-Generation Artificial Intelligence Development Plan', which gives a crucial role to AI as the transformative technology in civil as well as military areas, with emphasis on 'military-civil fusion'.

The Campaign to Stop Killer Robots wants a legally binding international treaty banning LAWs. But can this emerge at a time when the big powers, including all the permanent members of the UN Security Council, are spending billions on robot weapons and committing even more for the near future? Nuclear weapons will also become much more dangerous with the use of robot-weapon technology. Clearly, a very strong, broad-based mobilization of the worldwide peace movement, with a sense of great urgency, will be needed to check such extremely dangerous trends.

Bharat Dogra is a journalist and author. His recent books include Planet in Peril and Protecting Earth for Children.

