AI integration in the military domain raises significant legal, security, and ethical concerns, especially regarding transparency, accountability, and bias, amplified in high-risk military contexts
The author is a former Director General of Information Systems and a Special Forces veteran, Indian Army
Fundamental changes, including in the military domain, are taking place globally with the advent of Artificial Intelligence (AI). While the integration of AI technologies creates unprecedented opportunities to boost human capabilities, especially in decision-making, it also raises significant legal, security-related and ethical concerns in areas like transparency, reliability, predictability, accountability and bias. These concerns are amplified in the high-risk military context.
On February 14-15, 2023, the Netherlands hosted the first summit on Responsible AI in the Military Domain – REAIM 2023 – at the World Forum in The Hague. The theme of REAIM 2023 was based on –
Sixty countries participated in REAIM 2023, including the US and China. The summit concluded with a non-binding "call to action" document, endorsed by representatives of the participating countries. There were some external calls to start negotiations on an internationally binding law or one driven by an enforcement mechanism. The US proposed a 'Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy' – an international norms and arms control proposal covering areas such as lethal autonomous weapons and weapons decision-making.
At the REAIM 2024 summit in Seoul, nearly 100 countries, including the US, China, and Ukraine, agreed that humans, not AI, should make critical decisions about nuclear weapons
The second REAIM summit was held in Seoul, Republic of Korea, on September 8-10, 2024. Nearly 100 countries attending this global summit, including the US, China, and Ukraine, agreed that humans, not AI, should make critical decisions regarding the use of nuclear weapons. The summit concluded with a non-binding declaration called the "Blueprint for Action", which included specific points on AI-enabled drone usage, risk-assessment guidelines, the importance of maintaining human control, and preventing AI from being used to proliferate weapons of mass destruction.
The non-binding agreement says it is essential to "maintain human control and involvement for all actions concerning nuclear weapons employment", that AI applications in the military "must be applied in accordance with applicable national and international law", and that "AI applications should be ethical and human-centric."
The declaration stopped short of outlining sanctions or consequences for any violations of these principles. The summit also noted the need for "further discussions for clear policies and procedures". Conspicuously, China did not sign the declaration even though it is not legally binding, while Russia was not invited to the summit because of the war in Ukraine.
The non-binding declaration at the REAIM 2024 summit emphasises the necessity of maintaining human control in nuclear weapons decisions and ensuring AI applications comply with national and international law
REAIM 2023 and REAIM 2024 may be considered good steps, but the reality is quite different, as witnessed in the ongoing wars in Ukraine and Gaza, where rulings of the UN Security Council (UNSC), UN General Assembly (UNGA), International Court of Justice (ICJ) and International Criminal Court (ICC) are ignored and scoffed at, the veto is freely used, and members of the ICC have even been threatened with physical harm by the US and Israel.
It is "might is right" and "everything is fair in war", while AI-enabled disinformation permits flat denials as well as the levelling of false charges. Take the UN Convention on Terrorism, which is being flouted by the P-5 nations – most blatantly by the US in financing, supporting and employing terrorist organisations.
In March 2024, it was revealed that the Chinese Communist Party (CCP) is creating a genetic database of every human on Earth, flouting medical privacy laws and international standards. China is also suspected of using DNA samples from prisoners of conscience persecuted by the CCP.
While AI is already being used in military operations for tasks like reconnaissance, surveillance, and analysis, it also has the potential to autonomously select targets. Israel has already been using the AI-based tool "Lavender" in attacking Gaza.
Israel's use of the AI-based "Lavender" system in Gaza demonstrates the potential dangers of AI-driven target selection, as it has led to civilian casualties despite a known error rate of 10 per cent
The Lavender system is said to mark suspected operatives in the military wings of Hamas and Palestinian Islamic Jihad (PIJ) as potential bombing targets, including low-ranking individuals. To do so, the software analyses data collected through mass surveillance of most of Gaza's 2.3 million residents, assessing and ranking the likelihood of each person's involvement in the military wing of Hamas or PIJ.
Individuals were given a rating of 1 to 100, indicating their likelihood of being a militant. However, even though Lavender has an error rate of "10 per cent", its outputs were treated "as if it were a human decision", resulting in the killing of thousands of civilians, including women and children.
REAIM 2024 was a bigger global gathering than REAIM 2023 – another incremental step, for whatever it is worth. The United Nations has also been discussing the creation of international guidelines for lethal autonomous weapons. But the irony of the global situation was reflected by Netherlands Defence Minister Ruben Brekelmans, who stated at REAIM 2024 that progress is being made but "we will never have the whole world on board. How do we deal with the fact that not everyone is complying? That is a complicated dilemma that we should also put on the table."
Finally, the chances of terrorists and rogue nations using AI can hardly be discounted. But it is the powerful nations, the P-5 included, that need to be careful, especially when it comes to weapons of mass destruction. Hopefully, sanity will prevail.