Future of Autonomous Warfare with the Rise of AI-Controlled Weapons
The author is a former Director General of Information Systems and a Special Forces veteran, Indian Army
China's recent successful experiment of deploying its artificial intelligence (AI)-controlled Qimingxing-1 remote sensing satellite in low-Earth orbit (LEO) to observe areas in India and Japan has been reported in these columns earlier. While the AI was initiated from a ground-based station for the experiment, the subsequent redeployment of the satellite was autonomous, without any human intervention. In India, the Qimingxing-1 satellite observed the military area of Danapur in Bihar, which houses the Bihar Regimental Centre, and in Japan the busy port of Osaka, which is also frequented by American naval vessels. The locations were selected by the AI through instructions fed by the researchers, but the autonomous operation of Qimingxing-1 continued for 24 hours without any human intervention or assignment.
China successfully deployed its AI-controlled Qimingxing-1 satellite to observe military areas in India and Japan autonomously, without human intervention
This came to light only because of a report in the South China Morning Post describing the experiment led by China's State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing (LIESMARS). It was also covered in these columns earlier that, of the 5,465 active artificial satellites orbiting the Earth as of April 30, 2022, 541 belonged to China, according to a report published by Statista on April 13, 2023.
Now news reports of June 13, 2023 state that Russia has scored the first AI kill by using its AI-controlled S-350 Vityaz anti-aircraft missile system to shoot down a Ukrainian aircraft autonomously. Developed by Almaz-Antey, the S-350 Vityaz can engage ballistic targets at a maximum range of 25 km and aerodynamic air targets, including aircraft, drones and cruise missiles, at a range of 120 to 150 km, with its missiles capable of intercepting targets travelling at up to two km per second. The advanced radar, with improved anti-jamming resistance, uses a circular scanning mode, allowing for environmental monitoring from all angles. These features enhance the S-350's effectiveness and situational awareness, making it a powerful air defence system. The system can also be integrated into foreign air defence networks. The first edition of the S-350 entered service in the Russian military in 2019.
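To put these figures in perspective, a rough back-of-the-envelope calculation (illustrative only, assuming a straight-line approach toward the launcher and using just the publicly quoted ranges and speeds above, not official performance data) shows how short the engagement window against a fast target can be, which is precisely the timeline cited to justify automation:

```python
# Illustrative engagement-window arithmetic using only the figures quoted above.
# These are assumptions for illustration, not official S-350 performance data.

def engagement_window_seconds(range_km: float, target_speed_kmps: float) -> float:
    """Time for a target flying straight at the launcher to cross the stated range."""
    return range_km / target_speed_kmps

# Hypothetical scenarios built from the quoted envelope (25 km ballistic,
# 120-150 km aerodynamic, targets up to 2 km/s).
scenarios = {
    "Ballistic target, 25 km envelope, 2 km/s": (25, 2.0),
    "Cruise missile, 120 km envelope, 0.25 km/s": (120, 0.25),
    "Fast aircraft, 150 km envelope, 0.6 km/s": (150, 0.6),
}

for label, (rng, speed) in scenarios.items():
    print(f"{label}: ~{engagement_window_seconds(rng, speed):.0f} s window")
```

A ballistic target closing at two km per second leaves barely a dozen seconds between entering the 25 km envelope and impact, a window in which detection, tracking, decision and launch must all occur.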
Russia achieved a significant milestone by using its AI-controlled S-350 Vityaz anti-aircraft missile system to shoot down a Ukrainian aircraft autonomously
According to media reports, Russia's Deputy Prime Minister has confirmed that the S-350 Vityaz missile air defence system recently accomplished an extraordinary task by successfully shooting down a Ukrainian aircraft while operating in "automatic mode". The Minister said that the Vityaz anti-aircraft missile system operating in the NVO zone demonstrated unparalleled capabilities by autonomously detecting, tracking, and destroying Ukrainian air targets without any operator intervention. He further said that this remarkable achievement marked the first instance where a system operated fully automatically using artificial intelligence in combat conditions. The implementation of the automatic mode followed the principle of non-interference by human operators with the decisions made by the artificial intelligence components of the complex. In other words, the operator refrained from cancelling or overriding the decisions made by the complex's AI based on the prevailing air combat situation. This approach confirmed the effectiveness of the chosen operational algorithm of the machine.
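The "non-interference" principle described above is, in essence, a shift from human-on-the-loop control, where the operator may veto the machine's decision, to human-out-of-the-loop control, where the machine's decision stands. The sketch below is a minimal, purely illustrative rendering of that distinction; all names and thresholds are hypothetical and bear no relation to the actual S-350 software.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A simplified air track from a fire-control radar (illustrative only)."""
    track_id: int
    classified_hostile: bool
    intercept_probability: float

def ai_engage_decision(track: Track) -> bool:
    """Stand-in for the system's AI engagement logic (hypothetical threshold)."""
    return track.classified_hostile and track.intercept_probability > 0.7

def human_on_the_loop(track: Track, operator_veto: bool) -> bool:
    # Operator supervises and may cancel the machine's decision.
    return ai_engage_decision(track) and not operator_veto

def human_out_of_the_loop(track: Track) -> bool:
    # "Automatic mode" as described above: the operator does not
    # override the AI's decision; it is executed as made.
    return ai_engage_decision(track)

if __name__ == "__main__":
    t = Track(track_id=101, classified_hostile=True, intercept_probability=0.85)
    print("On-the-loop, operator vetoes:", human_on_the_loop(t, operator_veto=True))
    print("Out-of-the-loop:", human_out_of_the_loop(t))
```

The debate over such systems turns on exactly this design choice: whether a human retains the authority to cancel an engagement, or merely observes one.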
The above is a clear indication that the future will witness the use of more and more AI-controlled weapon systems, both in experimentation and in actual battlefield use. China's AI-controlled satellites can remotely monitor any part of India, not the border regions alone, without our knowledge. But AI-controlled weapon systems pose a bigger danger, whether based in space, air, ground, sea or sub-surface. An aerial platform inadvertently crossing into foreign airspace or into a mutually agreed no-fly zone could be immediately shot down by an AI-controlled weapon system firing autonomously.
Concerns are mounting worldwide about the potential for AI to outpace human capabilities, leading to potential risks associated with the uncontrolled development and deployment of AI
Concerns are mounting across the world about the dangers that can be posed by the use of AI, which are already being fictionalised in movies. Stephen Hawking told the BBC, "The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded." A scarier comment by a columnist reads, "The upheavals (of artificial intelligence) can escalate quickly and become scarier and even cataclysmic. Imagine how a medical robot, originally programmed to rid cancer, could conclude that the best way to eliminate cancer is to exterminate humans who are genetically prone to the disease."
Tesla CEO Elon Musk has cautioned that even a "benign dependency" on these complex AI machines can threaten civilisation. He has reasoned that reliance on AI to perform seemingly simple tasks can, over time, create an environment in which humans forget how to operate the machines that enabled AI in the first place. He further wrote, "The pace of progress in artificial intelligence (I'm not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast; it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year time frame. 10 years at most." Elon Musk is among the many technology luminaries who have signed an open letter urging a moratorium on the deployment of AI.
In light of the rapid advancement and proliferation of AI-controlled weapons, it is argued that India must develop its own AI-based weapon systems for deterrence and self-defence purposes
However, the bottom line is that AI-controlled weapon systems will continue to grow notwithstanding any moratorium, bill or convention by the United Nations, not only because not all countries will sign them but also because many signatories would themselves flout them. Have we been able to effectively block the proliferation of nuclear, chemical and biological weapons, the weaponisation of space, or, for that matter, terrorism? The answer is no. Therefore, it stands to reason that India must develop AI-based weapon systems for deterrence, or for use when situations demand.