Careless use of artificial intelligence algorithms can harm the human race beyond our imagination. Yet despite AI's enormous advances over the last two years, global regulations to prevent the damage done by killer algorithms are still not in place.
Many countries are integrating artificial intelligence into their military systems for target recognition and faster deployment of firepower, but its careless use is devastating. AI's negative impact is also clearly visible in the disruption of democratic processes, where disinformation campaigns try to sway public opinion with false claims.
Killer algorithms claiming innocent lives
We have recently seen announcements of AI integration in military systems from many countries, along with a number of military contractors launching AI-enhanced targeting and fire systems, from drones and radar to surveillance systems that recognize human targets.
The most careless deployment has been Israel's targeting of the Gaza population through lists generated by killer algorithms, which are often wrong or applied with criminal negligence, killing tens or even hundreds of innocent civilians for a single claimed individual target. Imagine the havoc this has brought upon the civilians of Gaza, killing thousands of children and women, while one official remarked that "the machine did it coldly." Really? Killing civilians in cold blood is the reality of how cruel killer algorithms can get.
A recent investigation by an Israeli publication, +972 Magazine, uncovered startling facts about the use of AI by Israeli forces in airstrikes. A system known as Lavender has been used to generate lists of thousands of suspected militants, while another system, called Gospel, selects buildings and infrastructure as targets. The strategy was to strike Hamas operatives even when they were in their homes in civilian neighborhoods, killing hundreds of civilians along the way.
The report includes interviews with six Israeli intelligence sources whose names were withheld for their safety. The sources confirmed the use of Lavender, along with other AI systems, for selecting targets. They also described the lax rules governing these systems and the number of civilians permitted to be killed as collateral damage per target, ranging from two-digit figures for low-ranking Hamas officials to three digits for high-ranking ones.
But the actual figures appear to far surpass these permitted estimates, which were already the highest of any modern conflict. The number of children killed during the first three months of the war likewise exceeded the toll of children killed in other wars around the globe. It's heartbreaking just thinking of them.
Global regulation still lacking
Despite killer AI algorithms taking lives in wars, global leaders have made little effort to stop this menace. There is no set of global regulations governing the use of AI in warfare, even though killing civilians and destroying critical infrastructure to deprive a population of its basic human needs is a clear war crime.
What we see instead is lip service from global leaders, followed by yet another grant of arms from the US that may well be used in concert with these AI-based killer algorithms. The only legal action so far has been South Africa's case against Israel at the International Court of Justice, alleging war crimes.
Militaries around the globe are trying to leverage artificial intelligence, and many countries are seeking a military advantage by deploying these systems. Geopolitical tensions are fueling these advances, as countries from the US to China race to develop AI systems for target selection and decision-making that speed up the kill chain.
So far, the claim has been that deploying artificial intelligence brings better accuracy and less civilian harm, but the realities on the ground contradict this. The hazards of killer algorithms far exceed their claimed precision, which demands robust efforts to regulate AI at the global level.
The +972 Magazine reports can be found here.