AI Robots & the Future of Modern Warfare

ngatimozart

Super Moderator
Staff member
Verified Defense Pro
There are moves afoot to ban AI robots from being fielded as weapons of war: Killer robot ban vs faster, more lethal future wars with 'nowhere to hide'. This is a deeply ethical question and one that needs to be discussed, because up until now a human has always been part of the kill chain, in that a human has always made the final decision whether or not to wilfully kill another human. Now we are at the dawn of technology that can make that decision for itself, without any human input at all.

A recent argument has been put forward that the US needs to develop, operationalise and field weaponised AI (Artificial Intelligence) robots before the Russians and Chinese have the capabilities to defeat the US in a war using these weapons: The New Revolution in Military Affairs. With the combination of AI, sensors and very fast reaction times, any human will have great difficulty surviving on the battlefield, or anywhere else, against such machines. An arms race between three powers armed with these weapons is therefore, IMHO, far more dangerous than the nuclear arms race. As it is, AI is already having negative impacts upon populations in authoritarian states https://www.foreignaffairs.com/arti...ficial-intelligence-will-reshape-global-order and weaponising it is definitely crossing the Rubicon, with no way back.

A third and philosophical point: if an AI is given the capability to kill a human without any human input, would that make it a sentient being? I ask this because it would arguably have to reason about why it should or should not kill a given human being at any given point in time, just as we have to. Although the legal definition of a sentient being is one that feels pain, Sentient Being Definition, I would think that the ability to think and make reasoned decisions would qualify as sentience as well.
 

John Fedup

The Bunker Group
Military AI isn't going away, because AI will be huge in the commercial world, thereby making military applications much easier to develop. Military AI (MAI) could very well prove to be as bad as or worse than nuclear weapons, but like nukes, it can't be uninvented. China and especially Russia will continue to develop MAI because it is a cost-effective way to counter the current US military technology advantage. The world's criminal organizations will also exploit AI. AI is just the most recent technology on the road to human extinction.

Perhaps the definition of a sentient being should be changed to one capable of changing its environment while knowing full well that doing so will induce its own extinction.
 

Traveller

Member
I believe we are some way from James Cameron's Skynet in "Terminator". Weaponised AI will have a human-accessible safety. I can't see any country releasing a fully autonomous AI device that it can't control.
 

John Fedup

The Bunker Group
AI could be similar to biological weapons: the release could also be accidental. Maybe a pandemic or climate change will beat AI to wiping us out.
 

Traveller

Member
AI could be similar to biological weapons: the release could also be accidental. Maybe a pandemic or climate change will beat AI to wiping us out.
If autonomous AI is to be created, safeties would be designed in during development; humans would retain absolute control. In my opinion, we as a species are more at risk from eco-terrorists, those who see humans as destroying the planet and believe we should therefore be culled or annihilated. If these types acquired communicable pathogens, we would be in trouble.
 

ngatimozart

Super Moderator
Staff member
Verified Defense Pro
  • Thread Starter
  • #6
If autonomous AI is to be created, safeties would be designed in during development; humans would retain absolute control. In my opinion, we as a species are more at risk from eco-terrorists, those who see humans as destroying the planet and believe we should therefore be culled or annihilated. If these types acquired communicable pathogens, we would be in trouble.
Yes, but would AIs not have the capability to disable and nullify any safeties that humans install? AIs could see humans as parasites and decide to eradicate us themselves. Nukes, for example, wouldn't have the same impact upon them; all they would have to do is protect themselves from the EMP, heat and blast.
 

Traveller

Member
Yes, but would AIs not have the capability to disable and nullify any safeties that humans install?
The AI device is just a machine controlled by software. The software is written by humans, and as such humans can set the behavioural parameters of the device. We've got bigger threats from genetically modified food right now than from the possibility of a future rogue machine. But that's just my opinion....
 

ngatimozart

Super Moderator
Staff member
Verified Defense Pro
  • Thread Starter
  • #9
The AI device is just a machine controlled by software. The software is written by humans, and as such humans can set the behavioural parameters of the device. We've got bigger threats from genetically modified food right now than from the possibility of a future rogue machine. But that's just my opinion....
Yep, but an AI is self-learning, so it'll soon learn how to override any human-installed software restrictions.
 

Todjaeger

Potstirrer
The AI device is just a machine controlled by software. The software is written by humans, and as such humans can set the behavioural parameters of the device. We've got bigger threats from genetically modified food right now than from the possibility of a future rogue machine. But that's just my opinion....
There is an old programming adage: "To err is human; to really err, use a computer..."

There are several potential paths by which an AI-controlled weapons system could get away from human control. Poorly written code that does not mesh or integrate well, compiler problems, and certain hardware faults the processors might encounter could all trigger a cascade of logic problems.

If those parameters are not set correctly, or if a software/hardware failure or physical damage were to cause them to be corrupted or changed...
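
To make that concrete, here is a minimal, entirely hypothetical C sketch (the structure, field names and checksum scheme are all invented for illustration, not taken from any real system) of the kind of integrity check that guards a human-set parameter against exactly that sort of silent corruption:

```c
/* Hypothetical sketch only -- nothing here is from a real system.
   It illustrates why a human-set behavioural parameter needs an
   integrity check: one flipped bit (memory fault, bad write, damage)
   can otherwise silently change an engagement rule. */
#include <stdint.h>
#include <stdio.h>

struct engagement_params {
    uint8_t  weapons_free;      /* 0 = hold fire, 1 = weapons free */
    uint8_t  require_human_ok;  /* 1 = human confirmation required */
    uint16_t checksum;          /* integrity value over the fields above */
};

/* Trivial checksum for illustration; a real design would use CRC/ECC. */
static uint16_t compute_checksum(const struct engagement_params *p)
{
    return (uint16_t)(0xA5A5u ^ (p->weapons_free + 257u * p->require_human_ok));
}

int main(void)
{
    struct engagement_params p = { 0, 1, 0 };  /* hold fire, human in loop */
    p.checksum = compute_checksum(&p);

    p.weapons_free ^= 1;  /* simulate a hardware fault: one bit flips */

    if (compute_checksum(&p) != p.checksum)
        printf("Parameter corruption detected -- failing safe, holding fire.\n");
    else
        printf("Parameters trusted: weapons_free=%u\n", (unsigned)p.weapons_free);
    return 0;
}
```

Without that final comparison, the flipped bit would simply be trusted and the device would carry on under a rule no human ever set.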
 

Traveller

Member
There is an old programming adage: "To err is human; to really err, use a computer..."

There are several potential paths by which an AI-controlled weapons system could get away from human control. Poorly written code that does not mesh or integrate well, compiler problems, and certain hardware faults the processors might encounter could all trigger a cascade of logic problems.

If those parameters are not set correctly, or if a software/hardware failure or physical damage were to cause them to be corrupted or changed...
Thanks. Really. After reading your post I thought about the quality of my ICT section. I hope whoever writes AI code doesn't come from my mob. Then 'to err' would be an understatement. Off to read up on Skynet....
 

Todjaeger

Potstirrer
Thanks. Really. After reading your post I thought about the quality of my ICT section. I hope whoever writes AI code doesn't come from my mob. Then 'to err' would be an understatement. Off to read up on Skynet....
To put this into terms that people who are not familiar with coding might better grasp, consider these two nearly identical sentences:

"Let's eat Grandma."

AND

"Let's eat, Grandma."

The absence of the comma in the first example gives it an entirely different meaning from the second.

A single statement with the wrong syntax, code that comes out of the compiler not quite right, or a faulty chip that handles a function call incorrectly can all cause things to go wrong.
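
To make the parallel concrete, here is a deliberately contrived C fragment (invented for illustration, not from any real system) where the difference between '=' and '==', one character, like the comma above, silently wipes out a human-set safety:

```c
/* Deliberately contrived, hypothetical example -- the code equivalent
   of the missing comma. '=' (assignment) was typed where '==' (comparison)
   was meant, so the check silently overwrites the human-set flag. */
#include <stdio.h>

int safety_engaged = 1;  /* set by a human operator: 1 = safety on */

int may_fire(void)
{
    /* Intended: allow firing only when the safety is NOT engaged.
       Actual:   assigns 0 to safety_engaged, evaluates as false here,
                 and records the safety as off from this point on. */
    if (safety_engaged = 0)  /* BUG: should be 'safety_engaged == 0' */
        return 1;
    return 0;
}

int main(void)
{
    int fire = may_fire();
    printf("may_fire: %d, safety_engaged is now: %d\n", fire, safety_engaged);
    /* Prints "may_fire: 0, safety_engaged is now: 0" -- one missing
       character has quietly disengaged the safety the human set. */
    return 0;
}
```

A modern compiler with warnings enabled will usually flag this particular pattern, but quieter versions of the same one-character mistake still slip through review.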

We are a long way from being able to have independently operating robots and/or AI.
 