AI Robots & the Future of Modern Warfare

ngatimozart

Super Moderator
Staff member
Verified Defense Pro
There are moves afoot to ban AI robots from being fielded as weapons of war. Killer robot ban vs faster, more lethal future wars with 'nowhere to hide' This is a deeply ethical question and one that needs to be discussed, because up until now a human has always been part of the kill chain, in that a human has always made the final decision whether or not to wilfully kill another human. Now we are at the dawn of technology that can make that decision for itself, without any human input at all.

A recent argument has been put forward that the US needs to develop, operationalise and field weaponised AI (Artificial Intelligence) robots before the Russians and Chinese have the capabilities to defeat the US in a war using these weapons. The New Revolution in Military Affairs With the combination of AI, sensors and very fast reaction times, humans will have great difficulty surviving on the battlefield, or anywhere else, against such machines. Therefore a three-power arms race with these weapons is, IMHO, far more dangerous than the nuclear arms race. As it is, AI is already having negative impacts upon populations in authoritarian states https://www.foreignaffairs.com/arti...ficial-intelligence-will-reshape-global-order and weaponising it is definitely crossing a Rubicon from which there is no return.

A third and philosophical point: if an AI is given the capability to kill a human without any human input, would that mean it is a sentient being? I ask this because arguably it would have to reason why it should or should not kill a given human being at any given point in time, just as we have to. Although one legal definition of a sentient being is one that feels pain, Sentient Being Definition I would think that the ability to think and make reasoned decisions would also qualify as sentience.
 

John Fedup

The Bunker Group
Military AI isn't going away, because AI will be huge in the commercial world, thereby making military applications much easier to develop. Military AI could very well prove to be as bad as or worse than nuclear weapons, but like nukes, it can't be uninvented. China and especially Russia will continue to develop military AI because it is a cost-effective way to counter the current US military technology advantage. The world's criminal organizations will also exploit AI. AI is just the most recent technology on the road to human extinction.

Perhaps the definition of a sentient being should be changed to one capable of changing its environment knowing full well it will induce its own extinction.
 

Traveller

Member
I believe we are some way from James Cameron's Skynet in "Terminator". Weaponised AI will have a human-accessible safety. I can't see any country releasing a fully autonomous AI device that it can't control.
 

John Fedup

The Bunker Group
AI could be similar to biological weapons: the release could also be accidental. Maybe a pandemic or climate change will beat AI to wiping us out.
 

Traveller

Member
AI could be similar to biological weapons: the release could also be accidental. Maybe a pandemic or climate change will beat AI to wiping us out.
Autonomous AI has to be created by humans, and the safeties would be designed in during that creation. Humans have absolute control. In my opinion we as a species are more at risk from eco-terrorists, those who see humans as destroying the planet and believe we should therefore be culled or annihilated. If these types acquired communicable pathogens, we would be in trouble.
 

ngatimozart

Super Moderator
Staff member
Verified Defense Pro
  • Thread Starter
  • #6
Autonomous AI has to be created by humans, and the safeties would be designed in during that creation. Humans have absolute control. In my opinion we as a species are more at risk from eco-terrorists, those who see humans as destroying the planet and believe we should therefore be culled or annihilated. If these types acquired communicable pathogens, we would be in trouble.
Yes, but would not AIs have the capability to disable and nullify any safeties that humans install? AIs could see humans as parasites and decide to eradicate us themselves. Nukes, for example, wouldn't have the same impact upon them. All they would have to do is protect themselves from the EMP, heat and blast.
 

Traveller

Member
Yes, but would not AIs have the capability to disable and nullify any safeties that humans install?
The AI device is just a machine controlled by software. The software is written by humans, and as such humans can set the behavioural parameters of the device. We've got bigger threats from genetically modified food right now than from the possibility of a future rogue machine. But that's just my opinion....
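To make that concrete, here is a minimal sketch of what human-set behavioural parameters might look like in code. It's a toy illustration in C; the names, values and checks are hypothetical, not taken from any real system:

```c
/* Toy sketch: behavioural parameters fixed by humans at build time. */
#include <stdbool.h>
#include <stdio.h>

/* Human-set behavioural parameters (hypothetical values). */
#define MAX_ENGAGEMENT_RANGE_M 2000.0
#define REQUIRE_HUMAN_APPROVAL true

static bool may_engage(double range_m, bool human_approved)
{
    if (REQUIRE_HUMAN_APPROVAL && !human_approved)
        return false;   /* safety: no human sign-off, no engagement */
    if (range_m > MAX_ENGAGEMENT_RANGE_M)
        return false;   /* safety: target outside permitted envelope */
    return true;
}

int main(void)
{
    /* Without human approval the gate always refuses. */
    printf("engage? %s\n", may_engage(1500.0, false) ? "yes" : "no");
    return 0;
}
```

Whether hard limits like these would survive contact with a self-modifying system is, of course, exactly what's being debated here.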
 

ngatimozart

Super Moderator
Staff member
Verified Defense Pro
  • Thread Starter
  • #9
The AI device is just a machine controlled by software. The software is written by humans, and as such humans can set the behavioural parameters of the device. We've got bigger threats from genetically modified food right now than from the possibility of a future rogue machine. But that's just my opinion....
Yep, but an AI is self-learning, so it'll soon learn how to override any human-installed software restrictions.
 

Todjaeger

Potstirrer
The AI device is just a machine controlled by software. The software is written by humans, and as such humans can set the behavioural parameters of the device. We've got bigger threats from genetically modified food right now than from the possibility of a future rogue machine. But that's just my opinion....
There is an old programming adage: "To err is human; to really err, use a computer..."

There are several potential paths to an AI-controlled weapons system getting away from human control. Code that is written poorly and/or does not mesh or integrate well, compiler problems, and certain hardware faults the processors might encounter could all trigger a cascade of logic problems.

If those parameters are not set correctly, or if a software/hardware failure or physical damage were to cause them to be changed...
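As a toy illustration of the hardware path (entirely hypothetical code and values), a single flipped bit in memory is enough to silently disable a human-installed safety:

```c
/* Toy sketch: simulate a hardware fault flipping one bit of a
 * stored safety flag.  Hypothetical example, not a real system. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    bool require_human_approval = true;   /* human-installed safety */

    /* Simulate a single-bit memory fault in the stored flag. */
    unsigned char raw;
    memcpy(&raw, &require_human_approval, 1);
    raw ^= 0x01;                          /* one flipped bit */
    memcpy(&require_human_approval, &raw, 1);

    if (!require_human_approval)
        printf("safety silently disabled\n");
    return 0;
}
```

Real systems mitigate this sort of thing with ECC memory, watchdogs and redundancy, but mitigation is not the same as elimination.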
 

Traveller

Member
There is an old programming adage, "to err is human, to really err, use a computer..."

There are several potential paths to an AI-controlled weapons system getting away from human control. If the code is written poorly and/or does not mesh/integrate well, if there are compiler problems, and certain hardware problems the processors might encounter could all trigger a cascade of logic problems.

If those parameters are not set correctly, or if a software/hardware failure or damage were to cause those parameters to be damaged or changed...
Thanks. Really. After reading your post I thought about the quality of my ICT section. I hope whoever writes AI code doesn't come from my mob. Then 'to err' would be an understatement. Off to read up on Skynet....
 

Todjaeger

Potstirrer
Thanks. Really. After reading your post I thought about the quality of my ICT section. I hope whoever writes AI code doesn't come from my mob. Then 'to err' would be an understatement. Off to read up on Skynet....
To put it into terms that people who are not familiar with coding might better grasp, consider these two nearly identical sentences.

"Let's eat Grandma."

AND

"Let's eat, Grandma."

The absence of the comma in the first example gives it an entirely different meaning from the second one.

A piece of code with the wrong syntax, a compiler that produces not-quite-right output, or a faulty chip that handles a function call incorrectly can all cause things to go wrong.
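The classic one-character version of that missing comma in C (a toy snippet, not from any real system):

```c
/* Intended: a comparison (==).  Written: an assignment (=).
 * One missing character inverts the meaning of the check. */
#include <stdio.h>

int main(void)
{
    int target_hostile = 0;       /* classified as friendly  */

    if (target_hostile = 1)       /* assigns 1: always true  */
        printf("engage\n");       /* fires on a friendly     */

    return 0;
}
```

Most compilers can warn about this particular slip, but plenty of equally small mistakes sail straight through.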

We are a long way from being able to have independently operating robots and/or AI.
 

ngatimozart

Super Moderator
Staff member
Verified Defense Pro
  • Thread Starter
  • #14
There was an attempted attack on a power substation in Pennsylvania last year with a cheap quadcopter. Likely Drone Attack On U.S. Power Grid Revealed In New Intelligence Report (thedrive.com) It had a long copper wire attached to it, which is believed to have been aimed at shorting power lines.
Simple but effective, if it works, and it would be spectacular too. In the US, though, you would have to be really clever in hiding your tracks, especially WRT trace evidence. The FBI would throw immense resources into finding the culprits.
 

cdxbow

Well-Known Member
Simple but effective, if it works, and it would be spectacular too. In the US, though, you would have to be really clever in hiding your tracks, especially WRT trace evidence. The FBI would throw immense resources into finding the culprits.
Yes, you would expect/hope so. It will be interesting to see if they get to the bottom of it. State or non-state actor? A creative individual who likes to make things go bang? Someone who hated the power company?

Thinking about it, a narrow piece of copper may not short out high-voltage/high-current systems because it may quickly blow like a fuse before any damage is done. A thicker copper wire may be necessary, which may preclude using a cheap drone because of the weight.
 

ngatimozart

Super Moderator
Staff member
Verified Defense Pro
  • Thread Starter
  • #16
Yes, you would expect/hope so. It will be interesting to see if they get to the bottom of it. State or non-state actor? A creative individual who likes to make things go bang? Someone who hated the power company?

Thinking about it, a narrow piece of copper may not short out high-voltage/high-current systems because it may quickly blow like a fuse before any damage is done. A thicker copper wire may be necessary, which may preclude using a cheap drone because of the weight.
No. 8 fencing wire would probably do the job. It's tougher than copper, and if you spliced three strands together it would definitely work.
 

hauritz

Well-Known Member
To put it into terms that people who are not familiar with coding might better grasp, consider these two nearly identical sentences.

"Let's eat Grandma."

AND

"Let's eat, Grandma."

The absence of the comma in the first example gives it an entirely different meaning from the second one.

A piece of code with the wrong syntax, a compiler that produces not-quite-right output, or a faulty chip that handles a function call incorrectly can all cause things to go wrong.

We are a long way from being able to have independently operating robots and/or AI.
True ... but software developers think differently to normal humans. They would still release the software, and if enough grandmothers got eaten they would eventually just release a patch.
 

hauritz

Well-Known Member
It isn't really the coding of AI that bothers me. The real issue is what degree of autonomy you give an AI device.

Let's take a battlefield situation where the rules of engagement might be such that your AI device cannot engage without first getting approval from a human controller. Adding this restriction considerably slows the decision-making process, and if an enemy manages to jam communications it could render the machine impotent.
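Here's a rough sketch of that rule in C, including the failure mode when the link is jammed (all names and logic are hypothetical, just to make the trade-off visible):

```c
/* Toy sketch: human-in-the-loop engagement gate with a jammed-comms
 * failure mode.  Hypothetical illustration only. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { HOLD_FIRE, ENGAGE } order_t;

/* Stand-in for the uplink: returns false when the link is jammed. */
static bool request_human_approval(bool link_up, bool *approved)
{
    if (!link_up)
        return false;      /* no answer at all */
    *approved = true;      /* stand-in for the controller's call */
    return true;
}

static order_t decide(bool link_up)
{
    bool approved = false;
    if (!request_human_approval(link_up, &approved))
        return HOLD_FIRE;  /* jammed comms = an impotent machine */
    return approved ? ENGAGE : HOLD_FIRE;
}

int main(void)
{
    printf("link up: %s\n", decide(true)  == ENGAGE ? "engage" : "hold fire");
    printf("jammed:  %s\n", decide(false) == ENGAGE ? "engage" : "hold fire");
    return 0;
}
```

Loosening that HOLD_FIRE default is precisely what granting a greater degree of autonomy means.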

To counter that you would have to allow your machine a greater degree of autonomy. You would also want to insert some sort of code of ethics: don't kill villagers, don't kill enemy soldiers that are surrendering, don't engage in combat if civilians are going to be put at risk, and so on. The problem is that when the enemy realise this, they start hiding behind civilians, pretending to surrender, or arming children. You might try to overcome this by developing a more complex combat algorithm that would enable the machine to assess a threat situation and come up with an appropriate response ... or you could just turn your robots into straight-out Terminator-style killing machines.

I suspect self-learning AIs wouldn't fare any better. A self-learning machine, unfettered by any moral code, wouldn't concern itself with collateral damage. It might even be willing to sacrifice its own soldiers in order to complete its mission. To a machine, humans might be no more important than pieces in a chess game.

AI is commonly divided into three levels:

artificial narrow intelligence (ANI)
artificial general intelligence (AGI)
artificial super intelligence (ASI)

We are currently at ANI: simple and functional, able to perform basic tasks.
AGI is when things start to get scary. Essentially, they can do anything we can do.
ASI is when we are basically screwed. When that day comes, you had better hope our computer overlords decide that we are worth keeping around.
 

John Fedup

The Bunker Group
It isn't really the coding of AI that bothers me. The real issue is what degree of autonomy you give an AI device.

Let's take a battlefield situation where the rules of engagement might be such that your AI device cannot engage without first getting approval from a human controller. Adding this restriction considerably slows the decision-making process, and if an enemy manages to jam communications it could render the machine impotent.

To counter that you would have to allow your machine a greater degree of autonomy. You would also want to insert some sort of code of ethics: don't kill villagers, don't kill enemy soldiers that are surrendering, don't engage in combat if civilians are going to be put at risk, and so on. The problem is that when the enemy realise this, they start hiding behind civilians, pretending to surrender, or arming children. You might try to overcome this by developing a more complex combat algorithm that would enable the machine to assess a threat situation and come up with an appropriate response ... or you could just turn your robots into straight-out Terminator-style killing machines.

I suspect self-learning AIs wouldn't fare any better. A self-learning machine, unfettered by any moral code, wouldn't concern itself with collateral damage. It might even be willing to sacrifice its own soldiers in order to complete its mission. To a machine, humans might be no more important than pieces in a chess game.

AI is commonly divided into three levels:

artificial narrow intelligence (ANI)
artificial general intelligence (AGI)
artificial super intelligence (ASI)

We are currently at ANI: simple and functional, able to perform basic tasks.
AGI is when things start to get scary. Essentially, they can do anything we can do.
ASI is when we are basically screwed. When that day comes, you had better hope our computer overlords decide that we are worth keeping around.
Based on human history, the computer overlords' decision won't bode well for humans. Then again, depending on how long it takes to develop ASI, we may beat the machines by doing ourselves in first.
 

ngatimozart

Super Moderator
Staff member
Verified Defense Pro
  • Thread Starter
  • #20
Based on human history, the computer overlords' decision won't bode well for humans. Then again, depending on how long it takes to develop ASI, we may beat the machines by doing ourselves in first.
Are you thinking of the Butlerian Jihad there, John? For those who are unaware, the Butlerian Jihad was a core concept of Frank Herbert's Dune series. It overthrew the control of the machines and AI, resulting in the outlawing of all thinking machines.
 