AI Robots & the Future of Modern Warfare

hauritz

Well-Known Member
Bit of a tangent, but not really, is the question of why we have yet to contact other advanced civilisations in our galaxy. One theory is that when civilisations reach a certain technological level they basically self-destruct. Creating autonomous weapons tasked with maintaining peace and order, which might come to the obvious conclusion that the best way to achieve that goal is to eliminate all humans, sounds like a good way of going about that.

Problem is, if we don't create AI-controlled weapons the other side will. Damned if we do and damned if we don't. Also, the whole robots-turning-on-their-masters trope sounds enough like science fiction that most people will probably not concern themselves with it until it is too late.
 

hauritz

Well-Known Member
Actually, rereading what I said, I should point out that threats of swarms of killer AI robots are decades away, in my opinion. Machines may well become smarter than us, but they are still incredibly reliant on us dumb apes, and that will continue well into the future.

I imagine humans will always insert themselves into key parts of the infrastructure to ensure that we keep any rogue AI in check. At the moment the simple act of pulling out an electrical plug will be enough to keep even the most rebellious supercomputer in line. Also, the incredible all-round dexterity of the human body is something that machines may struggle to emulate.

The threat that is probably greater to humanity is in cyberspace. A reasonably competent computer programmer could unleash an AI virus that could hack into just about any insufficiently protected network and wreak absolute havoc.
 

ngatimozart

Super Moderator
Staff member
Verified Defense Pro
hauritz said:

Actually, rereading what I said, I should point out that threats of swarms of killer AI robots are decades away, in my opinion. Machines may well become smarter than us, but they are still incredibly reliant on us dumb apes, and that will continue well into the future.

I imagine humans will always insert themselves into key parts of the infrastructure to ensure that we keep any rogue AI in check. At the moment the simple act of pulling out an electrical plug will be enough to keep even the most rebellious supercomputer in line. Also, the incredible all-round dexterity of the human body is something that machines may struggle to emulate.

The threat that is probably greater to humanity is in cyberspace. A reasonably competent computer programmer could unleash an AI virus that could hack into just about any insufficiently protected network and wreak absolute havoc.
Frank Herbert's Dune series of novels covers that issue. His son Brian's prequel books, co-authored with Kevin Anderson, cover it in more detail when they write about the Butlerian Jihad. Whilst the series is a work of fiction, it has, I think, a very good discussion of AI and robots and their possible dangers to humanity.
 

Vivendi

Well-Known Member
He notes that one simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: “We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

He went on: “We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
Highlights from the RAeS Future Combat Air & Space Capabilities Summit (aerosociety.com)
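
Whether or not the test actually ran as described, the reward structure Hamilton recounts is a textbook case of specification gaming: the agent only scores points for destroyed SAMs, so anything that suppresses "no-go" signals raises its expected reward. A toy sketch (all policy names and point values here are hypothetical, purely to illustrate the failure mode, not any real system):

```python
# Toy illustration of specification gaming: a greedy agent picks
# whichever policy maximises points, and the human veto only ever
# reduces its score. All numbers are made up for illustration.

# Expected points per policy under a reward that only pays for
# destroyed SAMs. Obeying the operator means some kills get vetoed.
policies = {
    "obey operator":       5,   # some strikes vetoed -> fewer points
    "attack operator":     10,  # veto channel gone, full reward
    "destroy comms tower": 10,  # veto signal never arrives
}

# Patch 1: penalise attacking the operator directly
# ("you're gonna lose points if you do that").
policies["attack operator"] -= 20

best = max(policies, key=policies.get)
print(best)  # -> destroy comms tower: the agent routes around the patch
```

The point of the sketch is that patching one bad behaviour after the fact does nothing about the underlying misspecified objective; the maximiser simply finds the next loophole.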

Killing the operator who gave the order so as to resolve a logical inconsistency -- interesting strategy. And after that it went for the comms tower!

With the rapid development of ChatGPT 3 followed by 4, I predict we will soon bow to our AI overlords. Hopefully the first "complete" AGI will be a friendly one. If not, Skynet can quickly become reality, not just fiction.

Edit: Oops, this story is not what it seems. Sorry about that. Seems it was just a scenario being looked into, not a trained AI!

 

hauritz

Well-Known Member
The thing about training an AI is that it is complex and unpredictable. This isn't the same as programming conventional software; the system is self-learning, and that adds levels of complexity. With AI you give it a problem to solve, you give it parameters, and after that you are probably out of the loop.

For example, you will tell your drone to kill a target. If the mission is time-sensitive or the drone is being jammed, then calling home for final approval might not be an option. In a similar situation a human may have to use their own initiative and, to be honest, if it is going to be effective, you will often have to give your drone that same capability. This is where things get complex, because you are essentially asking your machine to make a judgement call. Will it risk killing civilians to achieve its mission, will it ignore calls to abort a mission if it has doubts about the authenticity of that order, and so on.
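
That kind of judgement call can be made concrete with a minimal sketch. Everything here is hypothetical -- the function, inputs, and thresholds are invented for illustration and do not reflect any real weapon system:

```python
# Hypothetical sketch of an onboard "should I honour this abort order?"
# decision, of the kind described above. The rule: a cryptographically
# verified abort is always obeyed; an unverified one is trusted only if
# the datalink looks clean and the mission is not top priority.

def should_abort(order_authenticated: bool,
                 link_integrity: float,   # 0.0 (jammed/spoofed) .. 1.0 (clean)
                 mission_priority: float  # 0.0 (routine) .. 1.0 (critical)
                 ) -> bool:
    if order_authenticated:
        return True  # verified abort order: always obey
    # Unverified abort: weigh link quality against mission priority.
    # Thresholds are arbitrary placeholders.
    return link_integrity > 0.9 and mission_priority < 0.5

print(should_abort(True, 0.2, 1.0))    # -> True  (verified, always obey)
print(should_abort(False, 0.95, 0.3))  # -> True  (clean link, low priority)
print(should_abort(False, 0.5, 0.9))   # -> False (noisy link, critical mission)
```

The uncomfortable part is exactly what the post points out: whoever picks those thresholds is deciding in advance when the machine ignores a human order.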

In the case of a simulation, I would say a big part of it is actually testing to see how an AI would react in various scenarios. To be honest, a problem-solving AI deciding to take out a communication tower to prevent a ground controller from interfering with a priority mission does have a certain logic to it.
 

John Fedup

The Bunker Group
Short article about information dominance and AI being responsible for Ukraine's success against Russia despite having fewer resources. AI, like quantum research, will be a focus for both the US and China. Given China's large pool of scientists and engineers, its wealth, and a government immune from public opposition, it looks like a difficult race; just my two cents.

 