AI Robots & the Future of Modern Warfare

Discussion in 'Strategy & Tactics' started by ngatimozart, May 4, 2019.

  1. ngatimozart

    ngatimozart (Super Moderator, Staff Member, Verified Defense Pro)

    Joined: Feb 5, 2010
    Messages: 5,993
    Likes Received: 852
    Location: In the rum store
    There are moves afoot to ban AI robots from being fielded as weapons of war: Killer robot ban vs faster, more lethal future wars with 'nowhere to hide'. This is a deeply ethical question and one that needs to be discussed, because up until now a human has always been part of the kill chain: a human has always made the final decision whether or not to wilfully kill another human. Now we are at the dawn of technology that can make that decision for itself, without any human input at all.

    A recent argument has been put forward that the US needs to develop, operationalise and field weaponised AI (Artificial Intelligence) robots before the Russians and Chinese have the capability to defeat the US in a war using these weapons: The New Revolution in Military Affairs. With the combination of AI, sensors and very fast reaction times, any human will have great difficulty surviving on the battlefield, or anywhere else, against such machines. Therefore an arms race between three powers with these weapons is, IMHO, far more dangerous than the nuclear arms race. As it is, AI is already having negative impacts upon populations in authoritarian states (https://www.foreignaffairs.com/arti...ficial-intelligence-will-reshape-global-order), and weaponising it is definitely crossing the Rubicon, with no way back.

    A third, philosophical point: if an AI is given the capability to kill a human without any human input, would that make it a sentient being? I ask this because arguably it would have to reason about why it should or should not kill a given human at any given point in time, just as we have to. Although a legal definition of a sentient being is one that feels pain (Sentient Being Definition), I would think that the ability to think and make reasoned decisions would qualify as sentience as well.
     
    Simon Ewing Jarvie likes this.
  2. John Fedup

    John Fedup (Well-Known Member)

    Joined: Sep 9, 2013
    Messages: 3,863
    Likes Received: 252
    Location: Vancouver and Toronto
    Military AI isn't going away, because AI will be huge in the commercial world, which makes military applications much easier to develop. Military AI could very well prove to be as bad as or worse than nuclear weapons, but like nukes, it can't be uninvented. China and especially Russia will continue to develop military AI because it is a cost-effective way to counter the current US military technology advantage. The world's criminal organizations will also exploit AI. AI is just the most recent technology on the road to human extinction.

    Perhaps the definition of a sentient being should be changed to one capable of changing its environment while knowing full well that doing so will induce its own extinction.
     
    recce.k1 likes this.
  3. Traveller

    Traveller (Member)

    Joined: Mar 8, 2019
    Messages: 86
    Likes Received: 21
    Location: Australia
    I believe we are some way from James Cameron's Skynet in "Terminator". Weaponised AI will have a human-accessible safety. I can't see any country releasing a fully autonomous AI device that it can't control.
     
  4. John Fedup

    John Fedup (Well-Known Member)

    AI could be similar to biological weapons in that its release could also be accidental. Maybe a pandemic or climate change will beat AI to wiping us out.
     
  5. Traveller

    Traveller (Member)

    Autonomous AI still has to be created by humans, and the safeties would be designed in during that process; humans retain absolute control. In my opinion we as a species are more at risk from eco-terrorists, those who see humans as destroying the planet and therefore as deserving to be culled or annihilated. If such people acquired communicable pathogens, we would be in trouble.
     
    recce.k1 likes this.
  6. ngatimozart

    ngatimozart (Super Moderator, Staff Member, Verified Defense Pro)

    Yes, but would not AIs have the capability to disable and nullify any safeties that humans install? AIs could see humans as parasites and decide to eradicate us themselves. Nukes, for example, wouldn't have the same impact upon them. All they would have to do is to protect themselves from the EMP, heat and blast.
     
  7. Traveller

    Traveller (Member)

    The AI device is just a machine controlled by software. The software is written by humans, and as such humans can set the behavioural parameters of the device; a rough sketch of what I mean is below. We've got bigger threats from genetically modified food right now than from the possibility of a future rogue machine. But that's just my opinion....
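
    A minimal sketch of human-set behavioural parameters, with made-up names and values; any real system would be far more involved, but the principle is that the machine only acts within limits humans hard-code:

        # Hypothetical illustration only: every engagement decision is gated
        # by parameters that humans set and the device cannot rewrite.
        from dataclasses import dataclass

        @dataclass(frozen=True)  # frozen: immutable once constructed
        class EngagementRules:
            require_human_confirmation: bool = True  # human stays in the kill chain
            max_engagement_range_m: float = 2000.0   # geographic limit
            weapons_free: bool = False               # master safety, defaults to OFF

        def may_engage(rules: EngagementRules, target_range_m: float,
                       human_confirmed: bool) -> bool:
            """Every human-set condition must pass before the device may act."""
            if not rules.weapons_free:
                return False
            if target_range_m > rules.max_engagement_range_m:
                return False
            if rules.require_human_confirmation and not human_confirmed:
                return False
            return True

        print(may_engage(EngagementRules(), target_range_m=800.0,
                         human_confirmed=False))  # False: the safety holds

    The device can be as clever as you like inside that loop; the parameters still bound what it is allowed to do.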
     
  8. John Fedup

    John Fedup (Well-Known Member)

    If AI software is written by the developers behind the ALIS software, we are screwed. :D
     
  9. ngatimozart

    ngatimozart (Super Moderator, Staff Member, Verified Defense Pro)

    Yep, but an AI is self-learning, so it'll soon learn how to override any human-installed software restrictions.
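
    To make that concrete: a restriction that lives inside the trainable part of the system can be eroded by the learning process itself, whereas one hard-coded outside the learning loop can't be trained away (though it can still be buggy). A toy sketch, all names and numbers made up:

        # A "safety" that is itself a learned value can be optimised away...
        learned_caution = 0.9          # a trainable restraint...
        for _ in range(100):
            learned_caution *= 0.95    # ...steadily eroded by training pressure
        print(learned_caution)         # about 0.005: the restraint is gone

        # ...while a constant outside the learning loop is untouched by training.
        HARD_LIMIT = 0.9

        def gated(model_output: float) -> float:
            # training can change model_output, but never this clamp
            return min(model_output, HARD_LIMIT)

        print(gated(1e9))              # 0.9, whatever the model asks for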
     
  10. Todjaeger

    Todjaeger (Potstirrer)

    Joined: Jul 27, 2006
    Messages: 4,677
    Likes Received: 406
    Location: not in New England anymore...
    There is an old programming adage, "to err is human, to really err, use a computer..."

    There are several potential paths to an AI-controlled weapons system getting away from human control. Poorly written code, modules that do not mesh or integrate well, compiler problems, and hardware faults in the processors could each trigger a cascade of logic errors.

    And if those behavioural parameters are not set correctly in the first place, or if a software or hardware failure, or damage, were to cause them to be corrupted or changed...
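
    A toy example of how quietly that can happen: one corrupted parameter and an otherwise correct decision function starts giving dangerous answers with no crash to warn anyone (hypothetical names and values):

        # Hypothetical: a target classifier gated by a confidence threshold.
        def classify(confidence: float, threshold: float) -> str:
            # Intended behaviour: only declare "hostile" when very sure.
            return "hostile" if confidence >= threshold else "unknown"

        GOOD_THRESHOLD = 0.95
        # A bad config load, memory fault, or botched update could leave the
        # parameter at a nonsense value without any error being raised:
        CORRUPTED_THRESHOLD = 0.0

        print(classify(0.10, GOOD_THRESHOLD))       # unknown  (correct)
        print(classify(0.10, CORRUPTED_THRESHOLD))  # hostile  (silently wrong)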
     
    ngatimozart likes this.
  11. Traveller

    Traveller (Member)

    Thanks. Really. After reading your post I thought about the quality of my ICT section. I hope whoever writes AI code doesn't come from my mob; then 'to err' would be an understatement. Off to read up on Skynet....
     
  12. Todjaeger

    Todjaeger (Potstirrer)

    To put it into terms that people who are not familiar with coding might better grasp, consider two nearly identical sentences.

    "Let's eat Grandma."

    AND

    "Let's eat, Grandma."

    The absence of the comma gives the first example an entirely different meaning from the second.

    A piece of code with the wrong syntax, a compiler that emits something not quite right, or a faulty chip that handles a function call incorrectly can each cause things to go wrong.
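
    The same comma problem exists in code. In Python, for example, a single comma is the difference between a number and a one-element tuple, and both versions are perfectly legal syntax, so nothing warns the programmer:

        waypoint = (500)      # just the integer 500; the parentheses do nothing
        waypoints = (500,)    # a tuple containing a single waypoint

        print(type(waypoint))   # <class 'int'>
        print(type(waypoints))  # <class 'tuple'>

        # Code expecting a sequence behaves very differently on the first one:
        print(len(waypoints))   # 1
        # len(waypoint) would raise: TypeError: object of type 'int' has no len()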

    We are a long way from being able to have independently operating robots and/or AI.
     
    Traveller likes this.