There are moves afoot to ban AI robots from being fielded as weapons of war: Killer robot ban vs faster, more lethal future wars with 'nowhere to hide'. This is a deeply ethical question and one that needs to be discussed, because up until now a human has always been part of the kill chain: a human has always made the final decision whether or not to wilfully kill another human. Now we are at the dawn of technology that can make that decision for itself, without any human input at all.
A recent argument has been put forward that the US needs to develop, operationalise and field weaponised AI (Artificial Intelligence) robots before the Russians and Chinese have the capability to defeat the US in a war using these weapons: The New Revolution in Military Affairs. With the combination of AI, sensors and very fast reaction times, humans will have great difficulty surviving on the battlefield, or anywhere else, against such machines. Three powers in an arms race with these weapons is, IMHO, far more dangerous than the nuclear arms race. As it is, AI is already having negative impacts on populations in authoritarian states https://www.foreignaffairs.com/arti...ficial-intelligence-will-reshape-global-order and weaponising it is definitely crossing the Rubicon, with no way back.
A third, philosophical point: if an AI is given the capability to kill a human without any human input, would that make it a sentient being? I ask this because arguably it would have to reason about why it should or should not kill a given human being at any given point in time, just as we have to. Although a legal definition of a sentient being is one that feels pain (Sentient Being Definition), I would think that the ability to think and make reasoned decisions would also qualify as sentience.