AI Unmanned Vehicles (UAV / UGV / USV / UUV).

StobieWan

Super Moderator
Staff member
Couldn't find a post relating to this so apologies if it's been mentioned buuuut:


This is way more interesting than it might have been since two other announcements came out -

Firstly, the RN has issued an RFI regarding fitting cats and traps for what sounds like a fairly hefty UAS.

Secondly, LANCA - the RAF's Lightweight Affordable Novel Combat Aircraft - which is also intended to be interoperable with RN carriers.

Edit:

Project Mosquito. Early days yet.

This could potentially be very interesting.
 

StobieWan

Super Moderator
Staff member
  • Thread Starter
  • #2
To add to the above,
It seems like just about every other country has a loyal wingman type program running these days. At least the UK is coming straight out and saying that they intend to arm these things. Armed drones with advanced AI decision-making capabilities ... I can't see any possible way that could go wrong.

Better to think of them as long range missiles with loitering capabilities - there's still going to be a human in the loop for shoot/no shoot decisions, be it locally with a human pilot relaying instructions to the drone or a remote pilot via sat link. There's no legal framework to support autonomous engagements as far as I understand it, so nothing's really changing there. Maybe once there's a workable legal framework in place, things might shift, but I suspect not for some time, if ever.

More on topic however, that really does open up UK carrier capability to all sorts of possibilities - whether we develop something locally or end up buying the MQ-25 Stingray or similar.

That's my take on drones - there does seem to be a shift to using more AI in engagement cycles, however:


There is a line of thought that as most of these engagements consist of data harvested and presented by a machine, to a human, with a recommendation to shoot or not, how much of a shift is it to let the system take the shot? And can you hang around to let the wetware catch up if you're facing hypersonic threats?
 

Musashi_kenshin

Well-Known Member
Better to think of them as long range missiles with loitering capabilities - there's still going to be a human in the loop for shoot/no shoot decisions, be it locally with a human pilot relaying instructions to the drone or a remote pilot via sat link. There's no legal framework to support autonomous engagements as far as I understand it, so nothing's really changing there. Maybe once there's a workable legal framework in place, things might shift, but I suspect not for some time, if ever.
That's how I see things as well. But it's important to remember that we can't just wait until all the little legal niceties are resolved before we start development. We absolutely must be developing these systems now. Otherwise we'll get left behind and end up fighting a truly 21st century conflict with just upgraded 20th century weapons.

When Armenia got its backside handed to it recently, it simply wasn't ready for drone warfare. There were of course many other reasons why it lost, but you don't want to hamstring yourself by denying yourself weapons potential enemies are more than happy to use.

Besides, what's going to go wrong with a drone in a purely naval operation? Sure using drones controlled by AI over civilian targets could be very dangerous, but what's it going to do over water?
 

ngatimozart

Super Moderator
Staff member
Verified Defense Pro
Besides, what's going to go wrong with a drone in a purely naval operation? Sure using drones controlled by AI over civilian targets could be very dangerous, but what's it going to do over water?
Sink a cruise liner with 4,000 punters plus crew on it. Or attack a civilian target onshore. Or attack a neutral target. There's plenty that could possibly go wrong over the oggy, so don't be so cavalier about it. Plan for the worst and hope for the best.
 

Musashi_kenshin

Well-Known Member
Sink a cruise liner with 4,000 punters plus crew on it.
Do cruise liners routinely traverse conflict zones? Commercial aircraft regularly have their flight plans diverted around dangerous areas, so why would civilian shipping be any different? If or when China begins its attack on Taiwan, you can bet civilian shipping will stay hundreds of miles away.

Or attack a civilian target onshore.
Unlikely unless the target was relatively near the shoreline. You could as an example limit AI missions to those over open water rather than littoral engagements. Or in support of air defence.

Or attack a neutral target.
Possible, but again the same rules would apply to civilian shipping.

There's plenty that could possibly go wrong over the oggy, so don't be so cavalier about it. Plan for the worst and hope for the best.
Not being cavalier in the slightest, just pointing out that the risks of AI-controlled munitions are highest over densely populated areas. There are risks over naval engagements, but I think in reality they would be minimal. After all the chances of the Royal Navy initiating a naval sneak attack are very low.

Besides this is all assuming AI control actually works. The RN isn't going to deploy AI drones that can become easily confused, otherwise they could turn on allied assets!
 

StobieWan

Super Moderator
Staff member
  • Thread Starter
  • #6
That's how I see things as well. But it's important to remember that we can't just wait until all the little legal niceties are resolved before we start development. We absolutely must be developing these systems now. Otherwise we'll get left behind and end up fighting a truly 21st century conflict with just upgraded 20th century weapons.

When Armenia got its backside handed to it recently, it simply wasn't ready for drone warfare. There were of course many other reasons why it lost, but you don't want to hamstring yourself by denying yourself weapons potential enemies are more than happy to use.

Besides, what's going to go wrong with a drone in a purely naval operation? Sure using drones controlled by AI over civilian targets could be very dangerous, but what's it going to do over water?

There's scope for all sorts of confusion - you just have to look at how badly humans do at times in the fog of war, and extrapolate.


It's fine if you're duking it out in the North Atlantic against the Red Banner Fleet, but a scrap in more populated waters with lots of neighbouring neutrals can get interesting. Witness the shooting down of the Iranian airliner a while back.
 

ngatimozart

Super Moderator
Staff member
Verified Defense Pro
Do cruise liners routinely traverse conflict zones? Commercial aircraft regularly have their flight plans diverted around dangerous areas, so why would civilian shipping be any different? If or when China begins its attack on Taiwan, you can bet civilian shipping will stay hundreds of miles away.



Unlikely unless the target was relatively near the shoreline. You could as an example limit AI missions to those over open water rather than littoral engagements. Or in support of air defence.



Possible, but again the same rules would apply to civilian shipping.



Not being cavalier in the slightest, just pointing out that the risks of AI-controlled munitions are highest over densely populated areas. There are risks over naval engagements, but I think in reality they would be minimal. After all the chances of the Royal Navy initiating a naval sneak attack are very low.

Besides this is all assuming AI control actually works. The RN isn't going to deploy AI drones that can become easily confused, otherwise they could turn on allied assets!
Well let's see. WW2, Atlantic Ocean. Kriegsmarine U-boats attacked neutral ships. In 1941 they attacked a USN destroyer which was clearly identified as a USN warship. IIRC they also sank a liner early on in the piece. Never presume that risks are minimal just because an activity may happen at sea. Who said anything about AI just attacking a cruise ship in wartime? Do you think it would just be used in wartime? What is a large component of NATO naval activity at the moment? It certainly isn't being tied up alongside doing the brass work and drinking gin. It's out in the Red Sea and environs hunting pirates and terrorists etc., and down the eastern end of the Mediterranean.
 

JohnJT

Active Member
There is a line of thought that as most of these engagements consist of data harvested and presented by a machine, to a human, with a recommendation to shoot or not, how much of a shift is it to let the system take the shot? And can you hang around to let the wetware catch up if you're facing hypersonic threats?
That already exists to a certain extent. The Phalanx CIWS in auto mode will fire on any target that enters its programmed area of engagement and meets its set criteria for speed, heading, etc., all without any human input.
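
Purely to illustrate what that kind of rule-based gate amounts to (this is not the actual Phalanx logic - the track fields and thresholds below are invented for the sketch), the auto-engage decision is essentially a handful of comparisons:

Code:
from dataclasses import dataclass

@dataclass
class Track:
    range_m: float       # distance from own ship, metres
    speed_ms: float      # closing speed, metres per second
    bearing_rate: float  # change of bearing, degrees per second

def auto_engage(track: Track) -> bool:
    # Fire only if the track is inside the programmed basket and meets every criterion.
    inside_basket = track.range_m < 3000.0            # assumed engagement range
    fast_closer = track.speed_ms > 250.0              # assumed minimum closing speed
    constant_bearing = abs(track.bearing_rate) < 0.5  # roughly on a collision course
    return inside_basket and fast_closer and constant_bearing

The human contribution happens earlier, when the mount is switched to auto and those parameters are set - after that the machine takes the shot on its own.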
 
Last edited:

Musashi_kenshin

Well-Known Member
Do you think it would just be used in wartime? What is a large component of NATO naval activity at the moment?
If on training exercises, there would be no need to arm the AI-controlled drones with live weapons - simulated strikes would be sufficient. As for anti-piracy operations, pirates pose little threat to whatever base the drones operate from, so arguably you wouldn't need to put AI in control of anything other than surveillance. Anti-terrorist operations over land are one of the areas where I indicated AI might not be appropriate.

As for the examples given by you and Stobie of historical mistakes made, weren't those all caused by human error that was not justifiable in the circumstances?

I am not advocating that humans be taken completely out of the decision-making process. However, there are fewer reasons for an AI to make mistakes if it is programmed appropriately. It doesn't get angry, or tired, or paranoid. It can also be programmed with various safety parameters so that it doesn't fire pre-emptively (i.e. it doesn't go "I may be under threat, better fire first and ask questions later").

For example, the shooting down of Iran Air Flight 655 would probably not have taken place if an AI had been involved. It would have been able to a) assess that a civilian airliner would have been unable to respond to challenges on a military frequency, requiring clear instructions on a civilian channel, and b) cross-reference the time with the scheduled take-offs of passenger aircraft, taking into account time differences.

Similarly, an AI would have been unlikely to react aggressively to PS752 just because Iranian command were jittery about the possibility of a US retaliatory strike. (It would have also identified the direction of travel, which would have automatically ruled out an external threat.)
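
As a rough sketch of the sort of cross-checks being described (the schedule data, tolerances and function names are invented for illustration - no real system is being quoted here), the logic is not much more than:

Code:
from datetime import datetime, timedelta

# Assumed civil departure schedule for the nearby airfield (illustrative only).
SCHEDULED_DEPARTURES_UTC = [datetime(1988, 7, 3, 6, 17)]

def hold_fire_recommended(detected_at: datetime, climbing: bool,
                          heading_away: bool, answered_military_freq: bool) -> bool:
    # Does the detection time line up with a scheduled civil departure?
    near_schedule = any(abs(detected_at - dep) <= timedelta(minutes=30)
                        for dep in SCHEDULED_DEPARTURES_UTC)
    # A civil airliner cannot answer challenges on a military frequency,
    # so silence there is not treated as evidence of hostility.
    looks_civilian = near_schedule and not answered_military_freq
    # A contact still climbing out, or flying away from the ship, is not
    # behaving like an inbound attacker (the direction-of-travel point).
    non_threatening_profile = climbing or heading_away
    return looks_civilian or non_threatening_profile

None of that is exotic; the point is that a machine applies it every single time, whereas a stressed crew may not.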

Again, none of this means AI is or will be perfect. But it has the potential to reduce collateral damage rather than increase it. Like driverless cars. There are always people saying no to them on the basis they assume they'll go rogue, when data suggests they will be safer. More care needs to be taken with AI and military assets, but it doesn't mean they should be ruled out entirely.
 

Musashi_kenshin

Well-Known Member
Those who ignore history are bound to repeat it.
You can't repeat history, by definition.

Rather than cherry-pick a few examples from up to a century ago, a better question is whether cruise liners currently traverse conflict zones.

Naval conflicts can spring up at a moment's notice and liners, freighters, fishing vessels, and ferries will get caught in the crossfire
So when was the last time a civilian passenger ship was sunk in error in a naval conflict? You're not allowed to count ones sunk deliberately that were criticised by other powers.

Besides, I go back to what I said previously. Countless lives have been lost due to human error in just the last decade. It always gets written off as "unfortunate", but no leader ever resigns, let alone is prosecuted. Yet people get their knickers in a twist over theoretical platforms just because they involve AI.

Why are deaths at human hands acceptable yet any due to AI error would be unthinkable? Such a position is not morally justifiable. If we are to say "no innocent deaths at the hands of AI", we should also be saying "no innocent deaths at the hands of humans" and automatically prosecute at all levels when it happens.

But we don't, because we're pragmatic. So let's be pragmatic over AI.

Please do not post links to subscription only articles. What I can read says "may have attacked retreating forces in Libya last year". It is not confirmed and says nothing about making mistakes. Retreating forces may under some circumstances still be legitimate targets, not least because they may be repositioning and preparing for renewed attacks.
 

Git_Kraken

Active Member
You can't repeat history, by definition.

Rather than cherry-pick a few examples from up to a century ago, a better question is whether cruise liners currently traverse conflict zones.



So when was the last time a civilian passenger ship was sunk in error in a naval conflict? You're not allowed to count ones sunk deliberately that were criticised by other powers.

Besides, I go back to what I said previously. Countless lives have been lost due to human error in just the last decade. It always gets written off as "unfortunate", but no leader ever resigns, let alone is prosecuted. Yet people get their knickers in a twist over theoretical platforms just because they involve AI.

Why are deaths at human hands acceptable yet any due to AI error would be unthinkable? Such a position is not morally justifiable. If we are to say "no innocent deaths at the hands of AI", we should also be saying "no innocent deaths at the hands of humans" and automatically prosecute at all levels when it happens.

But we don't, because we're pragmatic. So let's be pragmatic over AI.



Please do not post links to subscription only articles. What I can read says "may have attacked retreating forces in Libya last year". It is not confirmed and says nothing about making mistakes. Retreating forces may under some circumstances still be legitimate targets, not least because they may be repositioning and preparing for renewed attacks.
Are you intentionally ignoring the lesson in that phrase? If you don't learn from your past you will make the same mistakes.

Autonomous military drones may have attacked humans, UN says
or this one
An incident raises concerns over autonomous killer drones - DroneDJ

Also, you asked for examples of when a passenger ship was sunk in a warzone, and I gave you two that had strategic impacts on the war, in particular the Lusitania. And as for "recent examples", there are plenty of ships that have been hit by missiles that are not warships. It's splitting hairs to focus only on cruise ships. There is plenty of civilian traffic in conflict zones and it doesn't matter if it's a cruise ship or not. It's still civilian traffic.

As for people making mistakes, when a person makes a mistake they are held accountable for that mistake. Who's accountable for an AI? Do they charge the engineers or the software designers? Accountability is important.
 

Musashi_kenshin

Well-Known Member
As for people making mistakes, when a person makes a mistake they are held accountable for that mistake.
Can you give me the names of the military personnel who have been court-martialed and imprisoned for civilian casualties during the aerial campaign against Daesh? Or against any Syrian forces for that matter.

Who's accountable for an AI?
The people who authorised its use and/or that gave the AI its orders.
 

Git_Kraken

Active Member
Can you give me the names of the military personnel who have been court-martialed and imprisoned for civilian casualties during the aerial campaign against Daesh? Or against any Syrian forces for that matter.
You seem to be reaching irrelevant conclusions here. There are plenty of people who are court-martialed for making errors. I'm sure you could do some research yourself and find some. The fact that some are not isn't the point. They are still accountable whether they are actually held to account or not. A computer is not accountable because it's an object.
 

t68

Well-Known Member
You seem to be reaching irrelevant conclusions here. There are plenty of people who are court-martialed for making errors. I'm sure you could do some research yourself and find some. The fact that some are not isn't the point. They are still accountable whether they are actually held to account or not. A computer is not accountable because it's an object.

Someone will always be accountable, as the computer will only act on the data that is inputted into it.

So if it makes a mistake, it will be put down to the parameters that were set up.
 

ngatimozart

Super Moderator
Staff member
Verified Defense Pro
Someone will always be accountable, as the computer will only act on the data that is inputted into it.

So if it makes a mistake, it will be put down to the parameters that were set up.
Not necessarily, because by definition AI is self-taught and always learning. It could be argued that if that is the case then it is capable of reasoning. If it is accepted that it is reasoning, then it does have to validate its decision-making process, just like we do. So how do we define reasoning?

That depends upon what kind of philosophy you adhere to. I would define reasoning as the ability to reach a logical conclusion from the evidence or data presented. If my definition is acceptable then AI would certainly be capable of reasoning, because that is what it does. I believe that it is a conceit and arrogance to presume that Homo sapiens is the only species capable of reasoning, because we definitely see it in other species where they make complex decisions. Monkeys and dolphins come to mind. Since we have seen fit to develop and build AI, we have now created another reasoning species. Whether it ever becomes sentient is another story, and I certainly hope not, because if it does we may have sown the seeds of our own destruction. Frank Herbert may have written a prophetic vision of our future WRT AI.
 

cdxbow

Well-Known Member
We have already seen machines programmed to kill humans without a human in the kill chain; they're called land mines. It's simple machine logic: if the weight on the pressure sensor is that of a human, then it explodes. No humans in the kill chain and the decision made by a machine. True, a very simple machine, but nonetheless a machine.
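
Written out as code (the threshold value is invented), that whole 'decision' is a single comparison:

Code:
TRIGGER_THRESHOLD_KG = 10.0  # assumed minimum load for an anti-personnel fuze

def mine_fires(load_kg: float) -> bool:
    # The entire "assessment" is one comparison; nothing else is weighed up.
    return load_kg >= TRIGGER_THRESHOLD_KG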

The disasters that have been caused by mines should be salutary lessons as we leverage machine logic in our weapons.
 

Musashi_kenshin

Well-Known Member
You seem to be reaching irrelevant conclusions here. There are plenty of people who are court-martialed for making errors.
But it's still rare when it happens during a military operation. That's why I asked for examples of personnel who had action taken against them during the Syria air campaigns, because that's a recent operation.

This is why I simply don't believe the idea that if we keep decision-making 100% human-led there will be better outcomes and more accountability - because people dodge accountability all the time now, writing civilian deaths off as collateral damage.

The disasters that have been caused by mines should be salutary lessons as we leverage machine logic in our weapons.
That's a false comparison. Mines do not think or assess information, they react in a mechanical way, just as a gun does. Should we ban automatic weapons and return to single-shot rifles?

Whether it ever becomes sentient is another story, and I certainly hope not, because if it does we may have sown the seeds of our own destruction.
If it happens the answer will be to offer AI mechanical bodies and recognise them as citizens, rather than immediately try to destroy them out of fear.
 

cdxbow

Well-Known Member
That's a false comparison. Mines do not think or assess information, they react in a mechanical way, just as a gun does. Should we ban automatic weapons and return to single-shot rifles?
I'm sorry, you are wrong. Mines do assess information, i.e. how heavy the pressure input is. That is an 'assessment'; true, it is mechanical and simple, but nonetheless an assessment. So a mine is a system, left to its own devices, that assesses an input and then decides, by its own logic, whether it explodes and kills people. That is an autonomous machine that decides on its own to kill. The comparison to automatic weapons is both wrong and irrelevant.

And yes, I still believe they offer a salutary lesson as we include machine learning and autonomy in our weapons.
 