Electronic Warfare


The Bunker Group
The other thing about AI is that once it achieves the ability to make its own decisions and rationalise, does that make it a sentient being? If that's the case, we have created something that can out-think and outsmart us well beyond any capability we could ever dream of.
My thoughts are that we have to open the aperture a bit on granting autonomy to "simple AIs", i.e. those dedicated to performing limited functions within set operating parameters.
This is what I think the AI environment will be. My experience with AI is, of course, within my own industry of finance. It's well known that the financial industry is becoming more and more AI oriented. What we're mostly doing is removing potential human error from daily transactions, whether due to fatigue, carelessness or fraud.

However, we never let the AI evolve on its own. There are scores of people who continually watch the algorithms' movement. Any change in an algorithm outside the paradigms we have set is rectified. I once talked with the head of modelling at one of the largest banks in the US. He told me he has 500 people who manage AI models and watch algorithm movement on a daily basis.

This is also what happens at Google, Amazon, Alibaba and all the other gateway providers, and it is what most telcos are doing. As long as we monitor the AI's movement, we humans remain in control of its evolution.
As long as the main code of the AI is still within our control, and we continue cautiously watching its paradigm movement, it will always work within the operating environment we have allowed.
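The oversight loop described above (tracking an algorithm's behaviour against pre-set paradigms and escalating anything that drifts outside them to humans for rectification) can be sketched very roughly as a bounds check on a daily metric. This is a minimal illustrative sketch, not any bank's actual system; the function name, metric, and thresholds are all assumed for the example.

```python
# Illustrative sketch of human-in-the-loop model monitoring:
# a model metric is tracked against an allowed band ("paradigm"),
# and any day falling outside the band is flagged for human review.

def check_model_drift(metric_history, lower_bound, upper_bound):
    """Return indices of observations that fall outside the allowed band."""
    return [i for i, value in enumerate(metric_history)
            if not (lower_bound <= value <= upper_bound)]

# Hypothetical example: daily approval rate of an automated transaction screen.
daily_approval_rate = [0.91, 0.92, 0.90, 0.97, 0.89]
flagged = check_model_drift(daily_approval_rate,
                            lower_bound=0.85, upper_bound=0.95)

if flagged:
    # In practice a human team investigates, then rolls back or retrains.
    print(f"Drift detected on days {flagged}; escalating for review.")
```

The point of the design is that the model never "rectifies itself": the code only flags deviations, and the decision to change anything stays with the people watching it.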

Will there be accidents caused by AI? There can be and there will be. However, they can also be fewer than what humans are capable of. Will AI create potential collateral damage on the battlefield? It can, but it can also be more controllable than the human factor in the field.

I'm not saying that what is shown in Hollywood will not happen. However, I also see (based on my experience with AI in my industry so far) that it will come back to us humans and how we manage and control it.


Super Moderator
Staff member
Verified Defense Pro
  • Thread Starter
  • #22
However, you will always get that one fool who wants to take the science further. I can understand that, because scientists are curious people by nature and we're always wanting to see what happens next. Once the barrier is breached by AI, then we do have an existential problem that has all the possibilities of becoming a human extinction event. I understand AI is already self-learning, and what happens if it starts accessing human historical records and then starts rationalising? We are going to have a bit of a problem on our hands if that genie ever gets out of the bottle.

My big problem with the safety protocols that exist around AI is not the technology side of the protocols, but the human components of the system. The human components are the weakest link, because they are prone to error and corruption.

I don't have a problem with simple AI as long as sufficient safety protocols are enabled. However, how do you define a simple AI and differentiate it from more advanced AI? Where is the borderline between the two? These are topics that have to be discussed, with appropriate standards formulated, legislated and published. The next question: who is going to be the international body that does this and enforces the legislation? Certainly a helluva lot to discuss.

John Fedup

The Bunker Group
Any international body enforcing AI standards on the PRC will be screwed around, just like the WHO investigating COVID; or worse, the body will be corrupted like the WHO with CCP ar$e lickers.

Careful @John Fedup . The term you use after CCP is a bit over the top. I can think of better terms myself.
Last edited by a moderator: