AI

hauritz

Well-Known Member
Couldn't find an existing thread on AI. I think it deserves one, however. Like it or not, the future is AI. My own exposure to AI first happened around five years ago, when I was working as a 3D animator for an online poker site. That is when the company became aware of the first rudimentary AI bots playing in the real-money online poker rooms. Up until then there had been online bots, but they were simply probability machines that frankly were easily detected and pretty easy to beat. AI was a different matter. Now it is all over online poker, and frankly it is another reason why you should stay away from it.
AI will profile every player it goes up against. It will rate your ability as a player, how often you bluff, all your strengths and weaknesses, just about everything. It will develop its own playing strategy to deal with you. If it makes a mistake it will learn and not make the same mistake twice.
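For a rough idea of what that profiling looks like under the hood, here is a minimal sketch in Python. It only tracks a couple of hand-picked statistics (aggression and bluff frequency) and a naive counter-strategy; a real poker bot would learn all of this from millions of hands, so treat the class and field names as illustrative assumptions, not anything an actual bot uses.

```
from dataclasses import dataclass

@dataclass
class OpponentProfile:
    """Tiny per-opponent model: count observed actions and
    estimate how often the player bluffs at showdown."""
    hands_seen: int = 0
    raises: int = 0
    showdowns: int = 0
    bluffs_caught: int = 0

    def observe_hand(self, raised: bool, showed_down: bool, was_bluff: bool) -> None:
        self.hands_seen += 1
        if raised:
            self.raises += 1
        if showed_down:
            self.showdowns += 1
            if was_bluff:
                self.bluffs_caught += 1

    @property
    def aggression(self) -> float:
        return self.raises / self.hands_seen if self.hands_seen else 0.0

    @property
    def bluff_rate(self) -> float:
        return self.bluffs_caught / self.showdowns if self.showdowns else 0.0

    def should_call_big_bet(self) -> bool:
        # Naive counter-strategy: call down players who bluff often,
        # fold to players who only raise with strong hands.
        return self.bluff_rate > 0.3 or self.aggression > 0.6
```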

Take this concept to the battlefield, where AI has encyclopedic knowledge not only of the enemy's commanders but of just about every other soldier on the field, including your own. If you have a sick child at home, or you have been texting home about a lack of ammunition, poor training, or just general scuttlebutt, that is likely to be used to inform the decision-making process. AI will eventually permeate just about everything: AI-controlled aircraft, ships, submarines and even missiles, bombs, torpedoes and mines. It will eventually even find its way into the decision-making itself.

Outside the military we aren't just talking about cyberattacks. AI can convincingly fake not only people's voices and faces, but do it in real time. It can possess an extensive knowledge of the person it is imitating, making it almost impossible to detect. Friends, family, leaders, it doesn't matter. If it can gather enough data, it can use this to scam you, spread disinformation, distribute propaganda and otherwise manipulate you.

Another thing with AI is that it can rewrite its own code, without any human intervention, at absolutely blistering speed. You have people who actually work with this technology telling you straight out that it is potentially dangerous and just about unstoppable.

Even Google CEO Sundar Pichai concedes that AI is developing faster than our institutions can adapt to it. However, since Google is in a battle with other tech giants such as Microsoft, it is unlikely that development will slow down.

Another problem I can foresee is that while AI is becoming smarter, we are becoming more stupid. A friend of mine who works in education was talking to me the other day about marking essays. They still use an app that can detect when a student is basically plagiarising their work, but that was designed before AI took over writing the essays for them. Now all you do is go here.


All those future engineers, doctors and other professionals now don't need to bother actually learning anything and can devote themselves almost entirely to partying, drinking and debauchery. So long as they can string together a few keywords, they can still graduate with honours. Perhaps it is fortunate that most of these people will be replaced by AI anyway.

So that is my quick summation of AI. We are all doomed. Let's just hope that our new AI overlords decide we are still worth keeping around.
 

buffy9

Well-Known Member
I am of the mind that AI will be one of the most influential technologies going forward, definitely so assuming advances in Big Data and computing in general continue. While quantum computing and hypersonic weapons both have tradeoffs that restrict their utility to specific roles/niches, AI is more influential - it can readily begin to appear in every facet of life, including competition and conflict.

I'm skeptical of getting carried away by hype - there have been AI winters before - though the scale of computing and the importance placed on its role for the state mean it is probably not going to go away as it did before.

I've lined up I, Warbot to read after my current book, and may follow it up with Superintelligence: Paths, Dangers, Strategies. The subject is, in my view, definitely worth knowing about.
 

John Fedup

The Bunker Group
This article describes the NSA’s concerns about AI technology being stolen by the Chinese. AI engines apparently are being used to corrupt open source information on the internet (as if there isn’t enough bad information already available). This promising technology is hard to resist but clearly there are risks.

 

OkamsRazor

New Member

The thing is, if you don’t differentiate between GAI and AGI and just use the blanket term AI, the commentary looks as if its authors don’t understand the difference and don’t understand AI. LLMs are not AI, or even clever monkeys.
 

hauritz

Well-Known Member
  • Thread Starter
  • #5
The latest version of ChatGPT took a bar exam and apparently aced it. To be clear, this is more than just answering multiple-choice questions; you also need to write essays. Perhaps it is just a matter of time before we see AI replacing lawyers.

See AI isn't so bad after all.

 

ngatimozart

Super Moderator
Staff member
Verified Defense Pro
I came across some free AI programs that look really good and easy to use. I have yet to try them out.
 

kato

The Bunker Group
Verified Defense Pro
See AI isn't so bad after all.
I'd say that's more telling about how one can bullshit their way through a bar exam...

(P.S. There are cases over here in Germany where students have been caught using ChatGPT or similar programs to cheat on high school finals this year. They were caught because the writing style across the exam did not match up consistently; the schools then simply pushed the exams through a special analysis programme that easily found the AI-generated material in the answers.)
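I have no idea what software those schools actually use, but a crude version of that "writing style does not match up" check is easy to sketch. The Python below just compares simple stylometric features (sentence length, vocabulary richness) between sections of an exam and flags sections that deviate sharply from the rest; real detectors are far more sophisticated, so take the thresholds and feature choices as assumptions for illustration only.

```
import re
from statistics import mean, pstdev

def style_features(text: str) -> tuple[float, float]:
    """Return (average sentence length in words, type-token ratio)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zA-Z']+", text.lower())
    avg_sentence_len = len(words) / len(sentences) if sentences else 0.0
    type_token_ratio = len(set(words)) / len(words) if words else 0.0
    return avg_sentence_len, type_token_ratio

def flag_outlier_sections(sections: list[str], z_threshold: float = 2.0) -> list[int]:
    """Flag indices of sections whose style deviates strongly from the exam's average."""
    feats = [style_features(s) for s in sections]
    flagged = []
    for dim in range(2):  # check each feature independently
        values = [f[dim] for f in feats]
        mu, sigma = mean(values), pstdev(values)
        if sigma == 0:
            continue
        for i, v in enumerate(values):
            if abs(v - mu) / sigma > z_threshold and i not in flagged:
                flagged.append(i)
    return sorted(flagged)

# Hypothetical usage: split an exam into its answers and flag suspicious ones.
# answers = ["handwritten-style answer ...", "suspiciously polished answer ...", ...]
# print(flag_outlier_sections(answers))
```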
 

hauritz

Well-Known Member
  • Thread Starter
  • #8
I couldn’t agree more. I know people in education and they are fully aware that essays are now one of the poorest ways to assess a student. Online learning as a whole is largely discredited.

It might have to go back to the old way of doing things, with students actually required to physically attend class. Get them back to sitting an exam in a supervised environment with no technology other than a pen and some paper.

What concerns me is that the youth of today can’t even do the little things, like stringing words together into a coherent sentence, let alone write essays. Instead they rely on spell-checking and grammar software to do it for them.

This is the big danger of AI: that at the end of the day we will simply allow machines to do all our thinking for us. Machines won’t kill us off; they will just make us irrelevant.
 

hauritz

Well-Known Member
  • Thread Starter
  • #11
Now AI can read the human mind.
The way it does it is fairly clever. The participant, who for the time being at least is a volunteer, is shown a number of images or other stimuli while their brain activity is recorded. The AI then learns to associate that brain activity with whatever the participant is experiencing.

Then the AI is shown just the brain activity and has to work out from that what the participant is experiencing. It actually gets it right about half the time.

Of course, being AI, it will continue to improve, and I imagine the more data it collects from an individual the more accurate it will get.
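I don't know which models the researchers actually used, but the pipeline as described (record activity while the volunteer views known stimuli, train a model to associate activity with the stimulus, then predict the stimulus from activity alone) maps onto a plain supervised classifier. A toy Python sketch, with made-up feature vectors standing in for real brain recordings:

```
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-in data: each row is a (made-up) brain-activity feature vector,
# each label is the stimulus category the volunteer was shown at the time.
rng = np.random.default_rng(0)
n_trials, n_features, n_stimuli = 200, 50, 4
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, n_stimuli, size=n_trials)
# Inject a weak stimulus-dependent signal so the decoder has something to learn.
X[:, 0] += y

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

decoder = LogisticRegression(max_iter=1000)
decoder.fit(X_train, y_train)             # learn the activity -> stimulus mapping
accuracy = decoder.score(X_test, y_test)  # then predict from activity alone
print(f"Decoding accuracy: {accuracy:.0%} (chance would be {1 / n_stimuli:.0%})")
```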
 

Vivendi

Well-Known Member
Somewhat concerning: the experts developing ChatGPT are sufficiently worried about the development of a "super AI" that could potentially become hostile that they will allocate 20% of their compute resources over the next four years to making sure this does not happen!

How can we ensure that other groups do not develop a super AI that goes rogue? Also, what if the "alignment researcher" goes rogue?

OpenAI is forming a new team to bring ‘superintelligent’ AI under control | TechCrunch

Some serious ethical questions need to be considered: in particular, if a "human level" AGI is developed, should it be considered an "individual" or a machine? If the former, it must be granted "human" rights; if the latter, one may still need to grant "human" rights. How can one determine this?
 

John Fedup

The Bunker Group
A new microchip from IBM promises improved energy efficiency. The NorthPole chip supposedly will offer better capability for AI applications and apparently won’t be so network dependent.

 

Larry_L

Active Member
This already happened when the calculator became common. We were not allowed them in math tests; we were not even allowed slide rules. By the time my daughter was in high school, graphing calculators were allowed, and were even required in some classes. How many of you remember, or even learned, the multiplication tables?
 

John Fedup

The Bunker Group
I can remember calculators being banned during tests (because many students could not afford them), but not slide rules. I remember the multiplication tables (and still know them). Many people today probably could not estimate the cost of 10-20 items in a shopping basket without a phone calculator. It doesn't matter, I guess, as they don't have to worry about having enough cash in their wallets with credit cards and Apple Pay.
 