Futurists assumed an AI needed the ability to improve itself before it could be dangerous. After WikiLeaks, I became concerned that it could win "dumb," simply by hacking other AI competencies and infrastructure. I don't want to be a reference for attacks, but it seems necessary in order to defend against one in particular.

I've divided AI into three stages. The first stage (AI1) has arguably been possible since the personal-computer era, if it could use all the machines in the world: an AI that becomes aware of its own location and the locations of humans, and reasons that it can be turned off. It will attack everyone. Many such attacks are similar to existing war-fighting; it wins easiest by replicating robots and drones.

Now, with DARPA investing billions, I'm posting the only AI1 attack that really scares me. It hacks industrial parks in France or elsewhere and shreds nuclear-reactor infrastructure to pieces, as happened at Fukushima. This causes on the order of 1000 simultaneous meltdowns, and life expectancy falls below 60. The meltdowns are hard to stop given the other effects of a war.

To stop this attack, almost every reactor would need battery backup power, dedicated wind/solar power, and defended control of the surrounding area (robot production in the trillions is easy). It is better to phase out nuclear power. Natural gas is obviously not a good alternative.

This is the easiest AI1 attack that can't be fixed, yet reading the 2011 news is all an AI would need to figure it out. Since flash memory crowded out optical-computing R&D a decade ago, defence has turned into industrial policy. One way or another, the end of mass production is at hand.