
A vessel’s captain (a pilot or ship’s officer) sets sights on a target; the vessel’s computer, not the captain, immediately calculates the missile’s aim point and firing time. The computer fires the missile, and circuitry inside the missile continues steering it until it destroys the target.
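The post doesn’t describe how such a fire-control computer actually works, but the core calculation is well known: lead the target so the missile and target arrive at the same point at the same time. Here is a minimal sketch in Python of that constant-velocity intercept calculation. The function name, parameters, and example numbers are all illustrative, not taken from any real weapons system.

```python
import math

def intercept_solution(target_pos, target_vel, missile_speed):
    """Find when and where a constant-speed missile fired from the
    origin can meet a target moving at constant velocity.

    Solves |target_pos + target_vel * t| = missile_speed * t, which
    reduces to a quadratic in t. Returns (time, intercept_point),
    or None if no intercept is possible.
    """
    px, py = target_pos
    vx, vy = target_vel

    # Quadratic coefficients: (|v|^2 - s^2) t^2 + 2 (p . v) t + |p|^2 = 0
    a = vx * vx + vy * vy - missile_speed ** 2
    b = 2.0 * (px * vx + py * vy)
    c = px * px + py * py

    if abs(a) < 1e-12:                 # missile and target speeds match
        if abs(b) < 1e-12:
            return None
        t = -c / b
    else:
        disc = b * b - 4.0 * a * c
        if disc < 0.0:
            return None                # target too fast to ever catch
        root = math.sqrt(disc)
        candidates = [(-b - root) / (2.0 * a), (-b + root) / (2.0 * a)]
        positive = [t for t in candidates if t > 0.0]
        if not positive:
            return None
        t = min(positive)              # earliest feasible intercept

    if t <= 0.0:
        return None
    return t, (px + vx * t, py + vy * t)

if __name__ == "__main__":
    # Hypothetical example: target 10 km east, moving north at 300 m/s;
    # missile flies at 800 m/s.
    result = intercept_solution((10_000.0, 0.0), (0.0, 300.0), 800.0)
    if result:
        t, point = result
        print(f"fire now; impact in {t:.1f} s at {point}")
```

A real system layers radar filtering, guidance updates, and countermeasure logic on top of this, but even this simple geometry is “a simple form of AI” only in the loosest sense: it is fixed arithmetic, with no learning or judgment involved.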
This computer uses a simple form of AI. A future AI could replace the human captain entirely, deciding (by some strategy unknowable to us) that a preemptive attack is needed, based on data collected by its military. That military could have “trained” its AI on any number of selected historical or fictional scenarios and priorities, so the destructive outcome would be unknowable even to the humans who built the system. Two, three, or four militaries, each running its own uniquely trained, unethical, unknowable AI war system, could produce unimaginable chaos.
The U.S. government can’t stop other governments from developing physical, biological, or AI warfare tools, so it is racing to build its own, “just in case.” U.S. companies and contractors are lobbying for this technology for obvious economic reasons.
This apocalyptic thinking is explored in David Chapman’s new, free online book, “Better without AI”. The link goes to the book’s fifth page, which also includes an excellent white-blood-cell video illustrating his points.
Question to readers: What do you think?
The movie Dr. Strangelove demonstrates the problem of AI in command of devastating weapons. It is at best an automaton, not “intelligence” as such.
@rautakyy: As Chapman’s book describes, the next AI will make the designation of “intelligence” a fuzzier term, if it is not already.
A very scary scenario! Sadly, technology never seems to go back, only forward!