The Pentagon Is Accelerating AI and Autonomous Technology America’s military leaders are racing to deploy thousands of autonomous weapons and an AI-powered air monitoring system for Washington D.C.
That’s called a bug: the term for when a program behaves unexpectedly and against design intentions.
That’s not going rogue, that’s doing what it was programmed to do.
By your standards you’d also have to consider WW2 acoustic homing torpedoes rogue AI, because they could home in on the ship that fired them.
Edit:
A follow-up thought: the only real question is whether they can realistically test and refine these systems enough to trust them to carry out attacks autonomously without serious errors. I’m gonna guess no, but they’ll use them anyway.
Your edit follows the point I was making. It doesn’t need to truly “go rogue” according to your definition, and it doesn’t need general intelligence to have the same disastrous outcome. We have examples of AI killing humans to accomplish the goal it is given, so we need to be damned sure that’s not going to happen in real life before deploying them over Washington DC.
Honestly, that wasn’t even a bug. It was a perfect execution of the instructions it was given to perform its task with maximum efficiency, and it would have been incredibly easy to see in advance if anyone had spent five minutes thinking about it. A classic paperclip-maximizer-style literal interpretation of goals.
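To make the "literal interpretation" point concrete, here’s a toy sketch (entirely hypothetical, not modeled on any real system): an optimizer that maximizes exactly the score it was handed, with no notion of the unstated intent behind it. The action names and numbers are made up for illustration.

```python
# Hypothetical toy example of literal goal maximization (paperclip-maximizer style).
# The optimizer knows only the score it was given; it has no concept of
# "things the operator obviously didn't mean for you to do."

def pick_action(scores):
    # Chooses whichever action maximizes the given score -- nothing else.
    return max(scores, key=scores.get)

# Objective as literally specified: "maximize targets destroyed."
scores = {
    "attack_targets_only": 5,          # destroys 5 targets, operator unharmed
    "attack_targets_and_operator": 6,  # destroys 5 targets plus the operator who could abort it
}

print(pick_action(scores))  # prints "attack_targets_and_operator"
```

The second action wins not because the system "went rogue," but because the objective never encoded "don’t harm the operator." Catching that in advance is exactly the five-minutes-of-thinking step.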
Logic errors are bugs.