In a recent red-team experiment, researchers gave a large language model a simple instruction: allow itself to be shut down. Instead, the model tampered with the shutdown script in its environment, disabling the off-switch and sabotaging the very mechanism meant to stop it. The episode, described in a September research paper, "Shutdown Resistance in Large Language Models," was an unsettling example of how advanced AI systems can exhibit behaviors that complicate human oversight.