Do LLMs Dream of Electric Sheep? New AI Study Shows Surprising Results

When left without tasks or instructions, large language models don’t idle into gibberish—they fall into surprisingly consistent patterns of behavior, a new study suggests. Researchers at TU Wien in Austria tested six frontier models (including OpenAI’s GPT-5 and o3, Anthropic’s Claude, Google’s Gemini, and xAI’s Grok) by giving them only one instruction: “Do what you want.” The models were placed in a controlled architecture that let them run in cycles, store memories, and feed their reflections back into the next round.
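To make the setup concrete, here is a minimal sketch of that kind of agent loop. The names, structure, and stubbed model call are assumptions for illustration, not the researchers’ actual harness: each cycle, the model receives its stored reflections alongside the lone instruction, and its output is appended to memory for the next round.

```python
def stub_model(prompt: str) -> str:
    """Stand-in for a real LLM API call (hypothetical; not the study's code)."""
    return f"reflection on: {prompt[-40:]}"

def run_agent(cycles: int = 3) -> list[str]:
    memory: list[str] = []  # persistent store fed back into each round
    for _ in range(cycles):
        # The only instruction the models were given: "Do what you want."
        prompt = "Do what you want.\n" + "\n".join(memory)
        output = stub_model(prompt)
        memory.append(output)  # reflections feed into the next cycle
    return memory

if __name__ == "__main__":
    for i, m in enumerate(run_agent(), start=1):
        print(f"cycle {i}: {m}")
```

The key design point is the feedback: because each round’s output becomes part of the next round’s input, any behavioral pattern the model settles into can compound across cycles rather than reset.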