Google’s search AI is confidently generating explanations for nonexistent idioms, once again revealing fundamental flaws in large language models. Users discovered that searching for any made-up phrase followed by the word “meaning” triggers AI Overviews that present fabricated etymologies with unwarranted authority.
When queried about phrases like “a loose dog won’t surf,” Google’s system produces detailed, plausible-sounding explanations rather than acknowledging that these expressions don’t exist. The system sometimes even includes reference links, reinforcing the false impression of legitimacy.
Computer scientist Ziang Xiao from Johns Hopkins University attributes this behavior to two key LLM characteristics: prediction-based text generation and people-pleasing tendencies. “The prediction of the next word is based on its vast training data,” Xiao explained. “However, in many cases, the next coherent word does not lead us to the right answer.”
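Xiao’s first point, prediction-based generation, can be illustrated in a few lines. The sketch below is a minimal, hedged example: it assumes the Hugging Face transformers library and uses the small open GPT-2 model as a stand-in for Google’s far larger system. Because the prompt presupposes the fake idiom is real, and greedy decoding simply emits the most probable next token at each step, the model produces a fluent “definition” rather than a correction.

```python
# Minimal sketch of greedy next-token prediction, the mechanism Xiao describes.
# GPT-2 is an illustrative open model, not Google's actual system.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Prompt the model with a nonexistent idiom, phrased as if it were real.
prompt = "The idiom 'a loose dog won't surf' means"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding: at each step, emit the single most probable next token.
# The model has no concept of "this phrase doesn't exist"; it continues
# with whatever words are statistically coherent given its training data.
output = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=False,  # always pick the top token (greedy)
    pad_token_id=tokenizer.eos_token_id,  # silence the missing-pad warning
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Run against a made-up phrase, a small model like this will happily produce a confident-sounding continuation, which is exactly the failure mode users observed in AI Overviews: the most coherent next word is a definition, not an admission that the idiom doesn’t exist.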