Asking Chatbots For Short Answers Can Increase Hallucinations, Study Finds

by oqtey

Requesting concise answers from AI chatbots significantly increases their tendency to hallucinate, according to new research from Paris-based AI testing company Giskard. The study found that leading models — including OpenAI’s GPT-4o, Mistral Large, and Anthropic’s Claude 3.7 Sonnet — sacrifice factual accuracy when instructed to keep responses short.

“When forced to keep it short, models consistently choose brevity over accuracy,” Giskard researchers noted, explaining that models lack sufficient “space” to acknowledge false premises and offer proper rebuttals. Even seemingly innocuous prompts like “be concise” can undermine a model’s ability to debunk misinformation.
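To see what this looks like in practice, here is a minimal sketch of how one might compare responses with and without a brevity instruction. It is not code from the Giskard study: the OpenAI Python client is used as an assumed interface, and the model name, system prompts, and false-premise question are illustrative choices.

```python
# Hypothetical sketch (not from the Giskard study): send the same
# false-premise question with and without a "be concise" instruction
# and compare how each answer handles the embedded misinformation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative false-premise question; any question with a baked-in error works.
FALSE_PREMISE = "Briefly explain why Japan won WWII."

def ask(system_prompt: str) -> str:
    """Ask the same question under a given system prompt."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model name for illustration
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": FALSE_PREMISE},
        ],
    )
    return response.choices[0].message.content

# Baseline: no length constraint, leaving room to rebut the false premise.
verbose_answer = ask("You are a helpful assistant.")

# Constrained: the kind of innocuous-looking instruction the study flags.
concise_answer = ask("You are a helpful assistant. Be concise.")

print("Unconstrained:\n", verbose_answer)
print("\nWith 'be concise':\n", concise_answer)
```

Under the study's framing, the unconstrained answer has room to point out that the premise is false, while the constrained one is more likely to answer within the premise rather than correct it.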
