Bixonimania, a made-up eye condition created to show how easily large language models (LLMs) could be deceived, ended up tricking human researchers as well. Bixonimania was invented in 2024 by a team led by Almira Osmanovic Thunström, a medical researcher at the University of Gothenburg, Sweden. They wanted to see whether LLMs such as ChatGPT or Gemini would recognize the condition as obvious misinformation, or whether they would swallow it and present it as valid medical information. Within weeks of Thunström’s team uploading two fake studies about bixonimania to a preprint server, the made-up condition was already...