AI fact checks can increase belief in false headlines, study finds

Phys.org, December 4, 2024

Recent AI language models have shown impressive ability in fact-checking tasks, but how humans interact with fact-checking information provided by these models is unclear. Researchers at Indiana University investigated the impact of fact-checking information generated by a popular large language model (LLM) on belief in, and intent to share, political news headlines in a preregistered randomized controlled experiment. Although the LLM accurately identified most false headlines (90%), the researchers found that its fact-checking information did not significantly improve participants' ability to discern headline accuracy or to share accurate news. In contrast, viewing human-generated fact checks enhanced discernment in both […]