AI fact checks can increase belief in false headlines, study finds

Phys.org  December 4, 2024
Recent AI language models have shown impressive ability on fact-checking tasks, but how humans interact with the fact-checking information these models provide is unclear. Researchers at Indiana University investigated the impact of fact-checking information generated by a popular large language model (LLM) on belief in, and intent to share, political news headlines in a preregistered randomized controlled experiment. Although the LLM accurately identified most false headlines (90%), the researchers found that this information did not significantly improve participants’ ability to discern headline accuracy or to share accurate news. In contrast, viewing human-generated fact checks enhanced discernment in both cases. Subsequent analysis revealed that the AI fact-checker was harmful in specific cases: it decreased belief in true headlines that it mislabeled as false, and it increased belief in false headlines that it was unsure about. However, AI fact-checking information did increase sharing intent for correctly labeled true headlines. According to the researchers, these findings highlight an important source of potential harm stemming from AI applications and underscore the critical need for policies to prevent or mitigate such unintended consequences… read more. TECHNICAL ARTICLE
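The study’s key outcome, discernment, is a standard measure in misinformation research: the gap between responses to true headlines and responses to false ones. The sketch below is a minimal illustration of how belief discernment might be computed per experimental condition; the data, condition names, and function are hypothetical, not the authors’ analysis code.

```python
# Minimal sketch of how "discernment" is typically computed in studies like
# this one: mean belief in true headlines minus mean belief in false ones.
# All names and data here are hypothetical, not from the published study.
from statistics import mean

# Each record: a participant's belief rating (e.g., on a 1-7 scale), the
# headline's ground-truth veracity, and the assigned fact-checking condition.
responses = [
    {"condition": "control",     "veracity": "true",  "belief": 5},
    {"condition": "control",     "veracity": "false", "belief": 4},
    {"condition": "ai_check",    "veracity": "true",  "belief": 4},
    {"condition": "ai_check",    "veracity": "false", "belief": 4},
    {"condition": "human_check", "veracity": "true",  "belief": 6},
    {"condition": "human_check", "veracity": "false", "belief": 2},
]

def belief_discernment(records, condition):
    """Mean belief in true headlines minus mean belief in false headlines."""
    true_beliefs = [r["belief"] for r in records
                    if r["condition"] == condition and r["veracity"] == "true"]
    false_beliefs = [r["belief"] for r in records
                     if r["condition"] == condition and r["veracity"] == "false"]
    return mean(true_beliefs) - mean(false_beliefs)

for cond in ("control", "ai_check", "human_check"):
    print(cond, belief_discernment(responses, cond))
```

A higher value means participants believed true headlines more than false ones. On this measure, the study found that human-generated fact checks widened the gap while the LLM’s fact checks did not.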
