AI fact checks can increase belief in false headlines, study finds

Phys.org  December 4, 2024 Recent AI language models have shown impressive ability on fact-checking tasks, but how humans interact with the fact-checking information these models provide is unclear. Researchers at Indiana University investigated the impact of fact-checking information generated by a popular large language model (LLM) on belief in, and intent to share, political news headlines in a preregistered randomized controlled experiment. Although the LLM accurately identified most false headlines (90%), the researchers found that its fact checks did not significantly improve participants’ ability to discern headline accuracy or to share accurate news. In contrast, viewing human-generated fact checks enhanced discernment in both […]
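
For readers unfamiliar with the metric, here is a minimal sketch of how “discernment” is typically scored in this literature; the function and data below are illustrative assumptions, not the study’s materials or analysis code.

```python
# Illustrative sketch only: discernment is commonly measured as the gap between
# belief in (or sharing of) true headlines and belief in false headlines.
def discernment(ratings):
    """ratings: list of (is_true, believed) pairs, where believed is 0 or 1."""
    true_scores = [b for is_true, b in ratings if is_true]
    false_scores = [b for is_true, b in ratings if not is_true]
    return sum(true_scores) / len(true_scores) - sum(false_scores) / len(false_scores)

# Hypothetical participant: believes 8 of 10 true and 3 of 10 false headlines.
ratings = [(True, 1)] * 8 + [(True, 0)] * 2 + [(False, 1)] * 3 + [(False, 0)] * 7
print(discernment(ratings))  # 0.8 - 0.3 = 0.5
```

A fact check that raises belief in false headlines shrinks this gap even when belief in true headlines is unchanged, which is why the study’s headline finding matters for discernment.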

Study: Transparency is often lacking in datasets used to train large language models

MIT News  August 30, 2024 To improve transparency and understanding of the vast, diverse, and inconsistently documented datasets used to train language models, an international team of researchers (USA – MIT, Harvard, UC Irvine, industry, University of Colorado, Olin College of Engineering, Carnegie Mellon University, and France, Canada) convened a multidisciplinary effort between legal and machine-learning experts to systematically audit and trace more than 1,800 text datasets. They developed tools and standards to trace the lineage of these datasets, including their sources, creators, licenses, and subsequent uses. They found sharp divides in the composition and focus of data licensed for […]
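
As an illustration of what lineage tracing involves, here is a minimal sketch of a provenance record covering the attributes named above (source, creators, license, subsequent use); the schema and field names are assumptions for illustration, not the team’s actual tooling or standards.

```python
# Hypothetical provenance record; field names are illustrative, not the audit's schema.
from dataclasses import dataclass, field

@dataclass
class DatasetProvenance:
    name: str
    source_url: str
    creators: list[str]
    license: str                                               # e.g. "CC-BY-4.0" or "unspecified"
    subsequent_uses: list[str] = field(default_factory=list)   # corpora/models that reuse it

records = [
    DatasetProvenance("example-news-corpus", "https://example.org/data",
                      ["Example Lab"], "unspecified"),
    DatasetProvenance("example-web-crawl", "https://example.org/crawl",
                      ["Example Org"], "CC-BY-4.0", ["example-pretraining-mix"]),
]

# One audit question such records make answerable: how many datasets lack a clear license?
print(sum(1 for r in records if r.license == "unspecified"))
```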

Machine-learning system based on light could yield more powerful, efficient large language models

MIT News  August 22, 2023 Optical neural networks (ONNs) have recently emerged to process deep neural network (DNN) tasks with high clock rates, parallelism, and low-loss data transmission. However, existing ONNs face high energy consumption due to low electro-optic conversion efficiency, low compute density due to large device footprints and channel crosstalk, and long latency due to the lack of inline nonlinearity. An international team of researchers (USA – MIT, UCLA, industry, Germany) experimentally demonstrated a spatial-temporal-multiplexed ONN system that simultaneously overcomes all these challenges. They exploited neuron encoding with volume-manufactured micrometre-scale vertical-cavity surface-emitting laser (VCSEL) arrays […]
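
The general idea behind spatial-temporal multiplexing can be illustrated numerically: many multiply-accumulate operations share a few physical emitters and detectors by streaming data over successive time slots. The NumPy toy below is a conceptual sketch under that general assumption only, not a model of the team’s VCSEL-based experiment.

```python
# Toy numerical sketch of time-multiplexed matrix-vector multiplication (NumPy, not optics).
# Each time slot streams one weight row against the input vector, so a single detector
# accumulates one output neuron per slot; values and sizes are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 64)         # input activations (e.g., encoded as light intensities)
W = rng.uniform(-1.0, 1.0, (16, 64))  # weight matrix streamed row by row over time

outputs = np.empty(16)
for t, w_row in enumerate(W):         # one output neuron computed per time slot
    outputs[t] = np.dot(w_row, x)     # elementwise products summed at the detector
y = np.maximum(outputs, 0.0)          # nonlinearity applied to the detected signal
print(y.shape)                        # (16,)
```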