AI models struggle to identify nonsense, says study

Phys.org, September 14, 2023

Neural network language models appear increasingly aligned with how humans process and generate language, but probing their weaknesses with adversarial examples is difficult because language is discrete and human language perception is complex. An international team of researchers from Columbia University (USA) and Israel turned the models against each other, generating "controversial" sentence pairs on which two language models disagree about which sentence is more likely to occur. Considering nine language models (including n-gram models, recurrent neural networks, and transformers), they created hundreds of controversial sentence pairs through synthetic optimization or by […]
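The core idea of a controversial sentence pair can be sketched in a few lines: score both sentences under two different models and flag the pair when the models rank them oppositely. The toy unigram and bigram scorers below are hypothetical stand-ins for illustration only, not the nine models evaluated in the study.

```python
import math

# Tiny corpus used to fit the two toy models below (illustrative only).
CORPUS = "the cat sat on the mat the dog sat on the rug".split()

def unigram_logprob(sentence):
    # Sum per-word log-probabilities (add-one smoothed); ignores word order.
    vocab = set(CORPUS)
    total = len(CORPUS)
    score = 0.0
    for w in sentence.split():
        score += math.log((CORPUS.count(w) + 1) / (total + len(vocab)))
    return score

def bigram_logprob(sentence):
    # Sum log-probabilities of adjacent word pairs; sensitive to word order.
    pairs = list(zip(CORPUS, CORPUS[1:]))
    vocab = set(CORPUS)
    words = sentence.split()
    score = 0.0
    for w1, w2 in zip(words, words[1:]):
        pair_count = pairs.count((w1, w2))
        score += math.log((pair_count + 1) / (CORPUS.count(w1) + len(vocab)))
    return score

def is_controversial(sent_a, sent_b, model1, model2):
    # A pair is "controversial" when the two models disagree about
    # which of the two sentences is more likely.
    pref1 = model1(sent_a) > model1(sent_b)
    pref2 = model2(sent_a) > model2(sent_b)
    return pref1 != pref2

# A grammatical sentence vs. its scrambled reversal: the unigram model
# scores them identically (same words), while the bigram model strongly
# prefers the grammatical order, so the models disagree.
pair = ("the cat sat on the mat", "mat the on sat cat the")
print(is_controversial(*pair, unigram_logprob, bigram_logprob))  # True
```

In the study itself, such pairs were produced by optimizing sentences against pairs of real language models; the disagreement criterion shown here is the same, only the models differ.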