MIT News, August 14, 2024
The flexible nature of large language models (LLMs) allows them to be used for many applications. A team of researchers in the US (MIT and industry) applied LLMs to the challenging task of time series anomaly detection. They addressed two aspects that are novel for LLMs: the need for the model to identify one or more parts of the input sequence as anomalous, and the need for it to work with time series data rather than the traditional text input. Their framework included a time-series-to-text conversion module as well as end-to-end pipelines that prompt language models to perform time series anomaly detection. They investigated two paradigms for testing the ability of LLMs to perform the detection task: the first was a prompt-based detection method that directly asks a language model to indicate which elements of the input are anomalies; the second used the forecasting capability of an LLM to guide the anomaly detection process, flagging points where the forecast deviates from the observed signal. They evaluated their framework on 11 datasets spanning various sources and compared it against 10 reference pipelines. They showed that the forecasting method significantly outperformed the prompting method across all 11 datasets. They concluded that while large language models can find anomalies, state-of-the-art deep learning models are still superior in performance, achieving results about 30% better than large language models.
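For illustration, here is a minimal Python sketch of the two paradigms described above. It is not the authors' code: the `llm` callable, the prompt wording, the rounding used in the text conversion, and the error threshold are hypothetical placeholders standing in for the framework's actual components.

```python
# Illustrative sketch only -- not the published framework's implementation.
# Assumes a generic `llm(prompt: str) -> str` callable standing in for any
# language model API; prompt text and threshold are hypothetical choices.

import numpy as np


def series_to_text(values, decimals=2):
    """Convert a numeric time series into a comma-separated string,
    a simple stand-in for the time-series-to-text conversion step."""
    return ", ".join(str(v) for v in np.round(values, decimals))


def prompt_based_detection(values, llm):
    """Paradigm 1 (sketch): directly ask the model which indices look anomalous."""
    prompt = (
        "Here is a time series: "
        + series_to_text(values)
        + "\nList the zero-based indices of any anomalous values."
    )
    return llm(prompt)  # the caller parses the model's textual answer


def forecast_based_detection(values, forecast, k=3.0):
    """Paradigm 2 (sketch): compare a model-produced forecast to the observed
    signal and flag points whose error is far above the typical error."""
    errors = np.abs(np.asarray(values) - np.asarray(forecast))
    cutoff = errors.mean() + k * errors.std()
    return np.where(errors > cutoff)[0]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    signal = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.05 * rng.normal(size=200)
    signal[120] += 2.0                                   # injected anomaly
    forecast = np.sin(np.linspace(0, 6 * np.pi, 200))    # pretend this came from an LLM
    print(forecast_based_detection(signal, forecast))    # -> [120]
```

In this toy setup the forecast-based path flags the injected spike at index 120 because its forecast error sits well above the signal's typical error; the prompt-based path simply hands the textual series to the model and relies on its answer.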

The new method could someday help alert technicians to potential problems in equipment like wind turbines or satellites. Credit: MIT News; iStock