Scientists develop the next generation of reservoir computing

Phys.org, September 21, 2021
Reservoir computing is a machine learning algorithm developed in the early 2000s and used to solve the "hardest of the hard" computing problems. It requires very small training data sets and uses linear optimization, so it needs minimal computing resources. It does this with an artificial neural network that acts as a black box. A team of researchers in the US (Ohio State University, industry, and Clarkson University) investigated that black box and found that the whole reservoir computing system could be greatly simplified, dramatically reducing the need for computing resources and saving significant time. They tested the concept on a forecasting task involving a weather system. In one simulation done on a desktop computer, the new system was 33 to 163 times faster than the current model. When the aim was high forecast accuracy, the next-generation reservoir computing was about 1 million times faster, and it achieved the same accuracy with the equivalent of just 28 neurons, compared with the 4,000 needed by the current-generation model. The speedup was attributed to the need for less warmup and training. The work is published as an open-access technical article in Nature Communications.

Forecasting a dynamical system using the NG-RC. Credit: Nature Communications volume 12, Article number: 5564 (2021) 
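Below is a minimal sketch of the next-generation reservoir computing (NG-RC) idea, assuming the Lorenz-63 equations as the dynamical system being forecast (the article does not specify the exact weather-system task) and illustrative choices for the delay taps, ridge parameter, and training length. With two delay taps of the three-dimensional state, the feature vector works out to 28 entries (a constant, 6 linear terms, and 21 unique quadratic terms), matching the neuron count quoted above; the training step is a single linear (ridge) regression rather than iterative neural-network training.

# A minimal NG-RC sketch, assuming a Lorenz-63 forecasting task and
# illustrative settings (delays, ridge parameter), not the authors' exact ones.
import numpy as np

def lorenz_step(state, dt=0.025, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 system one step with fourth-order Runge-Kutta."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    k1 = f(state)
    k2 = f(state + 0.5 * dt * k1)
    k3 = f(state + 0.5 * dt * k2)
    k4 = f(state + dt * k3)
    return state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Generate trajectory data (a small training set, as the article emphasizes).
n_steps = 1400
traj = np.empty((n_steps, 3))
traj[0] = np.array([17.7, 12.7, 35.6])
for i in range(1, n_steps):
    traj[i] = lorenz_step(traj[i - 1])

k = 2            # number of time-delay taps (warmup is just k - 1 past states)
train_end = 400  # number of training steps

def features(history):
    """Constant + linear (delayed states) + unique quadratic monomials."""
    lin = np.concatenate(history)                       # length k * 3 = 6
    quad = np.array([lin[i] * lin[j]
                     for i in range(len(lin)) for j in range(i, len(lin))])
    return np.concatenate(([1.0], lin, quad))           # 1 + 6 + 21 = 28

# Assemble the design matrix Phi and targets Y (next-step state differences).
Phi, Y = [], []
for t in range(k - 1, train_end):
    Phi.append(features([traj[t - j] for j in range(k)]))
    Y.append(traj[t + 1] - traj[t])
Phi = np.array(Phi).T            # shape: (n_features, n_samples)
Y = np.array(Y).T                # shape: (3, n_samples)

# Ridge (Tikhonov) regression -- the "linear optimization" step.
alpha = 1e-6
W_out = Y @ Phi.T @ np.linalg.inv(Phi @ Phi.T + alpha * np.eye(Phi.shape[0]))

# Autonomous forecast: feed predictions back in, with no further training.
history = [traj[train_end - j] for j in range(k)]
pred = [history[0].copy()]
for _ in range(200):
    nxt = pred[-1] + W_out @ features(history)
    history = [nxt] + history[:-1]
    pred.append(nxt)
pred = np.array(pred)

truth = traj[train_end:train_end + len(pred)]
print("RMSE over first 25 forecast steps:",
      np.sqrt(np.mean((pred[:25] - truth[:25]) ** 2)))

Because each feature vector needs only the previous k = 2 states, the warmup is a single step, and training amounts to one regularized least-squares solve, which illustrates where the reduced warmup and training cost comes from.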

