How to figure out what you don’t know

TechXplore, October 26, 2020

Machine learning optimizes flexible models to predict data. In scientific applications, there is rising interest in interpreting these flexible models to derive hypotheses from the data. Researchers from Cold Spring Harbor Laboratory tested this connection using a flexible yet intrinsically interpretable framework for modelling neural dynamics. Many models discovered during optimization predict the data equally well, yet fail to match the correct hypothesis. The researchers developed an alternative approach that identifies models with the correct interpretation by comparing model features across data samples, separating true features from noise. Their results reveal that good predictions cannot substitute for […]
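The article does not spell out the method, but the idea of comparing model features across data samples to separate signal from noise resembles stability selection. A minimal sketch, assuming a simple linear setting with synthetic data (all names and thresholds here are illustrative, not the authors' actual procedure): fit a model on many bootstrap resamples and keep only the features whose estimated effect is consistently large.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: only features 0 and 1 truly drive y; the other 8 are noise.
n, p = 200, 10
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

# Fit a linear model on many bootstrap resamples and count, per feature,
# how often its coefficient exceeds a magnitude threshold.
n_boot, threshold = 100, 0.5
hits = np.zeros(p)
for _ in range(n_boot):
    idx = rng.integers(0, n, size=n)          # bootstrap resample of rows
    coef, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    hits += np.abs(coef) > threshold

stability = hits / n_boot                      # fraction of resamples per feature
selected = np.where(stability > 0.9)[0]        # features stable across samples
print(selected)
```

Features that predict well only on a particular sample drop out under resampling, which is the distinction the article draws between good prediction and correct interpretation.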