How to figure out what you don’t know

TechXplore, October 26, 2020
Machine learning optimizes flexible models to predict data. In scientific applications, there is rising interest in interpreting these flexible models to derive hypotheses from the data. Researchers from Cold Spring Harbor Laboratory tested this connection using a flexible yet intrinsically interpretable framework for modelling neural dynamics. Many of the models discovered during optimization predicted the data equally well, yet failed to match the correct hypothesis. The researchers therefore developed an alternative approach that identifies models with the correct interpretation by comparing model features across data samples, separating true features from noise. Their results show that good predictions cannot substitute for accurate interpretation of flexible models, and they offer a principled way to identify models whose interpretation is correct.
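The core idea of comparing model features across data samples can be illustrated with a minimal sketch. This is not the Engel lab's actual method (which concerns models of neural dynamics); it is a hypothetical analogue using a simple linear model: fit the same flexible model on two independent halves of the data, then keep only the features on which both fits agree. Genuine features reappear consistently; noise-driven features do not. The feature count, noise level, and consistency threshold below are all arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the outcome depends on only the first 2 of 5
# candidate features; the remaining 3 contribute nothing.
n, d = 200, 5
true_w = np.array([2.0, -1.5, 0.0, 0.0, 0.0])
X = rng.normal(size=(n, d))
y = X @ true_w + 0.5 * rng.normal(size=n)

# Fit the same model independently on two halves of the data.
half = n // 2
w1, *_ = np.linalg.lstsq(X[:half], y[:half], rcond=None)
w2, *_ = np.linalg.lstsq(X[half:], y[half:], rcond=None)

# Call a feature "consistent" if both fits agree on its sign and both
# estimates are clearly non-zero (0.3 is an arbitrary threshold).
consistent = (np.sign(w1) == np.sign(w2)) & (np.minimum(np.abs(w1), np.abs(w2)) > 0.3)
print(consistent)
```

With this seed, only the two genuine features survive the consistency check, while the three noise features are filtered out, even though a single fit assigns all five features non-zero coefficients.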

Using machine learning, researchers can test a hypothesis many times to find the best answers, rather than stopping at an incomplete answer that holds only in a few special circumstances. Credit: Mikhail Genkin/Engel lab
