Sniffing Out Errors

Inside Big Data  November 26, 2019
Error analysis builds a model out of your existing model's errors. From this it is possible to understand where the model succeeds and what can be amended to improve performance. Additionally, this process can be trivially integrated into the data science pipeline and run multiple times to iteratively improve model performance. Building a simple model that uses all your input features to explain the error will indicate which features are driving most of the error. A highly interpretable linear model for the error analysis will yield faster and clearer directions. Causes of errors could include messy data, insufficient information, or feature overfit within the algorithm. Following an iterative approach to predictive model building, such as error analysis, allows you to consistently test and improve…read more.
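The approach described above can be sketched in a few lines. This is a minimal illustration, assuming scikit-learn and synthetic data: a primary model is fit first, then an interpretable linear model is fit on the primary model's absolute errors, and its coefficients point at the features driving the error. The variable names and the choice of a random forest as the primary model are illustrative, not part of the original post.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

# Synthetic data standing in for a real pipeline's features and target.
X, y = make_regression(n_samples=500, n_features=5, noise=10.0, random_state=0)

# Primary model: any model whose errors we want to understand.
primary = RandomForestRegressor(random_state=0).fit(X, y)

# Error model: a simple, interpretable linear model fit on the
# primary model's absolute errors, using all the input features.
errors = np.abs(y - primary.predict(X))
error_model = LinearRegression().fit(X, errors)

# Coefficients with large magnitude suggest features driving the error.
ranked = sorted(enumerate(error_model.coef_), key=lambda t: -abs(t[1]))
for idx, coef in ranked:
    print(f"feature {idx}: coefficient {coef:+.3f}")
```

Because the error model is linear, its coefficients can be read directly, which is the "faster and clearer directions" the post refers to; rerunning this step after each fix is what makes the process iterative.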

Posted in Big data.
