Removing human bias from predictive modeling

Phys.org | October 30, 2019
Predictive modeling is increasingly being employed to assist human decision-makers. However, there is growing recognition that employing algorithms does not remove the potential for bias and can even amplify it if the training data were generated by a process that is itself biased. Researchers at the University of Pennsylvania propose a method to eliminate bias from predictive models by removing all information regarding protected variables from the data on which the models will ultimately be trained. Motivated by models currently in use in the criminal justice system that inform decisions on pre-trial release and parole, they applied the method to a dataset on the criminal histories of individuals at the time of sentencing to produce “race-neutral” predictions of re-arrest. They demonstrated that a common approach to creating “race-neutral” models—omitting race as a covariate—still results in racially disparate predictions, because other covariates act as proxies for race. When their method was applied to these data, the racial disparities were removed from the predictions with minimal impact on predictive accuracy.
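To make the distinction concrete, the sketch below (a hypothetical illustration on synthetic data, not the authors' exact procedure) contrasts a "race-blind" model that merely omits the protected variable with one trained on covariates that have been residualized on that variable so that their linear association with it is removed. All variable names, the synthetic data, and the residualization step are assumptions for illustration; the published method should be consulted for the actual approach.

    # Minimal sketch: omitting a protected variable vs. removing its information.
    # Hypothetical synthetic data; assumes numpy and scikit-learn are available.
    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression

    rng = np.random.default_rng(0)
    n = 5000

    # Protected variable and covariates that are correlated with it (proxies).
    race = rng.integers(0, 2, size=n)
    prior_arrests = rng.poisson(2 + 2 * race)          # correlated with race
    age = rng.normal(35 - 3 * race, 8)                 # also correlated with race
    X = np.column_stack([prior_arrests, age])

    # Outcome generated from the covariates only (illustrative).
    logits = 0.4 * prior_arrests - 0.03 * age - 1.0
    y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

    # "Race-blind" model: race is omitted, but its proxies remain in X,
    # so average predictions still differ by group.
    blind = LogisticRegression().fit(X, y)
    p_blind = blind.predict_proba(X)[:, 1]
    print("race-blind mean prediction by group:",
          p_blind[race == 0].mean(), p_blind[race == 1].mean())

    # Residualize each covariate on the protected variable and train on the
    # residuals, keeping only the part of X that is linearly unrelated to race.
    R = race.reshape(-1, 1)
    X_resid = X - LinearRegression().fit(R, X).predict(R)
    neutral = LogisticRegression().fit(X_resid, y)
    p_neutral = neutral.predict_proba(X_resid)[:, 1]
    print("residualized mean prediction by group:",
          p_neutral[race == 0].mean(), p_neutral[race == 1].mean())

On data like this, the race-blind model's average predicted risk differs noticeably between groups, while the residualized model's group means are close to equal, at the cost of discarding only the between-group component of the covariates.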
