Removing human bias from predictive modeling

Phys.org, October 30, 2019

Predictive modeling is increasingly being employed to assist human decision-makers. However, there is growing recognition that employing algorithms does not remove the potential for bias and can even amplify it if the training data were generated by a process that is itself biased. Researchers at the University of Pennsylvania propose a method to eliminate bias from predictive models by removing all information regarding protected variables from the data on which the models will ultimately be trained. Motivated by models currently in use in the criminal justice system that inform decisions on pre-trial release and parole, they […]
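
The article only describes the approach at a high level. As a rough illustration of the general idea of stripping protected-variable information out of training data, the sketch below residualizes each feature against a protected attribute so the cleaned features carry no linear trace of it. This is a minimal sketch of one common preprocessing technique in this spirit, not the Penn researchers' specific method; the variable names and synthetic data are assumptions for demonstration only.

```python
import numpy as np

def residualize(X: np.ndarray, protected: np.ndarray) -> np.ndarray:
    """Remove the linear component of `protected` from every column of X."""
    # Design matrix: intercept plus the protected attribute.
    Z = np.column_stack([np.ones(len(protected)), protected])
    # Least-squares coefficients for all feature columns at once.
    beta, *_ = np.linalg.lstsq(Z, X, rcond=None)
    # Residuals are the parts of the features unexplained by `protected`.
    return X - Z @ beta

# Illustrative usage with synthetic data (hypothetical, not from the study).
rng = np.random.default_rng(0)
protected = rng.integers(0, 2, size=200)                   # e.g., a binary group label
X = rng.normal(size=(200, 3)) + 0.8 * protected[:, None]   # features correlated with it
X_clean = residualize(X, protected)

# The linear correlation with the protected attribute is now ~0.
print(np.corrcoef(X_clean[:, 0], protected)[0, 1])
```

A model trained on `X_clean` cannot exploit linear associations with the protected attribute, which is the broad intent the article attributes to the proposed method; nonlinear dependence would require a more elaborate treatment.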