Bias in Machine Learning Models and Corrective Methods

All models, but machine learning (ML) models in particular, run the risk of incorporating bias or unfairness in their outcomes. This is often driven by the underlying data used to train or calibrate the model. Because these models are increasingly used to make important decisions not only in our financial lives but also in other areas, such as granting university admissions, assigning social benefits, predicting the risk of criminal recidivism, and screening job applicants' resumes in hiring tools, these biases have social, ethical, and legal implications. This lecture gives a brief overview of the definitions of fairness used in the industry and some of the methods used to correct for unfairness in ML models.
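As a rough illustration of what "definitions of fairness" means in practice, the Python sketch below computes two of the most widely used group-fairness metrics for a binary classifier: the demographic parity difference (the gap in positive-prediction rates across a protected group) and the equal opportunity difference (the gap in true-positive rates). The function names and the synthetic data are illustrative assumptions for this sketch, not material from the lecture itself.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    # Gap in positive-prediction rates between group 0 and group 1.
    # A value near 0 indicates the model satisfies demographic parity.
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_difference(y_true, y_pred, group):
    # Gap in true-positive rates (recall) between the two groups,
    # computed only over individuals whose true label is positive.
    # A value near 0 indicates the model satisfies equal opportunity.
    tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()
    tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()
    return tpr_0 - tpr_1

# Toy example: purely synthetic binary labels, predictions, and a
# binary protected attribute, used only to exercise the metrics.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)

print("Demographic parity difference:",
      demographic_parity_difference(y_pred, group))
print("Equal opportunity difference:",
      equal_opportunity_difference(y_true, y_pred, group))
```

Corrective methods in this area generally fall into three families: pre-processing (reweighting or transforming the training data), in-processing (adding fairness constraints or penalties to the training objective), and post-processing (adjusting decision thresholds per group after training). Which of these the lecture emphasizes is covered in the recording itself.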

Kevin Oden

Managing Partner, Kevin D. Oden & Associates, LLC

Kevin D. Oden is the founder and managing partner of the risk management consulting firm Kevin D. Oden & Associates, and the managing director of the RMA’s Model Validation Consortium, a new offering that provides a suite of high-quality model validation services at a competitive price point for RMA member banks.

Kevin holds a Ph.D. in mathematics from UCLA and was a leader in risk management and model validation at Wells Fargo Bank.

Select the "View On-Demand Recording" button to begin.