Recognizing and Avoiding Machine Learning Bias
THE PROMISE OF MACHINE LEARNING
After decades of unkept promises and false starts, artificial intelligence is finally gaining traction as a valuable tool for businesses. One branch of AI – machine learning – is becoming particularly important as a way to help organizations make better decisions, improve effectiveness, and reduce risk.
Machine learning is already pervasive in our everyday lives. Who doesn’t use facial recognition, Siri, the Weather app, or Waze? Slowly, but at an accelerating pace, businesses of all sizes are also beginning to deploy machine learning. Human Resources departments are using it to predict the likely long-term success of job candidates. Online retailers use it to recommend products to customers. Lenders apply machine learning to evaluate the creditworthiness of loan applicants.
But as exciting as it is to imagine all the ways machine learning could help a business, and as tempting as it may be to start rolling out machine-learning applications, we can’t forget an important human factor that, left unaddressed, can lead to failed outcomes.
THE POTENTIAL FOR BIAS
Just as human decision-making is subject to flaws, so too is AI-assisted decision-making. Humanity is blessed with the ability to apply experience to make good judgments, but also cursed with the tendency to apply its own biases, intentionally or not, to the issues at hand. The old adage that computers only do what humans tell them to do is a bit of a stretch when applied to machine learning, because in a sense the computer is teaching itself what to do. But it teaches itself using human-built models and human-selected training data.
For years, we have used the phrase “garbage in – garbage out” to explain how bad computer programming can lead to bad outcomes. With machine learning, we need to add the phrase “bias in – bias out.” When decisions are made by individuals, there is always the possibility that bias will creep into the results on a case-by-case basis. When decisions are made with machine learning, there is the risk of introducing bias at scale, which can be far more damaging.
Bias can be introduced into machine learning in a number of ways, starting with how datasets are chosen to train the models.
- Datasets may be chosen based on the modeler’s personal experience, or influenced by the modeler’s own beliefs or hypotheses.
- Modelers may have limited awareness of which datasets are available.
- Sample bias can occur when the modeler chooses data that doesn’t represent the whole pool, contains too many faulty data points, or comes from too small a sample (see the sketch after this list).
- Data may be excluded from the training sets because the modeler falsely believes it is unimportant to the model.
- Datasets may have prejudices already built in, for example stereotypes about certain classes of people.
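As a minimal illustration of the sample-bias point above, here is a rough Python sketch that compares the makeup of a training sample against the population it is meant to represent. The dataframes, column name, and tolerance threshold are all hypothetical.

```python
import pandas as pd

def check_representativeness(sample: pd.DataFrame,
                             population: pd.DataFrame,
                             column: str,
                             tolerance: float = 0.05) -> pd.DataFrame:
    """Flag categories whose share of the training sample drifts
    more than `tolerance` away from their share of the population."""
    sample_share = sample[column].value_counts(normalize=True)
    pop_share = population[column].value_counts(normalize=True)
    report = pd.DataFrame({"sample": sample_share,
                           "population": pop_share}).fillna(0.0)
    report["gap"] = (report["sample"] - report["population"]).abs()
    report["flagged"] = report["gap"] > tolerance
    return report.sort_values("gap", ascending=False)

# Hypothetical usage: 'age_band' is an assumed column name.
# report = check_representativeness(train_df, census_df, "age_band")
# print(report[report["flagged"]])
```

A check like this will not catch every form of sample bias, but it makes under- and over-represented groups visible before training begins.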
Bias may also be introduced during the training process, when parameters are tweaked to control the learning process. A trained model may produce outlier or unexpected results – sometimes called Black Swan events – and these may lead the modeler to incorrectly assume the model is wrong and reconfigure it to conform with expected behaviors.
THE CONSEQUENCES OF BIAS
Biased machine learning outcomes can have very real and dangerous consequences, leading to decisions or actions that are unethical, immoral, or just plain wrong.
Consider the example of a machine learning algorithm that helps a parole board decide whether or not to grant parole to a convict. In the notable case of COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), the model predicted twice as many false positives for recidivism among black offenders (45%) as among white offenders (23%), because of the model that was chosen, the process used to create the algorithm, and the data used to train it.
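A disparity like this can be made concrete with a simple group-wise error audit. The sketch below computes the false positive rate per group from a table of predictions and actual outcomes; the column names are assumptions, and the data is not the actual COMPAS dataset.

```python
import pandas as pd

def false_positive_rate_by_group(df: pd.DataFrame,
                                 group_col: str,
                                 label_col: str,
                                 pred_col: str) -> pd.Series:
    """False positive rate per group: among people who did NOT
    reoffend (label == 0), the share the model flagged as high risk."""
    negatives = df[df[label_col] == 0]
    return negatives.groupby(group_col)[pred_col].mean()

# Hypothetical usage with assumed column names:
# fpr = false_positive_rate_by_group(audit_df, "race",
#                                    "reoffended", "predicted_high_risk")
# A large gap between groups (for example, 0.45 versus 0.23) is exactly
# the kind of disparity reported for COMPAS.
```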
Or consider a model that helps a child welfare agency determine whether a child should be removed from their family because of abusive circumstances. If the model draws its data from public health providers but not from private providers, and if middle- and upper-class families are more likely to be served by private providers, the model may skew negative outcomes toward lower-income families.
A model created to help a Human Resources department predict the likely long-term success of a job applicant may favor male over female applicants if the historical training dataset contains more resumes from men than from women.
AVOIDING BIAS IN MACHINE LEARNING
Bias in machine learning is almost always unintended. Developing explicit best practices to recognize and address bias will go a long way to reducing or eliminating it.
First and foremost, be transparent. Share and discuss chosen datasets and algorithms with others, both inside and outside the organization. The more people involved, the greater the chance that bias will be uncovered and corrected.
Maintain a diverse team of data architects and AI modelers. Developers from different backgrounds, and who may have experienced prejudices in their personal lives, will be more attuned to how biases can creep into machine learning models.
Set standards for data labeling and seek representative, balanced datasets. Be particularly aware of data labels that mark gender, race, or other potential sources of bias. This includes labels that can imply gender or race indirectly, such as postal zone or job title.
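One pragmatic way to surface such proxy labels is to test how well each candidate feature predicts a protected attribute on its own. The following sketch uses scikit-learn as a rough heuristic; the feature and column names are assumptions.

```python
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def proxy_strength(df: pd.DataFrame, feature: str, protected: str) -> float:
    """Cross-validated accuracy of predicting the protected attribute
    from a single feature. Scores well above the majority-class
    baseline suggest the feature is acting as a proxy."""
    X = pd.get_dummies(df[[feature]])  # one-hot encode categorical values
    y = df[protected]
    clf = DecisionTreeClassifier(max_depth=3, random_state=0)
    return cross_val_score(clf, X, y, cv=5).mean()

# Hypothetical usage: 'postal_zone' and 'gender' are assumed columns.
# baseline = df["gender"].value_counts(normalize=True).max()
# print(proxy_strength(df, "postal_zone", "gender"), "vs baseline", baseline)
```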
Incorporate pragmatic evaluation and testing for bias into the development cycle, using tools developed for just such purposes. Examples include Google’s What-If Tool, Amazon’s SageMaker Clarify, IBM’s AI Fairness 360, and the open source FairML.
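As one example of what these tools measure, the sketch below uses IBM’s AI Fairness 360 to compute two common group-fairness metrics on a small labeled dataset. The dataframe and its column names are invented for illustration, and the exact API may vary across library versions.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical data: a binary outcome plus a binary protected attribute.
df = pd.DataFrame({
    "hired":  [1, 0, 1, 1, 0, 0, 1, 0],
    "gender": [1, 1, 1, 0, 0, 0, 1, 0],  # 1 = assumed privileged group
})

dataset = BinaryLabelDataset(df=df,
                             label_names=["hired"],
                             protected_attribute_names=["gender"])

metric = BinaryLabelDatasetMetric(dataset,
                                  privileged_groups=[{"gender": 1}],
                                  unprivileged_groups=[{"gender": 0}])

# Disparate impact near 1.0 and statistical parity difference near 0.0
# indicate the favorable outcome is distributed evenly across groups.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```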
Create an independent team, outside the modeling team, to make ethical evaluations. Discuss openly and honestly the prejudices and biases that may exist in an organization’s processes and practices, and actively search for ways those biases may become embedded in training data. Use this team or other independent outside parties to review the annotated data in the training datasets.
With these practices, you greatly improve your chances of eliminating bias in your machine learning models.
ADDITIONAL INFORMATION
For more key considerations, watch VP of Corporate Development Glen Hilford’s presentation, “Deconstructing AI – A Deeper Dive into Common AI Solutions.”