Achieving Fairness in AI Is Hard, but It Doesn’t Have to Be
Artificial Intelligence is already revolutionizing many domains of life, from employment decisions to consumer finance to housing. However, without any conscious effort on the part of modelers, it also has the potential to perpetuate deeply ingrained patterns and practices of discrimination against women, racial and ethnic minorities, and other protected classes.
Even as AI promises to leverage data on an unprecedented scale to make more informed decisions, the opacity of the process by which it does so can undermine its trustworthiness, and its promise, for millions of Americans.
In “An Introduction to Artificial Intelligence and Solutions to the Problems of Algorithmic Discrimination,” authors Nick Schmidt and Bryce Stephens explain how discrimination problems pervade the application of AI technologies, and they point to potential remedies. Particularly striking is the report’s discussion of evidence from FDIC research that even information as trivial as whether a user has an iOS or Android phone, or whether their email address contains their name, can have serious adverse impacts on disadvantaged groups, possibly even leading to illegal discrimination. Because such variables can be correlated with protected class characteristics, including them in a model may produce a disparate impact effect. That effect is especially difficult to justify when the inputs lack a compelling explanatory relationship to an applicant’s creditworthiness.
These seemingly innocuous variables illustrate the difficulty many AI practitioners will have in diagnosing and mitigating discrimination. Disparate impact effects can creep into models with little warning and can be very difficult to detect without training in ethical AI methods or a working familiarity with the legal framework of disparate impact analysis.
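To make the detection problem concrete, here is a minimal sketch of one basic disparity test, the adverse impact ratio. The data and column names below are purely illustrative, and the four-fifths rule used as a flag is a common regulatory rule of thumb rather than the only standard; real disparity testing involves far more than this single statistic.

```python
import pandas as pd

def adverse_impact_ratio(df, outcome_col, group_col, protected, reference):
    """Ratio of the protected group's favorable-outcome rate to the
    reference group's rate. Under the 'four-fifths' rule of thumb,
    values below 0.8 are often flagged for further review."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates[protected] / rates[reference]

# Illustrative data only: 'approved' is the model's decision; 'group'
# is a protected-class characteristic used solely for testing.
applications = pd.DataFrame({
    "approved": [1, 0, 1, 1, 0, 0, 1, 0, 1, 0],
    "group":    ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"],
})

air = adverse_impact_ratio(applications, "approved", "group",
                           protected="B", reference="A")
print(f"Adverse impact ratio: {air:.2f}")  # 0.67 here, below the 0.8 flag
```

In this toy data, group B’s approval rate is 40% against group A’s 60%, so the ratio of 0.67 would typically prompt a closer look at which model inputs are driving the gap.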
What is more, correcting discriminatory outcomes can be impossible if a clear definition of how fairness is measured (“calibration,” “balance for the negative class,” “balance for the positive class,” etc.) is not specified from the outset, because different types of fairness can cut against one another. Indeed, Kleinberg, Mullainathan, and Raghavan proved that, outside of special cases such as equal base rates or perfect prediction, no risk score can satisfy all three of these criteria at once.
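As a sketch of how these three criteria are measured in practice, the snippet below computes each one on a small scored dataset. Every value is made up purely for illustration, and the coarse score bins are an assumption of this example rather than a fixed standard.

```python
import pandas as pd

# Toy scored data: 'score' is the model's risk estimate, 'y' the true
# outcome, 'group' a protected-class characteristic.
scored = pd.DataFrame({
    "score": [0.9, 0.8, 0.7, 0.3, 0.2, 0.8, 0.6, 0.4, 0.3, 0.1],
    "y":     [1,   1,   0,   0,   0,   1,   1,   1,   0,   0],
    "group": ["A"] * 5 + ["B"] * 5,
})

# Balance for the positive class: people who truly belong to the
# positive class should receive the same average score in each group.
print(scored[scored.y == 1].groupby("group").score.mean())

# Balance for the negative class: the same condition for true negatives.
print(scored[scored.y == 0].groupby("group").score.mean())

# Calibration within groups: within each score bin, the observed
# positive rate should match the scores in that bin, for every group.
scored["bin"] = pd.cut(scored.score, bins=[0.0, 0.5, 1.0])
print(scored.groupby(["group", "bin"], observed=True).y.mean())
```

Even in this tiny example the criteria pull apart: true positives in group A average a score of 0.85 against 0.60 in group B, so improving balance for the positive class would require changing scores in ways that can degrade calibration or balance for the negative class.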
Faced with these challenges, it can be hard to know where to start. But with decades of experience, identifying and mitigating discrimination is a specialty of SolasAI.
We use novel model analysis, explainable AI techniques, and cutting-edge disparity testing to build alternative models that reduce disparity while maximizing performance. By working with model developers in a way designed to integrate with their existing model governance structure, we place a palette of candidate models at our clients’ disposal, so they can make informed decisions about their company’s AI needs.
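SolasAI’s methodology is its own; purely to illustrate the general idea of alternative model search, the toy sketch below fits candidate models on different feature subsets and reports each one’s performance (AUC) alongside its adverse impact ratio. The synthetic data, the 50% approval cutoff, and the feature-subset search strategy are all assumptions of this example, not a description of SolasAI’s product.

```python
import numpy as np
from itertools import combinations
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic data; 'group' stands in for a protected-class characteristic.
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
group = np.random.default_rng(0).integers(0, 2, size=len(y))

def adverse_impact_ratio(approved, group):
    rates = [approved[group == g].mean() for g in (0, 1)]
    return min(rates) / max(rates)

# Fit a candidate model on each feature subset of size 4-6, then record
# its performance (AUC) and its disparity (AIR) at a 50% approval cutoff.
candidates = []
for k in (4, 5, 6):
    for cols in combinations(range(X.shape[1]), k):
        model = LogisticRegression(max_iter=1000).fit(X[:, list(cols)], y)
        scores = model.predict_proba(X[:, list(cols)])[:, 1]
        approved = scores >= np.quantile(scores, 0.5)
        candidates.append(
            (cols, roc_auc_score(y, scores), adverse_impact_ratio(approved, group))
        )

# Surface the least-disparate candidates so a human reviewer can weigh
# the performance cost of each alternative.
for cols, auc, air in sorted(candidates, key=lambda c: -c[2])[:5]:
    print(f"features={cols}  AUC={auc:.3f}  AIR={air:.3f}")
```

The point of presenting candidates this way is that the trade-off between disparity and performance becomes an explicit business decision rather than an accident of model development.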
In this way, SolasAI offers a path forward for those interested in ethical AI. By using machine learning technologies to map and evaluate a model’s features and hyperparameters for fairness, SolasAI helps people leverage AI in a legally compliant and socially conscious way.
For a higher-level treatment of the themes discussed here, see also the following summary.
For more information on how to proactively tackle the challenges of fairness in the AI context, see our working guide.