By Larry Bradley, CEO at SolasAI & James Kane, Managing Principal at Capco
This piece is in partnership with Capco ahead of our upcoming joint webinar, AI for Lending: Competitive Advantage & Regulatory Compliance, on Wednesday, September 20. For more information or to register, click here.
In the Wild, Wild West of generative AI innovation, companies that can demonstrate fairness compliance will soar where others stumble.
As organizations worldwide harness the power of AI to gain a competitive edge, they must also navigate an intricate landscape of evolving regulations and ethical considerations. Governments, regulators, and consumers increasingly expect companies to test and justify model fairness, to prevent the harm that AI-driven algorithmic discrimination can inflict on the global economy and on everyday lives. Against this backdrop, compliance is a crucial consideration within the innovation process that adds significant value to AI-powered companies.
What is algorithmic bias, and why is it such a concern in AI? Let’s take home loans as an example. AI can help lenders decide who qualifies for a mortgage by analyzing certain variables, such as prior delinquencies. As the AI learns from the data, it adjusts its calculations to reach the most statistically accurate prediction. When this process works fairly, banks can provide more loans without increasing risk, and consumers can be charged lower interest rates.
But numbers aren’t always neutral. For example, the data (or sometimes the lack of data) that go into some credit scoring systems have been criticized for amplifying racial bias, especially against Black Americans whose loan histories are disproportionately impacted by a history of discrimination. A survey of 5,000 adults found that more than half of Black Americans reported having a low or no credit score, compared to 41% for Hispanics, 37% for whites, and 18% for Asian Americans. The legacy of past discrimination is baked into the data — regardless of the intentions of the model’s developers.
One of AI’s main capabilities is the ability to find subtle patterns in data and extrapolate them into actionable judgments. However, when the data that the AI uses to solve tasks is tainted by discrimination, it can inadvertently amplify that discrimination and crystallize it into a real-world policy that hurts people and breaks the law. The technology still needs a human observer, such as a compliance officer, to discern whether the AI causes a disparate impact. SolasAI, the leader in innovative algorithmic fairness AI software, helps companies utilize the least discriminatory alternative and demonstrate that they have made reasonable efforts to minimize disparity and bias in decision-making algorithms.
Even if regulators, lawmakers, and customers aren’t watching, fairness testing is still the right thing to do. Algorithmic discrimination happens regardless of the intentions of a model’s developers, and it can inadvertently amplify bias and crystallize it into policies that hurt real people.
Fairness testing reduces the chances of a company making legally or reputationally harmful decisions through its use of AI. For example, financial institutions that make decisions about loans and credit are subject to fair lending standards and laws. While those laws don’t specifically mention machine learning, they are clear when it comes to prohibiting the disparate impact that AI can cause by learning from variables that are a proxy for bias, such as racial discrimination.
AI decision-making is notoriously opaque. For decades, the term ‘black box’ has referred to the unknowable, inaccessible inner workings of machine learning. Fairness testing addresses this directly by using AI to identify how variables interact in a business’s model and whether the model is causing disparate impact against a protected class of people. The end result is a transparent accounting of the algorithm’s process, so a business can demonstrate to customers and regulators how its AI is learning without propagating historical discrimination.
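To make the idea of disparate-impact testing concrete, here is a minimal sketch of one widely used screen, the “four-fifths rule,” under which a protected group’s selection rate below 80% of the reference group’s rate is flagged for further review. This is an illustration only, not SolasAI’s methodology: the function names, threshold usage, and approval data below are all invented for the example.

```python
# Illustrative sketch of the four-fifths (80%) rule, a common first
# screen for disparate impact. All names and data here are invented.

def selection_rate(outcomes):
    """Fraction of applicants approved (1 = approved, 0 = denied)."""
    return sum(outcomes) / len(outcomes)

def adverse_impact_ratio(protected, reference):
    """Protected group's approval rate divided by the reference group's."""
    return selection_rate(protected) / selection_rate(reference)

# Invented approval decisions for two groups of applicants.
group_a = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]   # 30% approved
group_b = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 70% approved

ratio = adverse_impact_ratio(group_a, group_b)
print(f"Adverse impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Below the four-fifths threshold -- flag for review.")
```

A failing ratio does not by itself prove illegal discrimination; it is a signal that the model and its inputs need closer examination, which is where the search for a less discriminatory alternative begins.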
To make accurate judgments, AI models need high-quality data. That includes data that has been evaluated for disparate impact caused by algorithmic bias. If the raw data going into the algorithm — or the relationships among that data — are proxies for historical patterns of discrimination rather than results of real-world dynamics, the results won’t be as accurate as a less discriminatory alternative.
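One simple way to probe whether an input variable is acting as a proxy for a protected characteristic is to measure how strongly it is associated with group membership. The sketch below does this with a plain Pearson correlation; the data, the ZIP-code-score framing, and the 0.5 threshold are all invented for illustration and are not a legal or regulatory standard.

```python
# Hypothetical sketch: screening a candidate model input for proxy
# potential by correlating it with a protected attribute. Data invented.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented data: a candidate variable (say, a ZIP-code-derived score)
# alongside group membership (1 = member of a protected class).
feature = [0.9, 0.8, 0.85, 0.3, 0.2, 0.25, 0.9, 0.15]
group   = [0,   0,   0,    1,   1,   1,    0,   1]

r = pearson_r(feature, group)
print(f"Correlation with protected attribute: {r:.2f}")
if abs(r) > 0.5:   # illustrative cutoff, not a legal standard
    print("Strong association -- investigate as a possible proxy.")
```

In practice, proxy detection goes well beyond pairwise correlation, since bias can hide in interactions among several innocuous-looking variables, but the underlying question is the same: does this input encode group membership rather than genuine credit risk?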
Companies that develop successful AI-powered technology solutions without considering compliance regulations will hit serious roadblocks when the time comes to scale into a new market, whether that means a new geographic region or a new industry. For example, the European Union is well into the process of developing regulations to govern the technology’s use within its member states. Any American company that wants to launch services in Europe will need to comply. In its final form, the EU’s AI Act will require developers to continuously assess algorithms for bias that leads to harmful, discriminatory outcomes when applied to high-risk, life-impacting situations.
AI and other learning algorithms have the potential to positively or negatively impact business, customers, society, and our planet. When it comes to steering AI to be a force for good, it will be crucial to proactively pursue compliance functions equipped with the right tools and knowledge to ensure trusted, transparent, and viable applications of algorithms and their associated automation.
Proactive compliance can happen fast enough to match the speed of innovation in generative AI. For example, SolasAI partnered with Capco, a global technology and management consultancy specializing in driving digital transformation in the financial services industry, to shorten the time to compliance for unbiased, production-grade AI models at any scale. Previous compliance efforts took weeks to navigate from start to finish; now, businesses can meet and clear regulatory standards within a few days. Read more about that partnership below.