In 1949—back in the days of news pioneer Edward R. Murrow—the Federal Communications Commission introduced the “Fairness Doctrine,” a policy requiring broadcast license holders to present controversial issues in a manner that fairly reflected differing viewpoints.
It was ultimately abolished by the Reagan administration in 1987, when Dan Rather was proving he was no Murrow, a move some suggest contributed to today's fractured media landscape. Yet while the notion of fairness on American broadcast airwaves may have perished, the topic of fairly adjudicating the fast-growing use of artificial intelligence by businesses has blossomed, at least at the state level.
In the wild west landscape that is AI, topics such as hiring practices, healthcare policymaking and mortgage loan adjudication roll by like virtual tumbleweeds. And keep in mind: there are no standard operating procedures when it comes to fairness and AI. Thus, algorithmic bias testing has never been more essential.
Bias testing is an essential process in examining the fairness, impartiality and neutrality of AI models. It detects potential discriminatory outcomes against specific individuals or groups. After all, just consider how biased algorithms can perpetuate and amplify existing societal inequalities and stereotypes; cause legal action against companies due to discriminatory practices; or produce inaccurate predictions, hindering business decision-making and eroding trust with customers and stakeholders.
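One common, concrete form such testing takes is a disparate impact check: comparing the rate of favorable outcomes across groups. A minimal sketch, using entirely hypothetical loan-approval data and the "four-fifths rule" threshold long used in US employment contexts as an illustrative red flag (not a legal determination):

```python
# Minimal sketch of a disparate impact check. All data is hypothetical.

def selection_rate(outcomes):
    """Fraction of applicants in a group with a favorable outcome (1)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(protected_outcomes, reference_outcomes):
    """Ratio of the protected group's selection rate to the reference
    group's. Values below 0.8 are a common red flag for adverse impact."""
    return selection_rate(protected_outcomes) / selection_rate(reference_outcomes)

# Hypothetical loan-approval outcomes: 1 = approved, 0 = denied.
group_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 1]   # reference group: 80% approved
group_b = [1, 0, 0, 1, 0, 1, 0, 0, 1, 0]   # protected group: 40% approved

ratio = disparate_impact_ratio(group_b, group_a)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.40 / 0.80 = 0.50
if ratio < 0.8:
    print("Below the four-fifths threshold -- flag for review.")
```

In practice, teams run checks like this across many metrics and protected attributes, but the core idea is this simple: measure outcomes by group and investigate the gaps.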
But whose responsibility is all of this to monitor and police? The Federal Government? States? Big Tech?
Republican leadership in Washington has held true to its tenets of smaller federal government and increased states’ rights, reflected in President Trump’s April 2025 executive order, “Restoring Equality of Opportunity and Meritocracy,” which fundamentally eliminated federal disparate impact enforcement, even though the laws and court precedent supporting fairness standards are still very much in place. As a result, more than a dozen states have enacted or are pursuing legislation to fill the void including California, Illinois, New York, Massachusetts and Colorado, among others.
This, however, creates a conundrum and great uncertainty for any business that conducts commerce across multiple states, which must manage AI bias regulations with a myriad of state regulators rather than, say, one or two in Washington. That uncertainty lowers confidence, particularly in the banking sector. Public trust also enters the equation: the economic outlook shapes how consumers view banks, and AI safety and privacy are paramount concerns that are becoming part of the public's lexicon. Moreover, companies can still be held liable later for what they do now, even if enforcement is less energetic at the moment.
It’s important to understand that bias can be introduced at various stages of the AI development lifecycle, and for many different reasons. Unrepresentative or incomplete training data can lead to biased model outcomes, while flaws in the logic or implementation of the algorithm can also produce unfair results. Even complete and accurate data sets can reflect a history of socioeconomic disparities (e.g., access to financial services, access to healthcare), so models built on them project the problems of the past into the future. Conscious or unconscious biases of the individuals building or deploying the AI system can be reflected in the model’s behavior, and applying an AI system to a context different from its intended purpose can introduce bias as well.
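Lifecycle bias of this kind often hides behind healthy-looking aggregate metrics, which is why testing typically breaks performance out by group. A minimal sketch with hypothetical prediction data showing how a tolerable overall error rate can mask a severe disparity for one group:

```python
# Illustrative sketch: an aggregate metric can hide a per-group
# disparity. All data is hypothetical (prediction, actual) pairs.

def error_rate(pairs):
    """Fraction of (prediction, actual) pairs that disagree."""
    return sum(pred != actual for pred, actual in pairs) / len(pairs)

# Hypothetical model results for two demographic groups.
group_a = [(1, 1), (1, 1), (0, 0), (1, 1), (0, 0), (1, 1), (0, 0), (1, 1)]
group_b = [(0, 1), (1, 1), (0, 1), (0, 0), (0, 1), (1, 1), (0, 1), (0, 0)]

print(f"Overall error rate: {error_rate(group_a + group_b):.2f}")  # 0.25
print(f"Group A error rate: {error_rate(group_a):.2f}")           # 0.00
print(f"Group B error rate: {error_rate(group_b):.2f}")           # 0.50
```

Here the model never errs on group A but is wrong half the time on group B; the blended 25 percent figure tells you neither.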
None of these examples, of course, are about deliberately bad intentions or are even necessarily about poor model development. They are largely about applying straightforward critical thinking and analysis to a business problem. Fair use of AI is good for the health of both business and consumers, bringing a broader audience into consideration for products and services.
Introducing a fairness testing and mitigation discipline into model development is also proven to improve the quality of the models themselves: data scientists think more critically about how the models are constructed, and businesses are finding that model performance can be maintained, and even improved, while they expand their target audiences.
Seen from this perspective, the real value proposition of fairness testing comes into focus: it helps ensure long-term compliance while avoiding near-term controversy, it protects the significant investments companies have already made in governance and risk controls, and it is critical to seizing opportunities for growth.
What are we hearing from our partners across financial services, insurance, and fintech? They are all taking a long-term view, warning that this is not the time for a compliance vacation.
Finally, never forget that addressing algorithmic bias is an ongoing and iterative process, requiring a combination of technical approaches, ethical considerations, and human oversight to ensure fair and equitable AI systems.
The FCC may have killed its Fairness Doctrine in the 1980s, but the idea of fairness is alive and well in the new AI wild west.