MyActuary Weekly

When the Algorithm Answers for Itself: AI Governance Regulations by Colorado for Insurance

An algorithm does not need to intend to discriminate in order to discriminate. It simply needs to be trained on data that reflects existing inequalities, optimized for patterns that correlate with protected characteristics, and deployed at scale across millions of pricing and underwriting decisions. This is what has been happening across segments of the American insurance market as predictive models proliferated faster than the governance frameworks designed to oversee them. Colorado has decided that this is no longer acceptable, and the regulatory framework it has built over the past four years is the most serious attempt yet by any US state to hold the insurance industry accountable for what its algorithms actually do rather than what they are intended to do.

Introduction

Artificial intelligence is actively reshaping how insurance companies assess risk, price policies, and make underwriting decisions. From algorithms that parse credit scores and social media habits to predictive models that evaluate health risk, AI-driven tools have become deeply embedded in the insurance industry. While these tools can improve efficiency and broaden access to coverage, they also carry a significant risk: the potential to discriminate against consumers based on race, gender, disability, or other protected characteristics, often invisibly and at scale.

Colorado has positioned itself at the forefront of addressing this challenge. In 2021, the state enacted Senate Bill 21-169, a landmark piece of legislation requiring insurers to demonstrate that their use of external consumer data, algorithms, and predictive models does not result in unfair discrimination. As of October 15, 2025, Colorado's amended Regulation 10-1-1 extends AI governance requirements to private passenger automobile insurers and health benefit plan insurers, marking a pivotal moment in the evolving relationship between AI, insurance, and consumer protection.

The Foundation: Senate Bill 21-169

The story of Colorado's AI governance in insurance begins with SB 21-169, signed into law on July 6, 2021. The legislation was a direct response to the rapid expansion of big data in insurance practices. Insurers had begun using a wide array of nontraditional data sources including credit histories, educational attainment, job titles, purchasing habits, civil judgments, court records, and even social media behavior to make underwriting and rating decisions. While these data points can correlate with actuarial risk, they also correlate closely with race, ethnicity, socioeconomic status, and other protected characteristics.

SB 21-169 prohibited insurers from using external consumer data and information sources, known as ECDIS, as well as algorithms and predictive models built on such data, in ways that result in unfair discrimination against consumers on the basis of race, color, national origin, religion, sex, sexual orientation, disability, gender identity, or gender expression. The first regulation under SB 21-169, Regulation 10-1-1, was adopted in 2023 and applied to life insurers.

Newsletter continues after job posts…

👔 New Actuarial Job Opportunities For The Week

Have you signed up for our weekly job alerts on Actuary List? We post 50 new handpicked jobs every week that match your expertise. To ensure you don’t miss out, sign up here. Here are a few examples of new jobs this week:

Interested in advertising with us? Visit our sponsor page

Expanding the Reach: Auto and Health Insurance

For years, auto and health insurance consumers remained outside the formal scope of Colorado's AI governance rules even as algorithmic tools continued to shape their premiums and coverage decisions. The amended Regulation 10-1-1, formally adopted on August 20, 2025 and effective October 15, 2025, extends the full governance and risk management framework requirements to private passenger automobile insurers and health benefit plan insurers.

Under the amended regulation, both auto and health insurers must build and maintain a comprehensive AI governance infrastructure. This includes written policies and procedures governing how ECDIS and predictive models are selected, developed, tested, deployed, and continuously monitored. The governance framework must be overseen at the board or senior leadership level, ensuring that accountability for AI use flows to the highest levels of the organization.

What Biased Algorithms Look Like in Practice

The regulation is not responding to an abstract theoretical concern. In stakeholder consultations conducted as part of Colorado's rulemaking process, it emerged that male drivers were being quoted auto insurance premiums that were $58 lower than those quoted to comparable female drivers, a disparity driven by algorithmic factors with no demonstrated relationship to actual driving risk. In health insurance, predictive models incorporating socioeconomic proxies can effectively penalize low-income or minority consumers with higher premiums, reduced coverage, or denial of claims. These outcomes do not require discriminatory intent. They require only that the model was trained on historical data reflecting existing inequalities and that nobody was required to test whether the outputs were fair before deploying them at scale.

What This Means for Actuaries

For actuaries, Colorado's framework is a professional responsibility story. Actuaries have always been the people in insurance organizations best positioned to understand what predictive models are actually doing, where the assumptions are questionable, and where the outputs may be producing results that are technically defensible under one metric and genuinely unfair under another. The skills required to meet Colorado's requirements (model inventory management, bias testing methodology, documentation of testing protocols, and governance framework design) are not foreign to the actuarial toolkit. They are extensions of what sound actuarial practice already demands.

In practical terms, actuaries working with Colorado-regulated insurers now need to be engaged in several areas that may previously have sat outside their formal scope. Model governance frameworks require actuarial input on how to define and test for unfair discrimination in a statistically rigorous way. The inventory of ECDIS requires someone who understands what external data sources are actually doing inside a model, not just what the vendor claims they do. Bias testing requires a methodology, and methodology in a context this consequential requires actuarial judgment rather than a vendor checkbox. The annual compliance reports that must be signed by a corporate officer are only credible if the underlying work was done by people who understand what they are attesting to.
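To make the bias-testing point concrete, here is a minimal sketch of the kind of outcome check an actuary might run on quoted premiums for otherwise comparable applicants split by a protected characteristic. The numbers and group labels are invented for illustration; they are not from Colorado's rulemaking, and a real methodology would control for legitimate rating factors before comparing groups.

```python
from statistics import mean

# Hypothetical quoted premiums for otherwise comparable applicants,
# split by a protected characteristic (illustrative numbers only).
quotes = {
    "group_a": [1180, 1205, 1190, 1210, 1195],
    "group_b": [1242, 1260, 1248, 1255, 1251],
}

def mean_premium_gap(quotes, reference, comparison):
    """Average dollar gap between the comparison and reference groups."""
    return mean(quotes[comparison]) - mean(quotes[reference])

def relative_gap(quotes, reference, comparison):
    """Gap expressed as a fraction of the reference group's mean premium."""
    ref = mean(quotes[reference])
    return (mean(quotes[comparison]) - ref) / ref

gap = mean_premium_gap(quotes, "group_a", "group_b")
print(f"Mean gap: ${gap:.2f}")  # positive => group_b is quoted more
print(f"Relative gap: {relative_gap(quotes, 'group_a', 'group_b'):.1%}")
```

A production test would go further, with significance testing and controls for legitimate risk factors, but even this simple comparison is the kind of repeatable, documented check the regulation expects rather than a one-time vendor attestation.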

The broader implication is that AI governance is becoming a core actuarial competency whether the profession formally claims it or not. If actuaries do not step into this space, it will be filled by data scientists, compliance officers, and attorneys who lack the combination of statistical rigor, risk management orientation, and professional accountability standards that make actuarial involvement genuinely valuable. Colorado's framework is an invitation for the actuarial profession to own a problem it is uniquely qualified to solve.

Compliance Timelines and Practical Starting Points

Auto and health insurers newly covered by the regulation must submit an interim compliance progress report by December 1, 2025, with annual compliance reports required beginning July 1, 2026. Reports must be signed by a corporate officer who attests to their accuracy, and insurers falling short of requirements must submit corrective action plans.

For any insurer beginning this work now, the practical starting point is a thorough inventory and gap analysis. Every use of external consumer data and every algorithmic model touching underwriting, pricing, or claims needs to be mapped and documented before any governance framework can be built around it. Many insurers will find that this inventory exercise alone reveals uses of data and models that senior leadership did not know existed. That discovery is uncomfortable but it is the point. You cannot govern what you have not identified, and you cannot test for bias in a model you are not aware of.
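The inventory exercise above can be sketched as a simple record per model plus a gap report. This is an illustrative Python structure, not anything prescribed by the regulation; the field names and example entries are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ModelInventoryEntry:
    """One row in a hypothetical model/ECDIS inventory (fields illustrative)."""
    name: str
    line_of_business: str                 # e.g. "private passenger auto"
    decision_use: str                     # underwriting, pricing, or claims
    ecdis_inputs: list = field(default_factory=list)  # external data sources
    bias_tested: bool = False
    documented: bool = False

def gap_report(inventory):
    """Names of entries still missing bias testing or documentation."""
    return [e.name for e in inventory if not (e.bias_tested and e.documented)]

inventory = [
    ModelInventoryEntry("credit_tier_model", "auto", "pricing",
                        ecdis_inputs=["credit history"],
                        bias_tested=True, documented=True),
    ModelInventoryEntry("claims_triage_model", "health", "claims",
                        ecdis_inputs=["prior utilization"]),
]
print(gap_report(inventory))  # entries with open governance gaps
```

Even a spreadsheet version of this structure is enough to start: the value is in forcing every model and every external data feed to appear on a list someone is accountable for.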

Colorado as a National Model: Is This Good Policy?

Colorado's approach deserves credit for being serious. The phased rollout starting with life insurance, the stakeholder consultation process, and the line-specific tailoring of requirements all reflect genuine regulatory craft rather than legislation designed primarily for press releases. The requirement for ongoing monitoring and annual reporting rather than one-time certification is particularly important. Bias in algorithmic systems is not a problem you solve once and put away. Models drift, data inputs change, and new proxy variables emerge. A continuous accountability loop is the appropriate response to a continuous risk.

Other states are watching. New York, California, Connecticut, and New Jersey are all considering similar frameworks. The national conversation about AI regulation in insurance is gaining momentum, and Colorado's multi-year effort positions it as the reference point others will build from, adapt, and in some cases improve upon. Whether you view this as an overdue correction or an administrative burden will likely depend on whether you have ever had to explain to a consumer why an algorithm decided they were a worse risk than their neighbor.

Looking for clarity on consulting, income, or next steps?

Last week we covered What Small Insurers Get Wrong About the Appointed Actuary.
👉 If you missed last week’s issue, you can find it here.

💼 Sponsor Us

Get your business or product in front of thousands of engaged actuarial professionals every week.

💥 AI Prompt Of The Week

About This Prompt

This prompt helps actuaries prepare for high-impact networking or client meetings. By asking AI for guidance, they get tailored strategies, talking points, and confidence to approach conversations with purpose and professionalism.

The Prompt:

I’m meeting with John Doe (insert person’s name) for a networking call (insert purpose here, such as interview or client prospect). John works at Company ABC in the risk management department (insert department) as a Vice President (insert title). Please suggest how I should approach this meeting and give me some talking points tailored to his role and company.

🌟 That’s A Wrap For Today!

We’d love your thoughts on today’s newsletter to make MyActuary Weekly even better. Let us know below:
