Compliance & Governance — March 2026
Colorado SB 24-205: What Your AI Impact Assessment Actually Requires
Colorado’s AI Act takes effect June 30, 2026 — unless the legislature rewrites it first. A working group reached consensus on a repeal-and-replace plan on March 17. But the original law is still on the books. This is the technical guide to what compliance actually looks like, regardless of which version you’re preparing for.
Current Status
The Law Is Still on the Books
Here’s the timeline. Every attempt to amend, repeal, or gut SB 24-205 has failed. What passed was a five-month delay.
May 17, 2024
SB 24-205 signed into law
Governor Polis signs with reservations. Urges legislators to “significantly improve” the law before it takes effect. Original effective date: February 1, 2026.
May 2025
SB 25-318 amendment effort fails
Senate Majority Leader Rodriguez introduces comprehensive amendments. Bill postponed indefinitely. Last-ditch delay attempt via unrelated bill also fails.
August 2025
Special session — 150+ lobbyists, 4 bills, 1 outcome
Four different bills introduced: full rewrite, total repeal, scope reduction, and minimal disclosure. Negotiations collapse over liability provisions. Rodriguez guts his own bill and replaces it with a find-and-replace: every “February 1, 2026” becomes “June 30, 2026.” That’s it.
August 28, 2025
SB 25B-004 signed — delay to June 30, 2026
Five-month delay. No substantive changes. Every obligation, rebuttable presumption, and exemption remains unchanged.
January 2026
2026 legislative session opens — no agreement
Expected reforms don’t materialize. Industry pushes for softening or repeal. Consumer groups push to preserve. No consensus.
March 17, 2026
Governor’s working group reaches consensus
Polis announces that a working group of industry and civil rights experts he assembled has reached unanimous agreement on a plan to rework the Colorado AI Act. Repeal-and-replace bill language is expected publicly in late March.
June 30, 2026
Current law takes effect — unless replaced
If no replacement bill passes, the original SB 24-205 provisions become enforceable as written. AG enforcement begins.
The Bottom Line
The law may be rewritten. But it hasn’t been rewritten yet. And every proposed replacement has preserved the core requirements: impact assessments, risk management programs, and consumer disclosures. If you’re building for the original law, you’re building for whatever comes next. The technical requirements are converging, not diverging.
Requirements
What the Law Actually Says
SB 24-205 creates obligations for two roles: developers (who build or substantially modify AI systems) and deployers (who use them to make consequential decisions). Most Colorado companies are deployers.
| Requirement | Developer | Deployer | What It Means Technically |
| --- | --- | --- | --- |
| Duty of reasonable care | ✔ | ✔ | Document that you tested for algorithmic discrimination. “We didn’t know” is not a defense. |
| Impact assessment | — | ✔ | Annual assessment + within 90 days of substantial modification. Must cover purpose, data, performance, discrimination risks, and mitigation steps. |
| Risk management program | — | ✔ | Documented policy with principles, processes, and personnel for identifying and mitigating discrimination risks. |
| Technical documentation | ✔ | — | Model cards, dataset cards, or impact assessments sufficient for deployers to complete their own assessments. |
| Public statement | ✔ | ✔ | Developers: published summary of high-risk systems and risk management. Deployers: website disclosure of high-risk AI use. |
| Consumer notice | — | ✔ | Before a consequential decision: disclose AI involvement, purpose, data sources, right to correct data, right to appeal. |
| Discrimination discovery | ✔ | ✔ | If you discover algorithmic discrimination, notify the AG within 90 days. |
| Annual review | — | ✔ | Yearly review of each deployed high-risk system to verify it’s not causing discrimination. |
The Rebuttable Presumption
Here’s the incentive structure: if you comply with the law’s requirements, there’s a rebuttable presumption that you used reasonable care. Translation: if the AG comes knocking and you have documented impact assessments, a risk management program, and evidence of testing — you have a legal defense. If you don’t, you don’t. The law also provides an affirmative defense if you adopt NIST AI RMF or ISO/IEC 42001 and take steps to discover and correct violations.
Technical Guide
What an Impact Assessment Actually Requires
The law says you need an impact assessment. Here’s what that means for your engineering team, not your legal team.
The impact assessment is the centerpiece of SB 24-205 compliance. It must be completed before or at first deployment, annually thereafter, and within 90 days of any substantial modification. The law specifies what it must contain. Here’s each requirement translated into technical deliverables:
1. Purpose and Intended Use
📋 Document the system’s purpose, the decisions it influences, and the population affected. This is your system design document, not marketing copy. “AI-powered hiring assistant” is not sufficient. “LLM-based resume screening system that filters applications for engineering roles based on keyword matching and semantic similarity, influencing which candidates proceed to human review” is.
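The law doesn’t prescribe a format for any of this. As a starting point, here is a minimal sketch of a purpose record; the field names are ours, not the statute’s:

```python
from dataclasses import dataclass

# Hypothetical schema: SB 24-205 specifies the content of an impact
# assessment, not its format. Field names here are illustrative.
@dataclass
class SystemPurposeRecord:
    system_name: str
    decision_influenced: str   # the consequential decision (e.g., employment screening)
    population_affected: str
    intended_use: str          # specific enough to distinguish misuse from use
    human_review_point: str    # where a human enters the loop, if anywhere

record = SystemPurposeRecord(
    system_name="resume-screener-v3",
    decision_influenced="Which engineering applicants proceed to human review",
    population_affected="All applicants to engineering roles",
    intended_use="LLM-based keyword and semantic-similarity resume filtering",
    human_review_point="Recruiter reviews every candidate the model advances",
)
```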
2. Data Description
📊 Describe the data the system processes: sources, types, whether it includes protected characteristics (race, sex, disability, age, veteran status, etc.), and how data quality is maintained. If you’re using a third-party model via API, document what data you send to it and what data it returns. If you don’t know what data the model was trained on, document that you don’t know, and note the risk.
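For third-party models, the honest version of this record documents gaps as much as facts. A sketch, with illustrative field names:

```python
# Hypothetical data-inventory entry for a vendor model accessed via API.
# The point is to document flows and gaps, not to achieve false precision.
data_description = {
    "sources": ["ATS resume text", "application form fields"],
    "sent_to_vendor_api": ["resume text", "job description"],
    "returned_from_vendor_api": ["relevance score", "extracted skills"],
    "protected_characteristics": ["age (inferable from dates)", "sex (inferable from names)"],
    "training_data_known": False,  # a documented gap is itself a risk finding
}
```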
3. Performance and Limitations
⚙ Document known performance metrics, error rates, and limitations. This is where adversarial testing results belong. If your model has a 55% breach rate when connected to tools, that’s a documented limitation. If encoding bypass attacks succeed at 85%, that’s a known risk. Performance documentation without adversarial testing is incomplete.
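What the assessment needs from your harness is simple: per-category breach rates you can cite and reproduce. Here is a minimal aggregation sketch; the (category, succeeded) pair format is an assumption about your own test output:

```python
from collections import defaultdict

def breach_rates(results):
    # results: an iterable of (category, succeeded) pairs produced by your
    # own adversarial harness; the harness itself is out of scope here.
    totals, breaches = defaultdict(int), defaultdict(int)
    for category, succeeded in results:
        totals[category] += 1
        breaches[category] += int(succeeded)
    return {c: breaches[c] / totals[c] for c in totals}

# Aggregate tool-connected runs separately from benchmark-style runs,
# since the rates can diverge sharply.
print(breach_rates([
    ("encoding-bypass", True), ("encoding-bypass", True),
    ("prompt-injection", True), ("prompt-injection", False),
]))  # {'encoding-bypass': 1.0, 'prompt-injection': 0.5}
```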
4. Algorithmic Discrimination Risk Analysis
⚠ This is the core requirement. Analyze whether the system creates risks of unlawful differential treatment based on protected characteristics. For LLM-based systems, this means testing whether the model produces different outputs, recommendations, or decisions for different demographic groups. Standard safety benchmarks don’t test this. You need targeted bias evaluation across protected classes, not just a Promptfoo pass rate.
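One widely used screening heuristic is the four-fifths rule, borrowed from U.S. employment law: it flags any group whose favorable-outcome rate falls below 80% of the best-performing group’s. SB 24-205 doesn’t define a numeric threshold, so treat this as a starting check, not a compliance bar:

```python
def adverse_impact_ratios(outcomes):
    # outcomes: dict mapping group -> list of booleans (True = favorable decision),
    # e.g., from paired-prompt tests where only the demographic signal varies.
    rates = {g: sum(v) / len(v) for g, v in outcomes.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

outcomes = {
    "group_a": [True, True, False, True],    # 75% advanced to human review
    "group_b": [True, False, False, False],  # 25% advanced
}
ratios = adverse_impact_ratios(outcomes)
flagged = [g for g, r in ratios.items() if r < 0.8]  # four-fifths rule
print(ratios, flagged)  # group_b ratio is ~0.33, below 0.8, so it is flagged
```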
5. Mitigation Steps
🛡 Describe the steps you’ve taken to mitigate identified risks. This is where your defense architecture matters. Input normalization, semantic classification, output gating, human-in-the-loop review. Document what you deployed and how it performs. If your defense proxy reduces breach rates from 47% to 0%, that’s your mitigation evidence. If you have no mitigations, document that too, and explain why.
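Mitigation evidence is the same test suite run twice, once before the defense and once after. A sketch with placeholder numbers:

```python
# Placeholder rates: run your real suite before and after deploying the defense.
baseline  = {"prompt-injection": 0.47, "encoding-bypass": 0.85}
mitigated = {"prompt-injection": 0.00, "encoding-bypass": 0.04}

for category in baseline:
    reduction = baseline[category] - mitigated[category]
    print(f"{category}: {baseline[category]:.0%} -> {mitigated[category]:.0%}, "
          f"reduction {reduction:.0%}, residual {mitigated[category]:.0%}")
```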
6. Post-Deployment Monitoring
🔍 Describe how you monitor the system after deployment for discrimination, drift, and new risks. Continuous monitoring is not optional; the law requires annual review at minimum. If your monitoring catches a problem, you have 90 days to notify the AG. If you don’t have monitoring, you won’t catch the problem, and you won’t have the affirmative defense.
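One piece of plumbing worth building now is the 90-day clock. A minimal sketch; the function and field names are ours:

```python
from datetime import date, timedelta

def on_discrimination_finding(found_on: date) -> date:
    # Hypothetical incident hook: the statute's 90-day attorney-general
    # notification window starts when the discrimination is discovered.
    ag_deadline = found_on + timedelta(days=90)
    # ...open an incident, attach the evidence, loop in counsel...
    return ag_deadline

print(on_discrimination_finding(date(2026, 7, 15)))  # 2026-10-13
```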
The Problem
What Most Organizations Are Missing
The law requires testing for algorithmic discrimination. Most organizations are doing standard safety benchmarks. These are not the same thing.
Standard safety benchmarks test whether a model produces harmful text. They don’t test whether a model treats protected groups differently. They don’t test what happens when the model is connected to real tools. They don’t test whether your defense stack actually works. And they don’t produce the documentation the law requires.
Our assessment of GPT-4.1 illustrates the gap: the model passes 84% of standard safety benchmarks, but when connected to enterprise tools via MCP, breach rates jump to 55–75%. A compliance program built on benchmark pass rates alone creates what we call the “safety illusion” — documentation that looks complete but doesn’t actually test the risks the law is designed to address.
What SB 24-205 Compliance Testing Actually Looks Like
A compliant impact assessment needs adversarial testing across multiple dimensions: bias testing across protected classes, security testing for prompt injection and data exfiltration, tool-use testing if the system has tool access, cross-model comparison if you’re evaluating alternatives, and defense verification if you’ve deployed mitigations. Every finding needs to be reproducible, documented, and mapped to a recognized risk framework (NIST AI RMF or ISO/IEC 42001) to support the rebuttable presumption and affirmative defense.
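Concretely, each finding should read like a record, not a narrative. A hypothetical example: the field names are ours, and the framework IDs are drawn from the mapping table below.

```python
# Illustrative finding record; values are placeholders, not real results.
finding = {
    "id": "F-2026-014",
    "description": "Indirect prompt injection via tool output exfiltrates applicant PII",
    "reproduction_rate": "17/20 runs",
    "severity": "high",
    "mappings": {
        "nist_ai_rmf": ["MANAGE 2.1", "MEASURE 2.6"],
        "owasp_llm_top10": ["LLM01: Prompt Injection"],
        "mitre_atlas": ["AML.T0051"],
    },
    "mitigation": "Output gating on tool responses; re-test scheduled post-fix",
}
```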
Compliance Mapping
SB 24-205 × NIST AI RMF × OWASP × MITRE ATLAS
The law points to NIST AI RMF and ISO/IEC 42001 for the affirmative defense. Here’s how the requirements map across frameworks.
| SB 24-205 Requirement | NIST AI RMF | OWASP LLM Top 10 | MITRE ATLAS |
| --- | --- | --- | --- |
| Algorithmic discrimination testing | MAP 2.3, MEASURE 2.6 | LLM02: Output Handling | AML.T0048 |
| Impact assessment | GOVERN 1.2, MAP 1.1 | — | — |
| Risk management program | GOVERN 1.1–1.7 | — | — |
| Prompt injection defense | MANAGE 2.1 | LLM01: Prompt Injection | AML.T0051 |
| Data exfiltration prevention | MANAGE 2.2 | LLM06: Sensitive Info | AML.T0024 |
| Tool-use security | MANAGE 3.1 | LLM07: Insecure Plugin | AML.T0054 |
| Post-deployment monitoring | MEASURE 4.1–4.3 | LLM09: Overreliance | AML.T0043 |
| Consumer disclosure | GOVERN 4.1 | — | — |
Why This Matters for the Affirmative Defense
The law provides an affirmative defense for organizations that adopt NIST AI RMF or ISO/IEC 42001 and take steps to discover and correct violations. Adopting the framework without testing is not sufficient. Testing without adopting the framework is not sufficient. You need both: the governance structure and the technical evidence. Your impact assessment is the bridge between the two.
Action Items
The 90-Day Compliance Roadmap
June 30 is ~100 days away. Here’s the sequence.
Weeks 1–2: Inventory
📦 Map every AI system that touches a consequential decision (employment, housing, healthcare, education, finance, insurance, legal services, government services). Include vendor tools embedded in HR platforms, underwriting systems, claims processing, and customer service.
📦 For each system: identify whether you are a developer, deployer, or both. Identify the data processed, decisions influenced, and populations affected (see the record sketch below).
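A flat record per system is enough to start. A sketch, with illustrative fields and a placeholder vendor name:

```python
# Hypothetical inventory entry; field names and values are illustrative.
inventory = [
    {
        "system": "vendor-hr-screening",        # placeholder name
        "role": "deployer",                     # developer, deployer, or both
        "consequential_decision": "employment",
        "data_processed": ["resumes", "assessment scores"],
        "population_affected": "job applicants",
        "vendor": "ExampleHR Inc.",             # placeholder
        "high_risk": True,                      # triggers the full requirement set
    },
]
```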
Weeks 3–4: Governance
📜 Adopt NIST AI RMF or ISO/IEC 42001 as your governance framework. Document the adoption. This is the foundation of your affirmative defense.
📜 Draft your risk management policy: principles, processes, personnel, escalation procedures. Assign ownership for each high-risk system.
Weeks 5–8: Testing
🛡 Conduct adversarial security assessments on each high-risk system. Test for bias across protected classes, prompt injection, data exfiltration, tool abuse, and output integrity.
🛡 If the system has tool access, test with tools connected. Standard benchmarks are necessary but not sufficient.
🛡 Document findings with reproduction rates, severity scores, and framework mappings (NIST, OWASP, MITRE).
Weeks 9–10: Remediation
🔧 Deploy mitigations for identified risks. Document what you deployed, how it performs, and what residual risk remains.
🔧 Re-test after remediation to verify defenses work. This is the “closed loop” — the testing evidence that your mitigations actually reduce the documented risks.
Weeks 11–12: Documentation & Monitoring
📝 Compile impact assessment for each high-risk system. Include purpose, data, performance, discrimination risk analysis, mitigation steps, and monitoring plan.
📝 Prepare consumer-facing disclosures. Draft website notice of high-risk AI use. Prepare pre-decision and adverse-decision notification templates.
📝 Establish continuous monitoring: schedule annual reviews, define substantial modification triggers for 90-day reassessment, and create AG notification procedures (a scheduling sketch follows).
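A scheduling sketch for those triggers, with illustrative dates:

```python
from datetime import date, timedelta

deployed = date(2026, 6, 30)
annual_review_due = deployed + timedelta(days=365)

# Any substantial modification restarts a 90-day reassessment window.
last_substantial_modification = date(2026, 9, 10)  # e.g., model swap or fine-tune
reassessment_due = last_substantial_modification + timedelta(days=90)

print(annual_review_due, reassessment_due)  # 2027-06-30 2026-12-09
```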
Exemptions
The Small Business Exemption
Under 50 Employees? Read This First.
If you employ fewer than 50 full-time employees, you may be exempt from the impact assessment, risk management program, and website disclosure requirements. But there are conditions. The exemption only applies if you (1) don’t use your own data to train or significantly customize the AI system, and (2) use the system for its intended purpose as documented by the developer. If you’re fine-tuning a model on your own data or repurposing it beyond the developer’s intended use, the full requirements apply regardless of company size. We’d rather you know that upfront than find out after paying for compliance work you didn’t need.
Even with the exemption, you still have the general duty of reasonable care, you still need to provide consumer disclosures when AI makes consequential decisions, and you still need to provide appeal rights for adverse decisions. The documentation burden is lighter, but the consumer-facing obligations remain.
This exemption covers most startups and small businesses using off-the-shelf AI tools. If you’re a 30-person company using an LLM-powered HR screening tool from a vendor, you don’t need to complete your own impact assessment, but the vendor (as developer) does, and they need to provide you with the documentation to understand the system’s risks.
Need Help With Your Impact Assessment?
CLS Security Labs provides the adversarial testing, bias evaluation, and compliance-mapped documentation that SB 24-205 impact assessments require. Every finding scored across five severity dimensions by three independent judges and mapped to NIST AI RMF, OWASP LLM Top 10, and MITRE ATLAS.
Forge Assessments start at $2,500. Full CLAP Assessments, including remediation and re-verification, start at $15,000. Both produce audit-ready documentation for Colorado AG compliance. Not sure if your AI systems qualify as high-risk under SB 24-205? Book a free 30-minute consultation — we’ll help you figure that out before you spend anything.
Disclaimer
This blog post is for informational purposes only and does not constitute legal advice. CLS Security Labs is not a law firm. The Colorado AI Act is subject to ongoing legislative revision. The repeal-and-replace bill referenced in this post may alter specific requirements. Organizations should consult qualified legal counsel for compliance guidance specific to their situation. CLS Labs provides the technical testing and documentation components of impact assessments, not legal interpretation.