The Outsize Impact of the Colorado AI Act
On February 1, 2026, every company doing business in Colorado will need to comply
The Colorado Artificial Intelligence Act (CAIA) applies to companies that develop, deploy, or operate AI systems within Colorado, regardless of where they are headquartered. This includes businesses that offer products or services in Colorado, such as banks, which often use AI for loan decisions, fraud detection, and customer service.

To comply, banks must conduct detailed risk and impact assessments for high-risk AI systems, ensuring these systems do not result in unfair discrimination or harm to customers. Transparency is a critical requirement: banks must disclose when AI is used to make decisions that affect customers (e.g., loan approvals) and give customers the right to opt out of fully automated decision-making.

Banks are also required to implement robust AI governance frameworks, including regular audits of AI models to prevent bias, ensure fairness, and maintain accuracy. In addition, banks must adhere to Colorado's data privacy laws, ensuring that customer data used by AI systems is secure, minimized, and processed only with explicit consent when sensitive information is involved. Continuous monitoring and reporting are also essential, with regulators empowered to review compliance and impose penalties for violations.
See our video on YouTube discussing how to comply with the rule.
Using the NIST AI Risk Management Framework as a Safe Harbor
As state and local AI rules start to proliferate in the US, Colorado (and hopefully other states) provides that if a company is using the national NIST AI RMF, that serves as a safe harbor for compliance with the (often more detailed) state rules.
Introduction to the NIST AI RMF
The NIST Artificial Intelligence Risk Management Framework (AI RMF) is a voluntary framework designed to help individuals, organizations, and society manage the risks associated with artificial intelligence (AI) systems. It promotes the trustworthy and responsible use of AI, ensuring that AI developments align with societal values and legal requirements. This framework is structured around four key functions: Governing, Mapping, Measuring, and Managing. Here's an overview of each section and guidance on how to comply with the framework.
1. Governing
Governing involves setting the policies and procedures necessary to manage AI risks effectively, ensuring that AI deployment aligns with organizational values, legal requirements, and societal norms. It starts with a policy covering what you will (and will not) do with AI, as well as which models you will use. Make sure your company designates a person responsible for AI.
2. Mapping
Map who can do what with AI, along with the potential risks AI creates. At a large company, onboarding an employee should automatically set them up with the appropriate AI access. On the risk side, high-probability/high-impact risks are the most important to address, including cybersecurity specific to AI and maintaining data privacy and security.
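The two mapping tasks above can be sketched in code. This is a minimal, hypothetical illustration: the role names, access tiers, and risk scores are assumptions for the example, not part of the NIST AI RMF or any particular product.

```python
# Illustrative sketch: (1) map roles to default AI access at onboarding,
# (2) rank risks by probability x impact so the highest-priority items surface first.
# All role names, system names, and scores below are hypothetical.

ROLE_ACCESS = {
    "analyst": {"chat_assistant"},
    "engineer": {"chat_assistant", "code_model", "model_admin_console"},
    "support": {"chat_assistant", "customer_service_bot"},
}

def provision_ai_access(role: str) -> set[str]:
    """Return the AI systems a new hire in `role` is granted by default."""
    return set(ROLE_ACCESS.get(role, set()))

def risk_score(probability: float, impact: float) -> float:
    """Simple probability-times-impact score for ranking risks."""
    return probability * impact

# Hypothetical risk register with estimated probability and impact (0 to 1).
risks = {
    "ai_cybersecurity_breach": risk_score(0.3, 0.9),
    "data_privacy_violation": risk_score(0.4, 0.8),
    "model_drift": risk_score(0.6, 0.4),
}
top_risk = max(risks, key=risks.get)
```

In practice the access map would live in an identity-management system rather than a dictionary, but the principle is the same: access follows role automatically, and risk attention follows the score.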
3. Measuring
Measuring involves assessing how well AI systems perform and how effectively they mitigate identified risks. Part of this is a full audit tool that tracks what each user, agent, and model does, including model updates. This data is available for compliance and regulatory purposes, but the technology team can use it as well. Users can also report problems with the AI, creating a key "feedback loop" for issues that might otherwise go unnoticed.
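An audit trail of this kind can be sketched as an append-only event log. The field names and event types below are illustrative assumptions, not a description of any specific product.

```python
# Illustrative sketch of an AI audit trail: every user, agent, and system action
# (including model updates) is appended to a log, and users can file feedback
# reports through the same mechanism. Field names are hypothetical.
import datetime

audit_log: list[dict] = []

def record_event(actor: str, actor_type: str, action: str, model: str) -> dict:
    """Append one auditable event; actor_type is 'user', 'agent', or 'system'."""
    event = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "actor": actor,
        "actor_type": actor_type,
        "action": action,
        "model": model,
    }
    audit_log.append(event)
    return event

def report_problem(user: str, model: str, description: str) -> dict:
    """User-facing feedback loop: flag an AI output for review."""
    return record_event(user, "user", f"feedback: {description}", model)

record_event("jdoe", "user", "prompt_submitted", "internal-model-v2")
record_event("ops", "system", "model_updated", "internal-model-v2")
report_problem("jdoe", "internal-model-v2", "response appeared biased")
```

Because feedback reports flow through the same log as every other event, compliance reviewers and the technology team see one consistent record.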
4. Managing
Managing AI risks involves reactive measures, such as responding to a cyber incident involving AI, as well as proactive measures, such as re-authorizing your AI admins on a regular basis. Develop and implement strategies to reduce identified risks to acceptable levels, and prepare for potential incidents by developing response plans and conducting regular drills.
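The proactive re-authorization step above can be sketched as a simple cadence check. The 90-day window and admin records are illustrative assumptions; choose an interval that fits your own policy.

```python
# Illustrative sketch of proactive risk management: flag AI admins whose last
# authorization review is older than a fixed interval. Dates and the 90-day
# window below are hypothetical.
from datetime import date, timedelta

REAUTH_INTERVAL = timedelta(days=90)

# admin -> date of last authorization review (hypothetical records)
admins = {
    "alice": date(2025, 1, 10),
    "bob": date(2024, 9, 1),
}

def needs_reauthorization(admin: str, today: date) -> bool:
    """True if the admin's last review is older than the allowed interval."""
    return today - admins[admin] > REAUTH_INTERVAL

due = [a for a in admins if needs_reauthorization(a, date(2025, 2, 1))]
```

A real deployment would run this check on a schedule and open a review ticket for each flagged admin, but the core logic is just a date comparison.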
Conclusion
AI regulation is already here. Artificial Intelligence Risk, Inc. has an award-winning enterprise AI platform customized for your industry. This includes a built-in compliance system for regulated industries (e.g., banks and healthcare) that facilitates full compliance with the NIST AI RMF. Please contact us for a free demo or trial. Check us out at https://www.aicrisk.com
AIR-GPT was used in the production and editing of this article.
Copyright © 2025 by Artificial Intelligence Risk, Inc. This article may not be redistributed in whole or part without express written permission from the author.