Clifford Chance

Talking Tech

Decoding SB 24-205: Colorado’s New AI Legislation

Artificial Intelligence 22 July 2024

On May 17, 2024, Colorado enacted the Concerning Consumer Protections in Interactions with Artificial Intelligence Systems Act (the Colorado AI Act), making it the first U.S. state to pass comprehensive legislation regulating AI.  Starting on February 1, 2026, developers and deployers of AI systems affecting consumers will be required to comply with a range of requirements, or risk enforcement action by the Colorado Attorney General (AG).

Key Context and Definitions

The Colorado AI Act requires developers and deployers of high-risk AI systems to use reasonable care to protect consumers from algorithmic discrimination. It also contains certain requirements for consumer-facing AI systems that are not high-risk.

Key definitions are as follows:

  • "Algorithmic discrimination" is any condition in which the use of an AI system results in an "unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under state or federal law."
  • "Consequential decision" is "a decision that has a material legal or similarly significant effect on the provision or denial to any consumer of, or the cost or terms of: (a) education enrollment or an education opportunity; (b) employment or an employment opportunity; (c) a financial or lending service; (d) an essential government service; (e) health-care services; (f) housing; (g) insurance; or (h) a legal service".
  • "Deployer" is a person doing business in the state of Colorado that deploys a high-risk AI system.
  • "Developer" is a person doing business in the state of Colorado "that develops or intentionally and substantially modifies" an AI system.
  • "High-risk AI system" is any AI system "that, when deployed, makes, or is a substantial factor in making, a consequential decision." This definition is subject to exceptions, including for AI systems intended to perform a narrow procedural task, and for anti-malware, anti-virus, and AI-enabled video game technologies, provided they are not a substantial factor in making a consequential decision.
  • "Intentional and substantial modification" means a deliberate change to an AI system that results in any new reasonably foreseeable risk of algorithmic discrimination.  The definition is subject to exceptions.

Key Colorado AI Act Requirements

The Colorado AI Act places certain distinct obligations on AI developers and deployers, which include the following: 

Developer Obligations

A developer is required to equip deployers with comprehensive information about high-risk AI systems to enable deployers' adequate assessment of such systems' reasonably foreseeable discriminatory impacts. The information must include:

  • High-level summaries of the type of data used to train a system.
  • The system's known or reasonably foreseeable limitations, including risks of algorithmic discrimination.
  • The intended purpose, benefits, and uses of the system.
  • The criteria used to evaluate the system and measures taken to mitigate known or reasonably foreseeable algorithmic discrimination risks.
  • Data governance measures.
  • Intended outputs.

A developer must ensure that its website clearly outlines the types of high-risk AI systems it makes available, along with the measures the developer takes to manage algorithmic discrimination risks. This information must be kept up to date.

Deployer Obligations

A deployer must establish a comprehensive risk management policy and program designed to identify and mitigate reasonably foreseeable risks of algorithmic discrimination from high-risk AI systems.  Such policy and program must be regularly reviewed and updated to ensure that they remain "reasonable."  "Reasonableness" is assessed by considering the National Institute of Standards and Technology's AI Risk Management Framework, the size and complexity of the deployer, the sensitivity and volume of data the deployer processes, and other factors.   

Deployers are required to conduct an impact assessment for any high-risk AI system at least once a year, and within 90 days after any intentional and substantial modification to the system is made available.  The impact assessment must include a range of information, including, for example:

  • A statement describing the purpose, intended use, deployment context, and benefits afforded by the system.
  • An analysis of any known or reasonably foreseeable risks of algorithmic discrimination and steps taken to mitigate them.
  • A description of the categories of data for inputs and outputs.
  • The post-deployment monitoring and safeguards put in place by the deployer.

A deployer may leverage another impact assessment for compliance with this requirement, and may employ a third party to conduct such assessment. Deployers must keep records associated with impact assessments for at least three years after discontinuing the use of an AI system, and may be requested to provide these to the AG on 90 days' notice.

A deployer that deploys a high-risk AI system to make, or be a substantial factor in making, a consequential decision about a consumer must notify the consumer of, among other things, the purpose of the system and the nature of the consequential decision, disclose the deployer's contact information, and provide the consumer the right to opt out of the processing of their personal data.

Where a deployer deploys a high-risk AI system to make, or be a substantial factor in making, an adverse consequential decision about a consumer, the deployer must disclose to the consumer, among other things, the type and sources of data used to make the decision and how the system contributed to the decision. Consumers must be given an opportunity to correct any incorrect personal information and to appeal an adverse consequential decision, with human review where feasible.

Analogous Developer and Deployer Obligations

The Colorado AI Act also imposes certain analogous obligations on both developers and deployers:

  • Disclosures.  Both developers and deployers that provide a consumer-facing AI system must disclose to consumers that they are interacting with an AI system, unless this "would be obvious to a reasonable person." 
  • Incident Reporting.  A developer must report to the AG, and to all known deployers and other developers of a high-risk AI system, any known or reasonably foreseeable risks of algorithmic discrimination that could arise from intended uses of such system, within 90 days of the developer: (i) discovering that the system has been deployed and has caused or is reasonably likely to cause algorithmic discrimination; or (ii) receiving a credible report of algorithmic discrimination from a deployer.

Similarly, within 90 days of a discovery, deployers must notify the AG if a high-risk AI system they have deployed has caused algorithmic discrimination.

Enforcement

The AG is empowered to investigate potential violations of the Colorado AI Act. The most significant enforcement power is the issuance of notices of violation to developers, deployers, and other persons who are alleged violators. A violation would also constitute an "unfair trade practice" under Colorado state law. The enforcement powers extend to bringing legal actions against non-compliant entities, including injunctions to prevent ongoing violations, civil penalties to deter future non-compliance, and restitution for consumers harmed by discriminatory AI practices. The specifics of how these powers will be exercised are likely to be further clarified through rulemaking and enforcement over time.  There is no private right of action.

Practical Considerations

Developers and deployers conducting business in Colorado are encouraged to review the Colorado AI Act requirements and prepare for compliance with detailed obligations, including:

  • Understanding the impact of AI systems on consumers.
  • Drafting disclosures regarding consumer interactions with AI.
  • Disclosing reasons for adverse consequential decisions and enabling attendant processes to correct inaccuracies and handle appeals.
  • Preparing for incident disclosure requirements.

Want to know more? Clifford Chance Partner James McPhillips spoke to Bloomberg Law about the new Act. 

Clifford Chance can help entities navigate the complexities of the new law, including regulatory compliance and guidance on governance and other procedures.