The artificial intelligence company xAI on Thursday filed a federal lawsuit seeking to block enforcement of Colorado’s new artificial intelligence (AI) law before it takes effect.
The lawsuit names Colorado Attorney General Philip Weiser and aims to block the Consumer Protections for Artificial Intelligence (CPAI) law, which will require developers of “high-risk” AI systems to exercise “reasonable care” to protect consumers from algorithmic discrimination.
The complaint raises six constitutional claims, focused primarily on First Amendment and Equal Protection grounds. The suit argues that developing an AI model is an “expressive act” protected by the First Amendment, and that the CPAI effectively demands the company redesign its systems, forcing it to alter training data and system prompts to conform to the state’s views on fairness and race. Citing the Supreme Court rulings in 303 Creative v. Elenis and Moody v. NetChoice, xAI contends that the CPAI amounts to a government-mandated alteration of expressive content, which triggers “strict scrutiny” analysis. Under strict scrutiny, the government must demonstrate that a law serves a compelling state interest and uses the least restrictive means possible to achieve that interest. xAI argues that Colorado cannot satisfy this standard of judicial review.
The lawsuit further alleges that the law’s key terms, including “historical discrimination,” are unconstitutionally vague, and that its Equal Protection carve-out for AI used to “increase diversity or redress historical discrimination” enforces a race-based double standard without compelling justification.
The suit also challenges the law’s extraterritorial reach, arguing that because the CPAI applies any time a Colorado resident is affected by an AI system, regardless of where that interaction takes place, it unconstitutionally regulates conduct occurring entirely outside the state. The Constitution’s Dormant Commerce Clause prohibits states from regulating commerce beyond their borders, a limit xAI argues the CPAI exceeds.
The CPAI goes into effect on June 30. The law defines high-risk systems as any AI system that, “when deployed, makes, or is a substantial factor in making, a consequential decision.” Developers must make extensive public disclosures about how their systems are evaluated and what steps are taken to mitigate bias, and must notify the state attorney general within 90 days of discovering that a system has caused or is reasonably likely to have caused “algorithmic discrimination.” Violations are treated as unfair trade practices under Colorado’s Consumer Protection Act, carrying a civil penalty of $20,000 per violation, with the AG holding exclusive enforcement authority.
The complaint emphasizes that AG Weiser himself has previously called the law “really problematic.” According to media reports, the Colorado AG’s Office has declined to comment on the lawsuit.
The suit is the latest in a growing wave of AI-related litigation. In January, Google and Character.AI agreed to settle a lawsuit linked to a teenager’s 2024 suicide. Last September, Anthropic agreed to a $1.5 billion settlement of a class-action lawsuit over pirated materials used to train its models.