President Donald Trump’s executive order on artificial intelligence invites analysis of a question so complex that it rarely gets asked: “What exactly do states have the authority to regulate?”
The conventional, somewhat trite answer is, “The residuary powers reserved under the Tenth Amendment.” Stripped of the legalese, that means states can do whatever the federal government cannot.
States have the power to look out for the health, safety, and welfare of their residents. Thus, for instance, they can address local land-use concerns through zoning laws, certify professionals through licensing regimes, and ensure public safety through law enforcement. These authorities make up what’s often referred to as a state’s “police powers.”
While this generic reading of state power is not necessarily wrong, it’s imprecise. As the AI Litigation Task Force created by Trump’s EO starts its work, a more specific answer is warranted.
The task force is charged with challenging “unconstitutional, preempted, or otherwise unlawful State AI laws that harm innovation.” Reading between these lines, its mission is to contest state laws that interfere with the Administration’s vision for a national AI policy framework. This isn’t an unlimited charge, though. Federal courts reviewing state laws will strike them down only if they conflict with the Constitution’s allocation of authority or otherwise prove unlawful.
Many stakeholders in AI debates interpret the authorities afforded to states liberally. Citing existential risk to humanity and the idea that states must protect the health of their citizens, state legislators have proposed and enacted laws that impose significant obligations on the development of AI. Some assume they must have this right, since protecting the lives of their residents is a core priority and unquestioned authority of state governments. After all, since the founding, states have been able to enforce quarantines out of a concern for public health. Aren’t aggressive AI laws just extensions of such public health measures, tailored to modern threats?
It’s not that simple. States’ police powers are reasonably broad, but not unlimited. States must respect both an upper bound (the enumerated powers delegated to the federal government) and a lower bound (the rights retained by their citizens). These constraints have been tested in litigation throughout the Constitution’s history, notably when state law conflicts with the federal government’s exclusive authority over interstate commerce and when states unduly limit the freedoms of their residents.
These notions are relatively blurry and highly contextual. As national regulatory policy evolves, so too does the extent of preemption. The Lochner era, for example, marked a paradigm shift for state police power: as courts expansively interpreted an individual liberty to contract, states’ power over health, labor protections, and market regulation shrank significantly, only to be restored later. Likewise, individual liberties, and the justifications that may validly abridge them, have evolved with developments in civil rights law, from Brown v. Board to Lawrence and Dobbs.
Despite these significant changes in context, the constitutionality of states’ exercise of their police powers follows a bounded framework. This can be observed in the jurisprudence on public health measures, a prime example of police powers in action. Quarantine orders, from nineteenth-century epidemics to Covid-19, bear a direct link to protecting local communities, one of the most important elements of state police powers, and they respect both bounds. First, they are geographically specific: they affect only local residents or people entering local communities. Second, they directly reduce the risk to state residents: quarantines are proven responses to real threats to local health and safety. They infringe on individual liberties only insofar as necessary to protect residents’ vital interests.
When the Supreme Court reviews laws passed pursuant to a state’s police powers, it consistently assesses geographical specificity and the justification for infringements on individual freedoms, from Morgan’s Steamship Co. to Roman Catholic Diocese of Brooklyn. Federal courts have struck down state measures that swept too broadly in abridging individual rights; this was the case in Preterm Cleveland, where a restrictive order overshot its public health objective. A heightened standard of scrutiny also applies wherever a state limits the exercise of fundamental constitutional rights: courts have struck down state laws that unduly burdened residents’ First Amendment rights in Roman Catholic Diocese of Brooklyn and Second Amendment rights in McCarthy et al.
When states pass AI-related laws out of purported concern for local residents’ welfare, these conditions must also be met. Does the law reach only conduct within the state’s geographical purview? Does it rationally address an issue facing local communities? These bounds will be heavily scrutinized by the AI Litigation Task Force and the federal courts.
Having established the legal backdrop, we can identify areas of state law susceptible to challenges on constitutional grounds.
State laws concerning AI’s use in employment and hiring, such as Illinois’ IHRA Amendment and Artificial Intelligence Video Interview Act, are likely well within the scope of state police powers.
State laws regulating speech are more ambiguous. Where they are narrowly drawn to apply only to a state’s own residents, advance those residents’ general welfare, and otherwise adhere to First Amendment case law, they are probably safe from the AI EO’s task force; this includes the New York State Fashion Workers Act and the Colorado Candidate Deepfake Disclosure Law. Likewise, laws extending the scope of CSAM-related offenses to cover AI-generated materials are unlikely to be successfully challenged, even under the intense First Amendment scrutiny mentioned above.
However, laws like Illinois’ HB4875, which prohibit the commercial dissemination of AI-generated likenesses without prior authorization, may be found to exceed the scope of police powers. Requiring authorization from non-residents before their likenesses can be disseminated may restrict the speech of Americans well outside Illinois state lines. Whether the benefits of such a law justify this incursion remains unclear.
State laws on transparency and safety are likely the most vulnerable to challenge by the AI Litigation Task Force. California’s SB53 and New York’s RAISE Act, which require pre-deployment risk assessments, security protocols, and incident reporting, are particularly exposed because they regulate AI labs before any deployment within state jurisdiction, and the specific protection they offer residents is diffuse at best. Likewise, provisions of Colorado’s AI Act requiring that AI providers take care to protect users from discrimination may be overbroad relative to the protection they afford Colorado residents. Laws regulating the training of AI models are especially open to challenge, as they would invariably regulate interstate commerce in AI technologies.
With several hundred state laws on AI, the AI Litigation Task Force will need to be selective in its litigation. The brief overview above should set the scene for the intense jurisdictional battle ahead. While states may not be thrilled about a politically driven assault on their legislation, policymakers who have done their homework on the bounds of police power need not worry. If anything, this trial-by-litigation will clarify the purview of state action on AI and ensure that effective and appropriate AI laws take effect.