A patchwork of state AI laws is creating 'a mess' for US businesses
The laws governing artificial intelligence are increasingly different depending on where you are in the US, a mounting source of confusion for businesses racing to capitalize on the rise of AI.
This year, Utah state lawmakers are debating legislation that would require certain businesses to disclose when consumers are interacting with AI rather than a human.
In Connecticut, state legislators are considering a bill that would impose stricter transparency requirements on the inner workings of AI systems considered "high risk."
They are among the 30 states (plus the District of Columbia) that have either proposed or adopted new laws placing constraints, either directly or indirectly, on how AI systems are designed and used.
The legislation targets everything from protecting children and mandating data transparency to reducing bias and shielding consumers from AI-driven decisions in healthcare, housing, and employment.
"It’s really just a mess for business," Goli Mahdavi, a lawyer with Bryan Cave Leighton Paisner, said about the still-developing bills and newly enacted statutes. "It’s just a lot of uncertainty."
The patchwork of laws exists because Washington has yet to pass direct federal regulation of the fast-evolving technology, largely because not all US lawmakers agree that new laws are needed to police AI.
Things are different in other parts of the world. The European Union enacted a comprehensive AI law, the AI Act, this year. And China has adopted more politically focused AI laws targeting AI-generated news distribution, deepfakes, chatbots, and datasets.
Yet the state laws being debated or enacted in the US do reflect priorities set out by the federal government, Mahdavi said.
President Biden, for example, has directed AI developers and users to apply AI "responsibly" in an executive order issued last October. Then in January, the administration added a requirement for developers to disclose their safety test results to the government.
The state laws do share some common themes, but their subtle differences can make business compliance a challenge.
California, Colorado, Delaware, Texas, Indiana, Montana, New Hampshire, Virginia, and Connecticut have adopted consumer protection laws that entitle consumers to notification about automated decision making and grant them the right to opt out of profiling technology used to produce legally significant effects.
The laws broadly prohibit companies from subjecting consumers to automated decision making technology without their consent.
Businesses, for example, may not profile consumers based on their work performance, health condition, location, financial condition, and other factors unless those consumers explicitly agree.
Colorado’s laws extend further to prohibit AI from generating discriminatory insurance rates.
However, the term "automated decision making," which appears in most of the laws, is defined differently from state to state.
In some states, a decision about employment or financial services is not considered automated as long as it is made with some level of human involvement.
New Jersey and Tennessee have so far stopped short of enacting opt-out provisions. However, the states do require those using AI for profiling and automated decision making to perform risk assessments to ensure consumer personal data is protected.
In Illinois, a law that went into effect in 2022 restricts employers from using AI in video assessments of job candidates. Consent from a candidate is required before an employer can use AI to evaluate the candidate’s video image.
In Georgia, a narrow law tailored to optometrists’ use of AI went into effect in 2023. It provides that AI devices and equipment used to analyze eye images and other eye assessment data may not be solely relied upon to generate an initial prescription or a first-time prescription renewal.
New York City became the first US jurisdiction to require that employers conduct bias audits of their AI-enabled employment decision tools. The law went into effect in July 2023.
Multiple states followed that trend more broadly, requiring that entities and individuals that use AI conduct data risk assessments before using the technology to process consumer data.
What helps so many states get these laws through their legislatures is "a historic level of single-party control," said Scott Babwah Brennen, head of online expression policy at UNC-Chapel Hill's Center on Technology Policy.
Last year, legislatures in roughly 40 states were dominated by a single party, a count that has more than doubled from the 17 states with such control in 1991.