As Congress wound down its session without enacting any new artificial intelligence laws, state lawmakers have begun their own efforts to regulate the technology.
Colorado just signed one of the most comprehensive AI bills in the nation into law. It provides guidelines for companies that develop and use AI, with a focus on mitigating consumer harm and discrimination caused by AI systems. Democratic Gov. Jared Polis said he looks forward to continuing discussions at the state and federal levels.
Other states have taken narrower approaches: New Mexico has focused on regulating how computer-generated images appear in media and political campaigns, while Iowa has criminalized sexually explicit computer-generated images, particularly those depicting children.
“We can’t just sit back and wait,” Rep. Krista Griffith, D-Wilmington, Delaware, who proposed the AI restrictions, told States Newsroom. “These are issues that our constituents rightly want protected.”
Griffith pointed to the Delaware Privacy Act, which was signed into law last year and takes effect Jan. 1, 2025. It gives residents the right to know what information companies are collecting about them, to correct inaccuracies in that data, and to request that the data be deleted. The law is similar to other state laws across the country governing how personal data can be used.
Numerous bills to regulate technology have been introduced in Congress, but none have passed. Proposals in the 118th Congress would impose limits on artificial intelligence models deemed high risk, establish a regulator to oversee AI development, impose transparency requirements on the evolving technology, and protect consumers through accountability measures.
In April, the American Privacy Rights Act of 2024 was introduced, and in May the bipartisan Senate Artificial Intelligence Working Group released its AI Policy Roadmap, which is intended to support federal investment in AI while safeguarding against the technology’s risks.
Griffith also introduced legislation this year to create a Delaware Artificial Intelligence Commission, saying the state would fall behind on these rapidly evolving technologies if it stood by and did nothing.
“The longer we wait, the longer we will delay understanding how it’s being used, how to stop or prevent potential harm from occurring, and also the longer we will miss out on some of the associated efficiencies that could help improve government services and individuals’ lives,” Griffith said.
States have been enacting AI laws since at least 2019, but AI-related bills have increased significantly over the past two years, with more than 300 introduced. “It’s a big step forward,” said Heather Morton, who tracks state legislative activity as an analyst for the nonpartisan National Conference of State Legislatures.
Additionally, so far this year, 11 states have enacted laws governing how AI can be used or providing checks and balances on the technology, bringing the total number of states with AI laws to 28.
How do ordinary people interact with AI?
Engineers have been experimenting with decision-making algorithms for decades, with early frameworks dating back to the 1950s, but in recent years, generative AI, which can generate images, language, and responses to prompts in seconds, has been gaining industry traction.
Many Americans will be interacting with artificial intelligence throughout their lives, and industries like banking, marketing, and entertainment build many of their modern business practices on AI systems. These technologies are the foundation for large-scale developments like the power grid and space exploration.
Most people are aware of smaller uses, such as a company’s online customer service chatbots or asking an Alexa or Google Assistant device for information about the weather.
Rachel Wright, a policy analyst at the Council of State Governments, said there’s a potential tipping point in public attitudes about AI that could add urgency for lawmakers to act.
“I think 2022 was a pivotal year because of ChatGPT,” Wright said. “It was the first time that regular people actually interacted with a generative AI system like ChatGPT.”
Competing interests: industry vs. privacy
Andrew Gamino-Chong co-founded Trustible, an AI governance management platform, early last year as states began enacting AI legislation; the platform helps organizations identify risky uses of AI and comply with regulations already in place.
State and federal lawmakers face a balancing act in passing new AI laws: too much regulation could stifle innovation, while unrestrained AI could raise privacy concerns or perpetuate discrimination.
One example is Colorado’s law, which applies to developers of “high-risk” systems that make critical employment, banking, and housing decisions. It holds these developers responsible for ensuring they don’t create algorithms that may be biased against certain groups or characteristics. The law requires that instances of this “algorithmic discrimination” be reported to the Attorney General’s office.
At the time, Logan Selkovnik, founder and CEO of Denver-based Thumper.ai, called the bill well-intentioned but “far-reaching,” and said developers had to think through how its key societal changes would play out in practice.
“Are we moving from actual discrimination to the risk of discrimination before it even happens?” he added.
But Delaware Rep. Griffith said life-changing decisions, like approving a mortgage, should be transparent and traceable. If she were denied a mortgage because of an algorithmic error, she asked, how could she appeal?
“I think it also helps us understand where the technology is going wrong,” she said. “We need to know where it’s going right, but we also need to understand where it’s going wrong.”
Some developers at big tech companies believe federal and state AI regulation could stifle innovation, but Gamino-Chong said he believes this “patchwork” of state legislation could actually create pressure for clear federal action from lawmakers who see AI as a major growth area for the US.
“I think this is one of the areas where the discussion around privacy and AI may be a little different because there are competitive and even national security considerations to investing in AI,” he said.
How do states regulate AI?
Late last year, Wright authored a report on the role of AI in each state, categorizing the approaches states have taken to safeguarding the technology. Many of the 29 laws enacted at the time focused on creating avenues for stakeholder groups to come together and collaborate on how AI should be used and regulated. Other laws recognize the innovation AI can enable while regulating data privacy.
Transparency, protection from discrimination, and accountability are also major themes in state legislation. Since the beginning of 2024, laws have been passed on the use of AI in political campaigns, education, crime data, sexual offenses, and deepfakes (lifelike computer-generated likenesses), broadening the scope of AI regulation. In all, 28 states have passed around 60 such laws.
Here is a broad look at the legal landscape as of July 2024.
Interdisciplinary collaboration and oversight
Many states have enacted laws that bring together lawmakers, technology industry experts, academics, and business executives, sometimes in the form of councils or working groups, to oversee and consult on the design, development, and use of AI and to watch for unintended but foreseeable consequences of unsafe or ineffective AI systems. These states include Alabama (SB 78), Illinois (HB 3563), Indiana (150), New York (A 4969, S 3971B and 8808), Texas (HB 2060, 2023), Vermont (HB 378 and HB 410), California (AB 302), Louisiana (SCR 49), Oregon (H 4153), Colorado (SB 24-205), Maryland (818), Tennessee (H 2325), Virginia (S 487), Wisconsin (5838) and West Virginia (H 5690).
Data privacy
The second most common are laws that address data privacy and protect individuals from misuse of consumer data. Typically, these laws regulate how AI systems collect data and what they can do with it. These states include California (AB 375), Colorado (SB 21-190), Connecticut (SB 6 and SB 1103), Delaware (HB 154), Indiana (SB 5), Iowa (SF 262), Montana (SB 384), Oregon (SB 619), Tennessee (HB 1181), Texas (HB 4), Utah (S 149) and Virginia (SB 1392).
Transparency
Some states have enacted laws to inform people when AI is being used, most commonly by requiring companies to disclose how and when they use it. For example, employers may need employees’ permission to use AI systems that collect data about them. States with transparency laws include California (SB 1001), Florida (1680), Illinois (HB 2557) and Maryland (HB 1202).
Protection from discrimination
These laws often require AI systems to be designed with fairness in mind and to avoid “algorithmic discrimination,” in which AI systems treat people differently based on race, ethnicity, sex, religion, or disability. They are frequently applied in criminal justice, employment, banking, and other domains where computer algorithms make life-changing decisions. These states include California (SB 36), Colorado (SB 21-169), Illinois (HB 0053) and Utah (H 366).
Elections
Laws focused on AI in elections have been passed in the past two years, primarily banning AI-generated messages and images in election campaign materials, or at least mandating specific disclaimers about the use of AI. These states include Alabama (HB 172), Arizona (HB 2394), Idaho (HB 664), Florida (HB 919), New Mexico (HB 182), Oregon (SB 1571), Utah (SB 131) and Wisconsin (SB 664).
Education
States that have enacted AI-in-education laws primarily impose requirements on how schools use AI tools. One law (HB 1361) outlines how schools can use the tools to customize and accelerate learning, and Tennessee (1711) directs schools to develop AI policies for the 2024-25 school year that spell out how each school board will implement them.
Computer-generated sexual images
States that have enacted computer-generated explicit image laws make it a crime to use AI to create sexually explicit images of children. These include Iowa (HF 2240) and South Dakota (S 79).
Looking ahead
While most of the AI laws enacted focus on protecting users from AI harm, many lawmakers are also excited about AI’s potential.
A World Economic Forum study projected that artificial intelligence technologies could create around 97 million new jobs worldwide by 2025, outpacing the roughly 85 million jobs expected to be displaced by technology and machines.
Griffith said she looks forward to digging further into the technology’s potential with the working group, adding that writing legislation around rapidly changing technology is challenging but also exciting.
“When something is complicated or difficult or hard to understand, it’s tempting to run away and bury your head under the blanket,” she said. “But let’s all stop and look at it, understand it, read it, and have an honest conversation about how it’s being used and how it’s helping us.”