This story originally appeared in Stateline.
Back in March, Hawaii Sen. Chris Lee introduced a resolution urging the U.S. Congress to consider the benefits and risks of artificial intelligence technology.
But he didn’t write it. Artificial intelligence did.
Lee directed ChatGPT, an AI-powered system trained to carry on conversations on command, to draft a resolution highlighting the potential benefits and drawbacks of AI. It produced one right away, and Lee copied and pasted the entire text without changing a single word.

The resolution was adopted in April with bipartisan support.
“Using AI to write a bill, an entire law, is probably the single biggest thing we can do to demonstrate the good and the bad of AI,” Lee, a Democrat, said in an interview with Stateline.
ChatGPT, which has received a lot of national coverage this year, is just one example of artificial intelligence. AI can refer to machine learning, in which companies use algorithms that mimic the way humans learn and perform tasks. AI can also refer to automated decision-making. More broadly, the term “artificial intelligence” may conjure up images of robots.
Although organizations and experts are attempting to define artificial intelligence, there is no consensus on a single definition. That leaves states struggling to understand the technology as they develop rules for it.
“There’s no silver bullet that everyone has for what to do next,” Lee said.
The lack of a uniform definition is making it difficult for legislators trying to craft regulations for the growing technology, according to a report from the National Conference of State Legislatures. The report comes from the NCSL Task Force on Artificial Intelligence, Cybersecurity and Privacy, whose members include lawmakers from about half the states.
Many states have already passed laws to study or regulate artificial intelligence. Lawmakers in at least 24 states and the District of Columbia introduced AI-related legislation in 2023, and at least 14 states have adopted resolutions or enacted laws, according to an analysis from the national legislative group.
Some, such as Texas and North Dakota, formed groups to study artificial intelligence. Others, among them Arizona and Connecticut, focused on the use of AI systems within state government agencies. In Colorado, state Sen. Robert Rodriguez told Newsline in June that the state is in the early stages of developing AI regulations that could be introduced in the next legislative session.
A new Connecticut law requires the state to regularly evaluate systems that contain AI, which it defines in part as an artificial system that can “perform tasks without significant human oversight” or can “learn from experience and improve such performance when exposed to data sets.”
However, each state’s law defines AI differently. Louisiana’s resolution this year, for example, states that artificial intelligence “combines computer science and robust data sets to enable the delivery of problem-solving solutions directly to consumers.”
How some state laws define artificial intelligence
- Connecticut SB 1103: “Artificial systems that can perform tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve such performance when exposed to data sets.”
- Louisiana SCR 49: “Combines computer science and robust data sets to enable the delivery of problem-solving solutions directly to consumers.”
- North Dakota HB 1361: Defines “person” so that the term does not include “artificial intelligence.”
- Rhode Island H 6423: “Computerized methods and tools, including but not limited to machine learning and natural language processing, that act in a way that resembles human cognitive abilities when solving problems or performing certain tasks.”
- Texas HB 2060: Systems “capable of perceiving an environment through data acquisition and processing and interpreting the derived information to take an action or imitate intelligent behavior given a specific goal,” and capable of “learning and adapting behavior by analyzing how the environment is affected by prior actions.”
“I think this definition is very vague because it’s such a broad and expanding area that people don’t generally understand,” Lee said.
AI is a tricky subject, but Democratic Rep. Jennifer Stewart, who chairs her chamber’s Innovation, Internet and Technology Committee, said the uncertainty shouldn’t deter lawmakers from moving forward.
“I think we can regulate and utilize what we create,” she said. “And we don’t have to be nervous or scared to go into these waters.”
Other efforts to define AI
The National Artificial Intelligence Initiative Act of 2020 attempts to define AI as “a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations or decisions influencing real or virtual environments.” The federal law took effect on Jan. 1, 2021.
President Joe Biden’s Blueprint for an AI Bill of Rights, a set of guidelines developed by the White House on the use of automated systems, broadens the scope to “automated systems that have the potential to meaningfully impact the American public’s rights, opportunities, or access to critical resources or services.”
The European Union, Google, the industry association known as BSA | The Software Alliance and many other entities have crafted similar but differing definitions of artificial intelligence. AI experts and lawmakers, however, are still working toward a definitive definition, and weighing whether a specific one is even needed to pursue a regulatory framework.
At the most basic level, artificial intelligence refers to machine-based systems that produce results based on input information, said Sylvester Johnson, vice president for public interest technology at Virginia Tech.
But different AI programs operate differently depending on how the systems are trained on data, and lawmakers need to understand that, Johnson said.
“AI advances very quickly,” he said. “If you want to keep the people who make policy in legislatures well informed at the federal and state levels, we need an ecosystem designed to provide some kind of concise and accurate way of keeping people up to date on the trends and changes happening in technology.”
Jake Morabito, director of the Communications and Technology Task Force at the American Legislative Exchange Council, said determining how broadly to define AI is an important question. ALEC, a conservative public policy organization, supports free-market solutions and the enforcement of existing regulations that may already cover various uses of AI.
A “light touch” approach to AI regulation would help the United States become a technology leader on the world stage, Morabito said, but given the enthusiasm around ChatGPT and other systems, lawmakers at all levels need to study the developments more closely to improve their understanding.
“This technology is out of the bag, and I don’t think we can put it back in the bottle,” Morabito said. “We need to fully understand it. And I think lawmakers can do a lot by learning how to maximize the benefits, reduce the risks and ensure that this technology is developed at home rather than overseas.”
Some experts believe lawmakers do not need a definition at all to govern artificial intelligence. Alex Engler, a fellow in governance studies at the Brookings Institution, argues that a definition is not strictly necessary when regulation focuses on applications of artificial intelligence, meaning the specific areas in which AI is used.
Instead, he said, a set of basic rules should apply to programs that use automated systems, regardless of their purpose.
“You can basically say, ‘I don’t care what algorithm you use, it has to meet these criteria,’” Engler said. “This doesn’t mean there’s literally no definition. It just means we consider some algorithms and not others.”
Focusing on specific systems, such as generative AI that can create text or images, may be the wrong approach, he said.
The core question, Engler said, is: “How do we update civil society and consumer protections so that people still have protection in the age of algorithms?”
Potential harms
Legislation passed in some states over the past few years attempts to answer this question. Although Kentucky is not at the forefront (its legislature only recently created a new committee focused on technology), state Sen. Whitney Westerfield, a Republican and member of NCSL’s AI task force, said he has been struck by the “avalanche of bills” across the United States.
He noted that AI technology is not new, but now that the topic is gaining attention, the public and even legislators are starting to react.
“When [legislators] get the legislative hammer, everything looks like a nail,” Westerfield said. “And I think when stories come out about this or that, whether or not they influence voters, it just adds fuel to the fire.”
The potential harms associated with the use of artificial intelligence are creating momentum for further regulation. Some AI tools, for example, can cause tangible harm by replicating human biases, producing decisions and actions that favor certain groups over others, said Megan Price, executive director of the Human Rights Data Analysis Group.
The nonprofit applies data science to analyze human rights violations around the world. Price has designed several methods for the statistical analysis of human rights data, which she has used in research estimating conflict-related deaths in Syria. The organization also uses artificial intelligence in some of its own systems, she said.
Price said the potential impact and power of artificial intelligence have created a healthy sense of urgency among lawmakers. And it’s important to weigh a tool’s usefulness against its potential harms, as her team does, she said.
“So the real question is, when a mistake is made, what is the price and who pays for it?” she asked.
Virginia Tech’s Johnson said a new focus on social justice in technology is also noteworthy. “Public interest technology” is a growing movement among social justice organizations that focuses on making artificial intelligence work for the common good.
“If there is any reason to be hopeful that we will actually advance our ability to regulate technology in ways that improve people’s lives and their outcomes, [public interest technology] is the way to go,” Johnson said.
The resolution from Hawaii state Sen. Chris Lee is available as a PDF: Hawaii-AI-pdf
Stateline is part of States Newsroom, a nonprofit news network supported by grants and a coalition of donors as a 501c(3) public charity. Stateline maintains editorial independence. Contact Editor Scott S. Greenberger with questions: [email protected]. Follow Stateline on Facebook and Twitter.