“Europe is now the global standard setter for AI,” Thierry Breton, the European Commissioner for the Internal Market, wrote on X.
European Parliament President Roberta Metsola called the law pioneering and said it would enable innovation while protecting fundamental rights.
“Artificial intelligence is already part of our daily lives. From now on, it will also be part of the law,” she wrote in a social media post.
Dragos Tudorache, the lawmaker who oversaw the EU’s negotiations on the agreement, welcomed the deal but noted that the biggest hurdle remains implementation.
The EU AI Act, first proposed in 2021, classifies AI technologies by risk: applications deemed “unacceptable” are banned outright, while others fall into high-, medium-, or low-risk categories.
After passing final checks and receiving endorsement from the European Council, the regulation is expected to enter into force at the end of the current parliamentary term in May. It will then be implemented in stages from 2025 onwards.
Some EU countries have previously advocated self-regulation over government-led curbs, citing concerns that overly restrictive rules could hold back Europe’s ability to compete with Chinese and American companies in the tech sector. Germany and France, home to some of Europe’s most promising AI startups, have been among the critics.
The EU is scrambling to respond to the impact of technological developments on consumers and the market dominance of major companies.
Last week, the bloc brought into effect landmark competition legislation aimed at reining in major U.S. companies. Under the Digital Markets Act, the EU can crack down on anti-competitive behavior by Big Tech companies and force them to open up their services in sectors where their dominant position stifles smaller businesses and restricts users’ freedom of choice. Six companies, the U.S. giants Alphabet, Amazon, Apple, Meta, and Microsoft, along with China’s ByteDance, have been placed under scrutiny as so-called “gatekeepers.”
While major companies such as Microsoft, Amazon, Google, and chipmaker Nvidia are investing heavily in AI, concerns are growing about the technology’s potential for misuse.
Governments are concerned that deepfakes, artificially generated fake photos and videos of events that never happened, could be deployed in the lead-up to a series of important global elections this year.
Some major players in AI are already policing themselves to curb misinformation. Google said Tuesday that it would limit the types of election-related queries that can be put to its Gemini chatbot, adding that it had already rolled out the change in the United States and India.
“The AI Act will move the development of AI toward greater human control over technology and help harness new discoveries for economic growth, social progress, and unlocking human potential,” Tudorache said on social media on March 12.
“The AI Act is not the end of the journey, but rather the starting point for a new governance model built around technology,” he added, saying attention must now turn to putting the law into practice on the ground.
Legal experts said the law is a major milestone for international AI regulation and could pave the way for other countries to follow suit.
“Again, the EU was the first to act, and it has developed a very comprehensive set of regulations,” said Stephen Farmer, a partner and AI expert at international law firm Pillsbury.
“The EU was an early mover in the rush to regulate data, giving us GDPR, and we’re seeing global convergence on this,” he added, referring to the EU’s General Data Protection Regulation. “The AI Act seems to be repeating history.”
Mark Ferguson, a public policy expert at Pinsent Masons, said the law’s passage is just the beginning, and that companies will need to work closely with lawmakers to understand how it will be implemented as the technology continues to evolve at pace.