Artificial intelligence (AI) is becoming an increasingly large part of information technology investments and societal debate. Many governments are beginning to define laws and regulations to govern the impact of AI on their citizens, with a focus on safety and privacy. IDC predicts that by 2028, 60% of governments worldwide will adopt a risk management approach in formulating their AI and generative AI policies (IDC FutureScape: Predictions for Governments Around the World in 2024). This article focuses on new regulations in Europe and the United States and how they affect CIOs.
AI Regulation in Europe
In late 2023, the European Union (EU) reached political agreement on an AI bill, which the European Parliament approved on March 13, 2024. As one member of Parliament noted, the EU now has the world’s first binding law on artificial intelligence, one that protects the rights of workers and citizens. The regulation will come into full force 24 months after publication. The law balances protecting democratic rights, the rule of law, and environmental sustainability against the need to encourage innovation, especially in Europe. AI applications that threaten citizens’ rights, such as predictive policing and the indiscriminate scraping of facial images from the internet, are prohibited. The use of biometric identification systems by law enforcement is likewise prohibited, except in narrowly defined circumstances.
The EU AI Act requires the creation of an EU-wide database of high-risk AI systems so that their activity in the EU market can be monitored. National governments will be responsible for enforcing the regulation and for monitoring developments in the AI market.
Like the General Data Protection Regulation (GDPR), which was adopted by the European Parliament in 2016 and fully implemented in May 2018, the AI Act is the product of extensive discussions with member states that began five years ago. As the world’s first comprehensive AI regulatory framework, the EU AI Act has the potential to set AI standards for other jurisdictions, much as the GDPR did for information privacy.
Although the UK is not an EU member state, it announced its intention to develop an AI regulatory framework on February 6, 2024, building on the March 2023 AI Regulation White Paper and its response to the UK-hosted AI Safety Summit held at Bletchley Park in November 2023. However, the UK Parliament was dissolved on May 30 in preparation for a general election on July 4, 2024. Any new AI legislation will have to wait until a new government is formed in the second half of 2024.
AI Regulation in the United States
Discussions on AI regulation have begun in the United States, but no comprehensive AI legislation has yet been enacted. In September 2023, the US Senate began planning potential AI regulation through public hearings and private consultations. Several bills have been drafted to address specific issues, such as the use of AI in political advertising and the protection of individuals’ voice and likeness from replication by generative AI. The National Institute of Standards and Technology has published an AI Risk Management Framework. While AI-specific legislation is being developed, several existing laws provide some form of AI regulation: at the federal level, the Federal Aviation Administration Reauthorization Act and the National AI Initiative Act; at the state level, laws such as California’s CCPA privacy regulation and Illinois’ Biometric Information Privacy Act.
As the US government debates potential regulation, the AI industry is advocating self-regulation. In July 2023, seven major US AI companies, including Microsoft, Meta, Alphabet, and Amazon, agreed to a short voluntary code of conduct that emphasizes safety, security, and trust. It is worth noting that these four companies, with a combined market capitalization of over $8 trillion, are among the six most valuable companies in the US. The AI industry can be expected to resist formal regulation strongly and to continue advocating self-regulation.
As in the UK, the passage of US AI legislation will likely depend on the outcome of the general and presidential elections this November.
The Impact of AI Regulation on CIOs
AI regulation is of particular interest to CIOs of global companies that use AI for external interactions with customers and suppliers, such as chatbots that assist with online purchases. As with the GDPR, companies operating in the EU must comply with the EU regulation, and the AI Act will affect global companies by 2026 at the latest. This means that internal operations supported by AI, such as hiring and promoting employees, will also be subject to EU regulation. US regulation, by contrast, will be slow to take effect and may lean toward industry self-regulation, depending on government mandates. CIOs will therefore need to understand and respond to a different set of AI regulations depending on where their companies operate. As operations expand beyond the US and EU to, for example, China, India, and Singapore, compliance across multiple jurisdictions becomes even more challenging.
CIOs should discuss regulatory compliance with their AI providers and use independent validation to confirm that AI products comply with relevant laws and regulations.
A final complication is enforcement. AI is evolving quickly, and it takes time for individuals and organizations to gather evidence and file a regulatory complaint, not to mention the time required for legal proceedings. The EU, whose AI Act provides for fines of up to 7% of global revenue for violations, could be the pioneer in enforcement. As we have seen with the GDPR, the EU will not hesitate to pursue enforcement. AI advocates include some of the world’s largest companies, which can mount strong, protracted legal defenses in any jurisdiction. The next few years will show to what extent the new AI regulations are actually enforced.
International Data Corporation (IDC) is the world’s leading provider of market intelligence, advisory services, and events for the technology market. IDC is a wholly owned subsidiary of International Data Group (IDG Inc.), the world’s leading technology media, data, and marketing services company. Recently named Analyst Firm of the Year for the third consecutive year, IDC’s Technology Leader solutions deliver industry-leading research and advisory services, robust leadership and development programs, and expert guidance backed by best-in-class benchmarking and intelligence data sourced from the industry’s most experienced advisors.
Read more about global AI regulation in the following reports from IDC: Global AI Regulation and Policy in 2023 and Understanding the landscape of AI regulatory frameworks – Differences in destinations and travel times illustrate regulatory complexity.
Dr. Ron Babin, an adjunct research advisor to IDC, is a senior management consultant and professor specializing in outsourcing and IT management (ITM) issues. He is a professor of IT management at the Ted Rogers School of Management at Ryerson University in Toronto, where he also serves as director of corporate and executive education.
Dr. Babin has extensive experience as a senior management consultant with two global consulting firms. He was a partner at Accenture and previously at KPMG, where he ran the IT Management and Strategy practice in Toronto. While at KPMG, he was a member of the Nolan Norton Consulting Group. His consulting work focuses on helping client executives improve the business value delivered by IT within their organizations. With over 20 years as a management consultant, Dr. Babin has worked with dozens of clients across most industry sectors, primarily in North America and Europe. His current research focuses on outsourcing, with particular attention to vendor-client relationships and social responsibility. He has authored several papers and books on these topics.