Mon. Dec 23rd, 2024
President Biden Issues Broad Artificial Intelligence Directive Targeting Safety, Security

On October 30, 2023, President Biden called artificial intelligence (AI) the most consequential technology of our time, predicting more technological change in the next five to ten years than in the past fifty, and announced an executive order directing action to establish new AI standards. The White House presented these directives as the most significant actions any government has taken to address AI safety, security, and trust, covering a wide range of issues for private and public organizations both domestically and internationally. These issues include safety and security, privacy, equity and civil rights, health care, employment, and education, as well as fostering innovation, developing international standards, and ensuring that governments use AI responsibly.

Safety and security

Some of the order’s most comprehensive directives relate to AI safety and security, including mandatory testing, federal reporting, and screening of certain models. Specifically, before making an AI system publicly available, developers of foundation models that pose a “serious risk to national security, national economic security, or national public health and safety” must notify the federal government when training the model and share the results of all red-team safety tests. The standards for that testing will be established by the National Institute of Standards and Technology (NIST). The Department of Homeland Security (DHS) will establish an AI Safety and Security Board and apply those testing standards to critical infrastructure sectors, while the Department of Energy and DHS will also address threats that AI systems pose to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Additionally, to guard against AI being used to engineer dangerous biological materials, the order directs agencies that fund life-science projects to establish standards for biological synthesis screening as a condition of federal funding.

The order also introduces several security-related measures addressing deepfakes, cybersecurity, and government use of AI. First, to protect against fraud and deception, the Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated material. Second, the order calls for the establishment of an advanced cybersecurity program to develop AI tools that identify and remediate vulnerabilities in critical software. Third, the order directs the National Security Council (NSC) and the White House Chief of Staff to develop a national security memorandum ensuring that the U.S. military and intelligence community use AI safely, ethically, and effectively, and that the government counters adversarial uses of AI.

Privacy

Regarding privacy, the order calls on Congress to pass data privacy legislation and directs federal support for privacy-preserving technologies and techniques, such as encryption, including through the establishment of a research coordination network. The National Science Foundation will work with this network to accelerate the adoption of such technologies by federal agencies. Additionally, the order requires an evaluation of how federal agencies collect and use commercially available information containing personally identifiable data, including information obtained from data brokers, and a strengthening of privacy guidance accordingly.

Discrimination and bias

Regarding efforts to combat AI-based discrimination, bias, and other abuses, guidance will be issued to landlords, federal benefit programs, and federal contractors to ensure that AI algorithms are not used to exacerbate discrimination. Algorithmic discrimination will be addressed through training, technical assistance, and coordination between the Department of Justice (DOJ) and federal civil rights offices on best practices for investigating and prosecuting civil rights violations related to AI. Finally, the order affects many other areas such as employment, health care, education, innovation, foreign affairs, and government, with directives such as:

  • The Department of Health and Human Services will advance AI in healthcare and drug development by establishing a safety program to receive and remediate reports of AI-related harms and unsafe medical practices.
  • Develop principles and best practices addressing job displacement; labor standards; workplace equity, health, and safety; and employer collection of worker data.
  • Create a report on the potential impact of AI on the labor market.
  • Investigate and identify options to increase federal support for workers facing AI-induced labor disruption.
  • Create the National AI Research Resource, a tool that gives AI researchers and students access to key AI resources and data.
  • Encourage the Federal Trade Commission (FTC) to use its authority to promote a fair, open, and competitive AI ecosystem.
  • Facilitate the safe development of AI overseas to reduce risks to critical infrastructure.
  • Facilitate rapid and efficient contracting for agencies to acquire specified AI products and services.

Our view

A March 2023 open letter from Elon Musk, Max Tegmark, Steve Wozniak, and others called for a pause of at least six months on training AI systems more powerful than GPT-4, in order to refocus AI research and development on principles such as safety, interoperability, transparency, and reliability. That call is now being answered, in part, by numerous initiatives in the United States and around the world that give regulators better insight into system risks and capabilities. Recently, the United States and the United Kingdom announced closer cooperation on AI safety, which is expected to combine the Executive Order’s protections for AI development with the existing work of the United Kingdom’s Frontier AI Taskforce, along with increased cooperation between the AI Safety Institutes established in their respective countries. And a few weeks ago, EU lawmakers made progress toward the world’s first comprehensive legal framework for AI by agreeing on large parts of Article 6 of the draft AI Act, which outlines the types of systems that will be designated “high risk” and subject to greater regulatory oversight. Like the Executive Order’s requirements for models posing significant risks to areas such as national security and public health and safety, the EU AI Act requires high-risk systems to undergo testing before being placed on the market.

The order is a far-reaching and ambitious move, and it is forward-looking as well as immediate in its impact. For example, the set of technical conditions that subjects a model to the reporting requirements for systems posing significant risks to areas such as national security has not yet been finalized and will be updated regularly. In the interim, the order provides that the reporting requirements apply to models trained using (i) more than 10^26 integer or floating-point operations, or (ii) primarily biological sequence data and more than 10^23 integer or floating-point operations. For reference, GPT-4 is estimated to have required roughly 2.1 x 10^25 floating-point operations to train, just below the order’s threshold. The order nonetheless provides a concrete roadmap, and AI industry stakeholders should be mindful of its many provisions, which center on safety, security, and trust, and position themselves accordingly.
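As an illustration, the interim reporting trigger described above reduces to a simple numeric comparison. The threshold values below are those stated in the order; the function name and the GPT-4 compute figure are illustrative (the latter is the third-party estimate cited above, not an official number):

```python
# Interim compute thresholds from the executive order's reporting requirement:
# more than 1e26 total integer or floating-point operations in general, or
# more than 1e23 when the model is trained primarily on biological sequence data.
GENERAL_THRESHOLD_OPS = 1e26
BIO_SEQUENCE_THRESHOLD_OPS = 1e23


def must_report(training_ops: float, primarily_bio_data: bool = False) -> bool:
    """Return True if a model's total training compute exceeds the
    applicable interim reporting threshold (strictly greater than)."""
    threshold = BIO_SEQUENCE_THRESHOLD_OPS if primarily_bio_data else GENERAL_THRESHOLD_OPS
    return training_ops > threshold


# GPT-4's estimated ~2.1e25 training operations fall below the general threshold.
print(must_report(2.1e25))                          # False
print(must_report(1.5e26))                          # True
print(must_report(5e23, primarily_bio_data=True))   # True
```

Note that the order's thresholds are expressed in total training operations, not operations per second, and the general threshold is an order of magnitude above current frontier-model estimates, so the requirement initially captures only future, larger training runs.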