Mon. Dec 23rd, 2024
Biden's Ai Executive Order: What It Means For Tech And

Since OpenAI was released, ChatGPT large-scale language model chatbot In November 2022, the technology industry and other industries are racing to implement it. generative artificial intelligence (AI) is used to improve and streamline internal operations, develop new products for customers, or simply test the functionality of technology.

As users continue to experiment with generative AI, some are questioning the ethical and legal implications of this type of technology. Their questions include:

  • Is AI a national security threat?
  • How will the role of IT and cybersecurity change in the future?
  • What guardrails can I apply?
  • How can cybersecurity professionals best protect themselves? Attackers also use generated AI tools?

Last month, the Biden administration stepped into this area of ​​uncertainty with new policies. presidential order It provides guidelines on how AI tools such as ChatGPT and Google’s Bard should be used.

According to an executive order issued on October 30, “The responsible use of AI has the potential to help solve pressing challenges while making our world richer, more productive, more innovative, and safer.” It has a hidden sexuality.” “At the same time, irresponsible use can exacerbate social harms such as fraud, discrimination, prejudice, and disinformation.” Fire workers and strip them of their rights. suppress competition; and pose a risk to national security. ”

The executive order aims to set safe limits on the expansion of AI technology while encouraging development and information sharing with federal agencies and regulators. white house fact sheet The order notes that the following measures will be taken:

  • Require AI developers to share safety test results and other sensitive information with federal agencies.
  • Develop standards, tools, and tests to ensure AI systems are safe, secure, and reliable.
  • Create a way to protect the public from AI-powered fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.
  • Establish a sophisticated cybersecurity program and develop AI tools to discover and fix vulnerabilities in critical software.

While the details are still being worked out, experts say the Biden administration’s executive order will start forcing companies and the technology industry to rethink AI development. At the same time, technology and security professionals have new avenues to open up career opportunities and skill sets to take advantage of the changing landscape.

“Every time the president of the United States issues an executive order, government agencies and private industry will respond. It will lead to employment opportunities for knowledgeable people,” Darren Gugone, CEO and co-founder of Keeper Security, recently told Dice.

“AI is already having a huge impact on cybersecurity, with cyber defenders looking to find new uses for cybersecurity solutions as well as leveraging the power of AI to create more believable phishing attacks. It also affects cybercriminals who develop malware and increase the number of attacks they launch,” Guccione added.

How AI will change cybersecurity and technology careers

President Joe Biden has announced several policies since taking office. presidential order Designed to influence new information technology and cybersecurity developments. These mandates, including the latest in AI, also have the potential to change the way technology professionals approach their work.

When considering the broader impact of generative AI, Piyush Pandey, CEO of security firm Passlock, sees the technology already interacting with personal, customer, and financial data. This means that the role of data privacy and data security managers will need to change and expand, especially regarding how specific datasets are leveraged as part of learning models.

Further changes to the cybersecurity space are also planned, including increased automation of tasks currently performed manually by security teams.

“From intelligent response automation to behavioral analysis to vulnerability remediation prioritization, AI is already adding value to the cybersecurity space,” Pandey told Dice. “As AI automates more tasks in cybersecurity, the role of cybersecurity professionals will evolve rather than become a commodity. Talented cybersecurity professionals with a growth mindset will It will become increasingly valuable as it provides actionable insights to guide AI adoption.”

As the impact of the AI ​​executive order becomes clearer, Marcus Fowler, CEO of Darktrace Federal, red team exercise, Here, engineers play the role of “attacker” and find weaknesses in the network.

“For AI systems, that means testing for security issues, user failures, and other unintended questions. In cybersecurity, red teaming can be very helpful, but it’s not a one-size-fits-all solution. . There are a series of steps that businesses need to take to protect their systems,” Fowler told Dice. “For red teaming to be useful, many systems and safeguards need to be in place. Red teaming is also not a one-and-done deal. There needs to be an ongoing process to test whether there is.”

Technology career opportunities in government

While much of the debate surrounding the executive order has focused on what it means for private companies, the federal government will also have an expanded role in regulating and even supporting the development of these AI tools and platforms. There is.

“The executive order has the potential to create AI jobs for the various government agencies affected by this order, and certainly for regulators,” John Bambenek, principal threat hunter at security firm NetenRich, told Dice. Ta. “In the private sector, the jobs are already there because there is a gold rush to grab market share. What we are seeing is some organizations creating AI safety teams. But even if they do exist in the long term, their impact tends to be minimal.”

Following an executive order requiring private companies to share information about AI with government agencies and regulators, Guccione believes there will be a greater role for federal technology experts to understand the technology and how it is developed. .

“Developers of the most powerful AI systems will be required to share safety test results and other critical information with the U.S. government, and will be required to share extensive There will be extensive red team testing, which is open to the public,” Guccione added. “Furthermore, standardized tools and tests will be developed and implemented to provide governance for new and existing AI systems. Given the range of recommendations and actions included, organizations will No matter where you are or what type of AI system is being used, the impact of this executive order will be felt in every sector.”

Take steps to build a safe culture

Although the executive order is expected to take months or even years to produce results, experts said the White House’s move is likely to bring even more attention to cybersecurity.

This includes further attention to how secure these AI systems are and how attackers can exploit the technology for themselves.

With the proliferation of deepfakes, mass email phishing campaigns, and advanced AI-powered social engineering techniques, companies need to increase their investment in technology experts who understand these threats and how to counter them, says OntiNew. said Craig Jones, vice president of security operations.

“AI can also be leveraged to combat these threats. For example, AI-based security systems can detect and block phishing emails or identify deepfake content,” Jones told Dice. told. “While technology plays an important role in mitigating the risk of social engineering, relying on technology alone is not a foolproof solution. To reduce the impact of social engineering attacks, A balanced approach that combines awareness training and developing a strong security culture is essential.”

Darktrace’s Fowler also noted that the executive order will likely bring more attention to the security pitfalls of AI, and that better developments are needed to address these issues.

“AI safety cannot be achieved without cybersecurity, which is a prerequisite for secure, trustworthy, and general-purpose AI. That means taking action on data security, control, and trust.” Fowler said. “We are hopeful that we will see concrete actions in the executive order that begin to address these challenges. But as governments advance regulations around AI safety, organizations will be able to build and use AI to It is also important to ensure that we remain innovative and competitive in order to stay ahead of the bad guys.”