National Security Carve-Out Weakens AI Regulation

Earlier this month, the U.S. Senate concluded the AI Insight Forum with a 21-member panel focused on national security. Most of the participants, technology company executives and former government officials, were happy to take part, and they called on national security agencies to quickly incorporate AI into their operations. But it is just as important that government agencies protect people from the risks of these technologies. President Joe Biden's recent executive order correctly recognizes that irresponsible use of AI "can lead to and exacerbate discrimination, bias, and other abuses." Unfortunately, the executive order itself, and the administration's recent draft policy on AI, contain a major flaw: they seek to ensure that the government's use of AI systems is fair, effective, and transparent, while largely exempting national security agencies, including the FBI and key parts of the Department of Homeland Security (DHS).

This two-tiered approach is a mistake. It allows a separate (and likely less protective) set of rules to be developed for the AI systems that most directly affect people in the United States, such as facial recognition, social media surveillance, and algorithmic risk scoring of travelers. It also exempts national security agencies from sensible baseline requirements for assessing AI's impact: independently evaluating and testing systems in real-world conditions, taking steps to mitigate harms such as discrimination, training staff, consulting with stakeholders, and providing remedies when AI causes harm.

These agencies are already rapidly integrating AI into many important operations, and the development of these technologies is accelerating. It is hard to understand the full scope of these activities, including what safeguards (if any) are in place to prevent discrimination and protect privacy and other rights, because so little public information is available. What we do know shows the serious risks of carving national security systems out of basic AI rules.

Already, the Department of Homeland Security's Automated Targeting System relies on secret algorithms to predict "threats" and flag travelers for increased scrutiny. DHS also signed a $3.4 million contract with researchers to develop an algorithmic "risk assessment" tool, for use by immigration authorities, that purports to identify social media accounts supporting terrorism.

DHS, through its domestic intelligence arm, runs several programs that comb through Americans' social media posts looking for "derogatory information" and dangerous "narratives and grievances," scooping up information about individuals' personal and political views. For example, during the 2020 racial justice protests, DHS used social media monitoring tools to compile dossiers on protesters and journalists and shared them with law enforcement. Each year, DHS sends thousands of unverified summaries of what it finds on social media to police departments across the country, fueling surveillance and, in some cases, prosecutions.

None of these programs is supported by empirical evidence of its effectiveness. In 2017, the department's inspector general found that five of DHS's pilot social media monitoring programs could not even measure whether they worked. Other internal reviews have labeled this type of program questionable or of "no value." And to our knowledge, these systems have never been tested for whether they perpetuate bias or reproduce stereotypes. The policy framework governing these programs is weak, riddled with loopholes that undercut the department's general policies on racial profiling and activities protected by the First Amendment.

The FBI is deploying AI-powered facial recognition tools to identify suspects for investigation and arrest, and it has worked with the Department of Defense to develop tools that could identify people in video footage from street cameras and drones. It has carried out these initiatives without sufficient testing, training, or safeguards for civil rights and civil liberties. The FBI also has a contract with Clearview AI, a notorious company that scrapes photos from the internet to create faceprints without consent and claims access to 30 billion facial images. Agencies have adopted these systems despite evidence that facial recognition technology disproportionately misidentifies and misclassifies people of color, transgender people, women, and members of other marginalized groups. Already, at least six cases have been reported in which police use of facial recognition led to the misidentification, wrongful arrest, and jailing of Black people.

These kinds of clear and present risks will only grow as government agencies incorporate more AI into their systems and consider even less reliable technologies such as "sentiment analysis" and gait recognition. National security, intelligence, and law enforcement agencies must be subject to the same baseline standards that apply across the rest of the federal government, with any exceptions made narrow, transparent, and strictly justified by specific national security needs. Congress should also strengthen independent oversight by empowering bodies like the Privacy and Civil Liberties Oversight Board, created after 9/11 to review counterterrorism programs.

The time to act is now, before irresponsible and harmful AI systems become entrenched under the banner of "national security."