Department of Homeland Security Adds Facial Comparison and Machine Learning Uses to Its AI Inventory

The Department of Homeland Security recently updated its inventory of artificial intelligence use cases to reflect several uses of the technology that had already been disclosed elsewhere, including facial comparison and machine learning tools used within the department.

The additions include U.S. Customs and Border Protection’s use of Traveler Verification Services, a tool that deploys facial comparison technology to verify a traveler’s identity, and the Transportation Security Administration’s introduction of the same tool into its pre-screening process.

The department also added the Federal Emergency Management Agency’s geospatial damage assessments, which use machine learning and machine vision to assess disaster damage, and CBP’s use of AI to inform port of entry risk assessment decisions.

The four additions were detected on Oct. 31 by a website tracker FedScoop uses, but all appear to have already been disclosed publicly elsewhere, three of them for at least a year, highlighting existing concerns that agency inventories do not reflect the full range of publicly known AI uses.

When asked why the use cases were added now, a DHS spokesperson pointed to the department’s process for evaluating what can be disclosed.

“DHS’s sensitive law enforcement and national security missions ensure that we have rigorous internal processes in place to evaluate whether certain sensitive AI use cases are safe to share externally. These use cases were recently authorized for external sharing,” the spokesperson said in an emailed statement.

With the exception of the Department of Defense, the intelligence community, and independent regulatory agencies, federal agencies are required to publish annual inventories of their AI uses under a Trump-era executive order. So far, however, the inventories have been inconsistent in the categories included, format, and timing. Among the concerns cited by researchers and advocates is the apparent exclusion of publicly known uses from the inventories.

Traveler Verification Services’ use of facial comparison technology has been referenced elsewhere on TSA’s website since at least early 2021, and on CBP’s website since at least 2019, according to pages archived by the Wayback Machine. A Government Accountability Office report states that Traveler Verification Services was developed and implemented in 2017. FEMA’s use of AI for geospatial damage assessments has likewise been posted on its website since at least August 2022, according to Wayback Machine archives.

The spokesperson also noted that Eric Hysen, DHS chief information officer and chief AI officer, testified about CBP’s port of entry risk assessment use case at a September hearing before the House Oversight Subcommittee on Cybersecurity, Information Technology, and Government Innovation.

Ben Winters, senior counsel and director of the AI and Human Rights Project at the Electronic Privacy Information Center, said the lack of speed and completeness in the disclosures was “concerning.”

“An inventory of AI use cases is only as valuable as it is followed, which shows why we don’t have accountability mechanisms in place,” Winters said in an emailed statement to FedScoop.

He added that he hopes the Office of Management and Budget’s guidance “does not provide a broad exemption for these types of ‘national security’ tools and that DHS continues to choose to prioritize transparency and accountability.”

Currently, there is no clear process for agencies to add or remove items from their inventories. OMB has said in the past that agencies are “responsible for maintaining inventory accuracy.”

DHS previously added and removed several AI uses in August. It added Immigration and Customs Enforcement’s use of facial recognition technology and CBP’s use of technology to identify “proof of life” and prevent fraud on agency apps. It also removed a reference to a TSA system described as an algorithm to address the risk of COVID-19 at airports.

The agency will soon release more information about its generative AI efforts, a spokesperson said.

“DHS is actively considering piloting generative AI technologies across mission areas and expects to share more information in the coming weeks,” the spokesperson said.