AI Raises Security Concerns for U.S. Space Force

Adding its name to the list of companies and organizations restricting generative AI, the U.S. Space Force has issued a temporary order directing its service members (or “Guardians”) to stop using personal generative AI accounts while on duty.

First reported by Reuters and confirmed by Decrypt, the Sept. 29 memo, issued by Lisa Costa, the U.S. Space Force’s deputy chief of space operations for technology and innovation, is intended to provide guidance on the responsible use of generative AI and large language models.

“A strategic pause to safely integrate generative AI and large language models within the U.S. Space Force has been implemented to determine the best path forward for integrating this capability into the USSF mission,” Space Force spokesperson Maj. Tanya Downsworth told Decrypt. “This is a temporary measure to protect our service and Guardian data.”

Although the memo does not mention the specific AI models being used by service members, the Space Force is the latest U.S. government agency to acknowledge the use of AI tools and seek to put guardrails in place.

In July, the Office of the Chief Administrative Officer of the U.S. House of Representatives issued a letter to staff restricting the use of OpenAI’s ChatGPT, stating that only the premium, subscription-based ChatGPT Plus service would be allowed, and only under certain conditions.

“All Guardians are responsible for complying with all cybersecurity, data processing, and procurement requirements when purchasing and using [generative AI] products,” Costa wrote in the memo.

Downsworth said the department does not track the number of Guardians who have signed up to use generative AI tools, but it does monitor activity on its networks.

Cybersecurity and privacy have become critical concerns for policymakers and businesses. In May, Samsung and Apple banned their employees from using ChatGPT, citing fears of data and intellectual property loss from programs that capture input data.

The Space Force memo listed several key points for Guardians to adhere to, including that all AI model testing must be approved by the Chief Technology and Innovation Office (CTIO). AI accounts purchased for personal use must not be affiliated with or associated with any Guardian’s government identity, organization, location, or function. Additionally, certain types of generative AI tools are not allowed on government devices, and government data must not be used in third-party AI models.

This year, AI has quickly become mainstream with the introduction of more sophisticated generative AI chatbots such as ChatGPT, Google Bard, and Anthropic’s Claude. Everyone from students and teachers to corporate engineers and developers is using these AI chatbots for quick answers and solutions to complex problems and questions.

In a scenario reminiscent of the Black Mirror episode “Joan Is Awful,” the memo also asks service members not to accept or agree to terms of service (TOS) or end-user license agreements for generative AI or large language models without prior review and approval.

Despite these concerns and a “strategic pause,” the Space Force remains optimistic about the future use of artificial intelligence in space and military efforts.

“These technologies will undoubtedly revolutionize our workforce and enhance Guardians’ ability to act quickly in areas such as space domain awareness and command and control,” Downsworth concluded. “The Space Force CTIO is actively involved in TF-Lima, the Department of Defense’s GenAI Task Force, which aims to harness the power of these technologies in a responsible and strategic manner.”
