The U.S. Space Force has temporarily prohibited its personnel from using web-based generative AI tools while on duty in order to safeguard government data, according to reports.
Space Force members were informed that they are “not permitted” to use web-based generative AI tools to create text, images, or other media unless specifically approved, according to an October 11 Bloomberg report citing a September 29 memo addressed to Guardians, as Space Force personnel are known.
“Generative AI will undoubtedly revolutionize our workforce and enhance Guardians’ ability to perform tasks quickly,” Lisa Costa, the Space Force’s chief technology and innovation officer, reportedly said in the memo.
However, Costa added that AI and large language models (LLMs) must be adopted more “responsibly,” citing concerns about current cybersecurity and data-handling standards.
The U.S. Space Force is a space service branch of the U.S. military tasked with protecting U.S. and allied interests in space.
The U.S. Space Force has temporarily banned the use of web-based generative artificial intelligence tools and the so-called large language models that power them, citing data security and other concerns, according to a memo obtained by Bloomberg News. https://t.co/Rgy3q8SDCS
— Katrina Manson (@KatrinaManson) October 11, 2023
Bloomberg reported that the decision is already having an impact on at least 500 people who were using a generative AI platform called Ask Sage, citing comments from Nicolas Chaillan, former chief software officer for the U.S. Air Force and Space Force.
Chaillan reportedly criticized the Space Force’s decision. “Clearly, this will put us years behind China,” he complained to Costa and other senior defense officials in a September email.
“This is a very short-sighted decision,” Chaillan added.
Chaillan pointed out that the U.S. Central Intelligence Agency and its divisions have developed their own generative AI tools that meet data security standards.
Related: Data protection in AI chat: Is ChatGPT compliant with GDPR standards?
Concerns that LLMs could leak private information to the public have prompted action from some governments in recent months.
Italy temporarily blocked the AI chatbot ChatGPT in March for alleged violations of data privacy rules, and reversed the decision about a month later.
Big tech companies including Apple, Amazon and Samsung have also banned or restricted employees from using AI tools like ChatGPT in the workplace.
Magazine: Musk’s alleged price manipulation, the Satoshi AI chatbot and more