ExtraHop, a company specialising in cloud-native network detection and response, has uncovered a worrying trend: companies are struggling with the security implications of employee use of generative AI.
Its new research report, The Generative AI Tipping Point, highlights the challenges organisations face as generative AI technologies become more prevalent in the workplace.
The report takes a deep dive into how organisations are handling the use of generative AI tools and reveals significant cognitive dissonance among IT and security leaders. Strikingly, 73% of these leaders confessed that their employees frequently use generative AI tools or large language models (LLMs) in their work. Nevertheless, a majority admitted they were uncertain how to effectively address the associated security risks.
When asked about their concerns, IT and security leaders said they were more worried about the possibility of inaccurate or nonsensical responses (40%) than about serious security issues such as the exposure of customer or employee personally identifiable information (PII) (36%) or financial loss (25%).
“Combined with innovation and strong safeguards, generative AI will continue to be an elevating force across industries for years to come,” said Raja Mukerji, co-founder and chief scientist at ExtraHop.
One of the study’s more surprising findings was that banning generative AI is ineffective. Approximately 32% of respondents said their organisation prohibits the use of these tools, yet only 5% reported that employees never use them — an indication that bans alone are not enough to curb usage.
The survey also highlighted a clear desire for guidance, particularly from government. A significant 90% of respondents said they want government involvement, with 60% supporting mandatory regulation and 30% favouring government standards that businesses could adopt voluntarily.
Despite confidence in their current security infrastructure, the study reveals gaps in basic security practices. While 82% are confident in their security stack’s ability to protect against generative AI threats, less than half have invested in technology to monitor its use. Only 46% have established policies governing acceptable use, and just 42% provide training on the safe use of these tools.
These findings come in the wake of the rapid adoption of technologies such as ChatGPT, which have become an integral part of modern business. Business leaders need to understand how their employees are using generative AI in order to identify potential security vulnerabilities.
You can find a complete copy of the report here.
(Photo by Henny Stander on Unsplash)
See also: BSI: Closing the “AI trust gap” is key to unlocking benefits
Want to learn more about AI and big data from industry leaders? Check out AI & Big Data Expo, taking place in Amsterdam, California, and London. The event is co-located with Digital Transformation Week.
Explore other upcoming enterprise technology events and webinars powered by TechForge here.