The Generative AI Tipping Point, a new research report from ExtraHop, a leader in cloud-native network detection and response (NDR), reveals that businesses are struggling to understand and manage the security risks that come with employees using generative AI.
By analyzing how organizations intend to secure and govern the use of generative AI tools, the research captures the cognitive dissonance among security professionals as the technology becomes an everyday workhorse.
The results show that while 73% of IT and security leaders acknowledge that their employees use large language models (LLMs) or other generative AI tools occasionally or regularly at work, many are unsure how to properly address the security risks.
Security isn't the first priority.
IT and security executives are more worried about receiving erroneous or nonsensical answers (40%) than about security-related problems such as exposure of customer and employee personally identifiable information (PII) (36%), disclosure of trade secrets (33%), and financial loss (25%).
Bans on generative AI are ineffective.
Roughly a third of respondents (32%) said their organization has banned the use of generative AI tools, and this group also reported feeling extremely confident in its ability to defend against AI threats. Yet despite these prohibitions, just 5% of employees say they never use these tools at work, suggesting the bans are largely ineffective.
Organizations want more guidance, particularly from the government.
Even though nearly three-quarters (74%) of those surveyed have invested or plan to invest in generative AI protections or security measures this year, IT and security leaders still want more guidance. Fully 90% of respondents want government involvement in some capacity: 60% support mandatory regulations, while 30% favor voluntary guidelines that businesses can choose to follow.
There is a lack of basic hygiene.
Eighty-two percent of respondents are very or somewhat confident in their current security stack's ability to fend off threats from generative AI tools, yet fewer than half have invested in technology that lets their organization monitor how generative AI is being used. Moreover, just 46% have training in place on the safe use of these tools, and only 42% have policies governing acceptable use.
Less than a year has passed since ChatGPT's launch in November 2022, leaving businesses little time to thoroughly weigh the benefits and drawbacks of generative AI tools. Amid this rapid adoption, business leaders need a deeper understanding of how their employees are using generative AI so they can spot security gaps and ensure that no confidential information or intellectual property is shared improperly.
ExtraHop co-founder and chief scientist Raja Mukerji says: “There is a tremendous opportunity for generative AI to be a revolutionary technology in the workplace.”
“However, as with all emerging technologies we have seen become a staple of modern businesses, leaders need more guidance and education to understand how generative AI can be applied across their organisation and the potential risks associated with it,” he adds.
“By blending innovation with strong safeguards, generative AI will continue to be a force that will uplevel entire industries in the years to come.”