90% of senior executives express concerns about unmonitored use of generative AI tools
A recent study conducted by cybersecurity supplier Kaspersky has revealed growing concern among C-suite executives about the unregulated proliferation of generative artificial intelligence (GenAI) tools in their organizations. With 90% of senior business leaders worried about the unchecked adoption of these tools, the findings underline how deeply generative AI has already been woven into business operations.
Silent infiltration of generative AI
Kaspersky’s study found that more than half (53%) of the surveyed senior executives believe that generative AI is now actively “driving” certain lines of business within their organizations. However, 59% of them also voiced deep concerns over what they described as a “silent infiltration” of generative AI, leading to heightened cyber risks. This “shadow AI” phenomenon, where employees incorporate generative AI without proper oversight, poses significant challenges to data security and governance.
One alarming finding is that just 22% of the business leaders surveyed have discussed introducing internal governance policies to monitor the use of generative AI within their organizations. Additionally, 91% admitted they need a better understanding of how these tools are being used in order to mitigate security risks.
David Emm, Kaspersky’s principal security researcher, stressed the urgency of addressing this issue, stating, “Given that GenAI’s rapid evolution is currently showing no signs of abating, the longer these applications operate unchecked, the harder they will become to control and secure across major business functions such as HR, finance, marketing, or even IT.”
Data breach concerns
Generative AI relies on continuous learning through data inputs, making data protection a paramount concern. Even when used in good faith, employees may unknowingly transmit sensitive data outside the organization, potentially causing a data breach. Kaspersky’s research reflected this concern, with 59% of leaders expressing serious apprehension over the risk of data loss.
Despite these apprehensions, the study also revealed that 50% of business leaders plan to harness generative AI in some capacity, primarily to automate repetitive tasks. Additionally, 44% intend to integrate generative AI tools into their daily routines. Notably, 24% indicated that they were inclined to use generative AI to automate IT and security functions.
Emm commented on this trend, saying, “One might assume that the prospect of sensitive data loss and losing control of critical business units might give the C-suite pause for thought, but our findings reveal that almost a quarter of industry bosses are currently considering the delegation of some of their most important functions to AI.”
However, he also emphasized the need for comprehensive data management understanding and robust policies before further integrating GenAI into the corporate environment.
The discussion surrounding generative AI’s risks and benefits has gained prominence. UK Prime Minister Rishi Sunak recently called for increased awareness of the risks associated with generative AI. This comes ahead of the AI Safety Summit at Bletchley Park, where industry leaders and experts will discuss the future of regulation and the integration of this emerging technology.
Double-edged sword of generative AI
Fabien Rech, Senior Vice-President and General Manager at Trellix, highlighted the dual nature of generative AI, stating, “Generative AI is a double-edged sword – as the cybersecurity landscape continues to evolve, the proliferation of generative AI only adds further complexity to the mix.”
Rech stressed that organizations must understand the implications of generative AI and how the technology can be harnessed and integrated effectively. While generative AI tools can simplify day-to-day tasks and improve productivity, there are also concerns about their potential misuse in malicious activities such as code injection, phishing, social engineering and deepfakes.
He urged organizations to prioritize robust security measures and the integration of appropriate technology solutions to build resilient protection against cyber threats.