In the lead-up to the UK’s AI Safety Summit, a recent study has shed light on significant concerns among UK IT professionals about the deployment of generative artificial intelligence (AI) applications. According to the study, conducted by O’Reilly in September 2023, a staggering 93% of UK IT professionals have apprehensions about their organization’s ambitions for generative AI.
Inadequate training tops concerns
The foremost worry among IT professionals is a perceived lack of understanding and training at the executive level, cited by 28% as their primary concern. This raises questions about the readiness of top leadership to oversee generative AI implementations effectively.
Risk assessment and operational understanding
The study further highlights that 23% of IT professionals express concerns about the absence of comprehensive risk assessments, while 22% worry about an insufficient grasp of the operational aspects of generative AI. These concerns indicate that organizations may dive into AI initiatives without fully comprehending the associated risks and operational complexities.
UK government’s AI ambitions
While the UK government aims to create a conducive regulatory environment for AI through the upcoming Global AI Safety Summit, the study reveals that 25% of IT professionals lack confidence in their organization’s current capabilities to ensure compliance with evolving AI regulations.
Confidence levels vary
In contrast, 51% of IT professionals feel ‘somewhat’ confident that their organizations possess the skills necessary to keep up with the changing regulatory landscape. These varying confidence levels underscore the need for a more robust and unified approach to AI regulation.
Trevor Dearing, director of critical infrastructure at Illumio, emphasizes the need to take immediate action: “If we’re serious about protecting the nation against AI, then we must echo the US strategies of mandating the implementation of security strategies like Zero Trust.” Dearing suggests that adopting such strategies can help reduce the potential impact of AI attacks.
Are we prepared for AI?
The O’Reilly report underscores a potential gap between the UK’s aspirations to lead in AI and IT professionals’ actual skills and preparedness. Approximately 71% of IT teams believe the digital skills gap could hinder the UK government’s ambition to become a global AI leader.
Despite substantial investments in generative AI, the study highlights that workplace policies and staff training have not kept pace. Significantly, employees outside of IT departments have received only limited training (32%) or no training at all (36%) on the impact of generative AI on the workplace.
Concerns about employee training
This lack of employee training is cited as a significant concern by 27% of IT professionals, on par with their concerns about the advanced cybersecurity threats these technologies pose. The report suggests that comprehensive training programs must be implemented to bridge this knowledge gap.
Lack of AI policy in businesses
The O’Reilly study reveals that 41% of IT professionals report the absence of a workplace policy for using generative AI technologies, with an additional 11% unsure of their organization’s policy status.
Lack of formal policies
A recent ISACA study of 2,300 digital trust professionals found that only 10% of organizations have formal, comprehensive policies governing the use of AI technology. This lack of policies raises questions about data security and the ethical use of AI within organizations.
Demand for upskilling
In response to these challenges, 82% of IT professionals want more learning and development opportunities related to generative AI. Notably, 61% are considering changing employers in the next year if their organization fails to provide upskilling opportunities in generative AI.
While 70% of those who participated in the ISACA Generative AI 2023 Survey believe AI will positively impact their jobs, a striking 81% of them acknowledge the need for additional training to retain their jobs or advance their careers. This highlights the importance of continuous learning in the evolving AI landscape.
A call for action
Alexia Pedersen, VP of EMEA at O’Reilly, emphasizes the importance of investing in generative AI and ensuring that staff are adequately trained while implementing robust workplace policies. She states, “This is not only a strategy for improved recruitment and retention in the face of a widening skills gap but also a necessary step to guarantee ethical and safe AI deployments if Britain wants to fulfill its global ambitions.”
As the UK aims to navigate the complex terrain of generative AI, it is clear that both the private sector and the government must address concerns related to executive training, regulatory compliance, workforce education, and policy development to harness the full potential of AI while safeguarding against its risks. The upcoming AI Safety Summit presents a critical opportunity to collaborate and develop a unified approach towards these challenges.