The UK government is seeking public views on a new set of voluntary rules for AI cybersecurity. The draft ‘AI Cyber Security Code of Practice’ sets out recommendations developers can follow to protect their AI products and services from hacking, sabotage, and tampering. Speaking at the CYBERUK conference, technology minister Saqib Bhatti said the guidelines would form the basis of a global standard for AI cybersecurity and help keep British businesses safe from cyber-attacks.
Securing AI systems
Bhatti said that to make the digital economy a powerful force for the country, the government must provide a secure environment for it to grow and develop. That, he said, is exactly what the new measures will do: AI models will be built to be resilient to adversaries from the design phase onwards. The Department for Science, Innovation and Technology (DSIT) and the National Cyber Security Centre (NCSC) drew up the code of practice for developing cyber-secure AI systems, building on the NCSC’s guidelines for secure AI system development published last year.
Publication of the draft AI Cyber Security Code of Practice comes at a time of mixed news for the UK cybersecurity scene. According to government figures, the sector grew by 13% last year, but half of businesses and almost a third of charities suffered breaches over the same period.
Rising demand for generative AI among businesses is likely to open new avenues of attack for cybercriminals. GenAI systems are particularly vulnerable to data poisoning and model theft, says Kevin Curran, professor of cybersecurity at Ulster University and a senior member of the Institute of Electrical and Electronics Engineers. If companies cannot explain how their GenAI systems work and how they reach their conclusions, accountability becomes a problem and other potential risks may go undetected.
Countering new cyber threats
The new AI cybersecurity guidelines will give businesses best-practice recommendations for handling these challenges, said the NCSC’s chief executive, Felicity Oswald. The codes of practice will help the cyber security industry design AI models and software that are secure and resilient against attack by malicious actors. Setting standards for security will strengthen collective resilience, Oswald said, and she praised the organizations that follow these requirements to keep the UK safe online. While the call for views remains open, companies working with AI applications can already begin taking steps to strengthen their security, Curran said.
Organizations should consult data protection experts and monitor changes in regulatory practice, he said, which not only helps them avoid legal problems but also maintains consumer trust by demonstrating ethical AI practices and data integrity. Other best practices include minimizing and anonymizing the data used, establishing data governance policies, securing data environments, keeping staff briefed on current security protocols, and carrying out regular impact assessments and audits.
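By way of illustration, here is a minimal Python sketch of two of those practices, data minimization and pseudonymization. The record fields and salt handling are hypothetical; the draft code of practice does not prescribe any specific implementation.

```python
import hashlib

# Hypothetical raw record; the field names are illustrative only.
record = {
    "user_id": "alice@example.com",
    "age": 34,
    "postcode": "BT52 1SA",
    "purchase_total": 42.50,
    "browser_fingerprint": "a1b2c3d4",  # collected but not needed downstream
}

# Data minimization: retain only the fields the system actually needs.
FIELDS_NEEDED = {"user_id", "age", "purchase_total"}
minimized = {k: v for k, v in record.items() if k in FIELDS_NEEDED}

# Pseudonymization: replace the direct identifier with a salted hash so
# records can still be linked without storing the raw identity. The salt
# must be kept secret and stored separately from the data.
SALT = b"store-this-secret-separately"
minimized["user_id"] = hashlib.sha256(SALT + minimized["user_id"].encode()).hexdigest()

print(minimized)
```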
Viscount Camrose, the Minister for AI and Intellectual Property, noted that today’s call for views on both codes of practice should be seen in the context of the Conservative government’s wider work on AI safety. The opposition Labour Party has yet to set out specific policies of its own, and the Green Paper on technology policy it promised last year has still not appeared. Nevertheless, shadow DSIT secretary Peter Kyle promised today that the party will unveil its views on AI in the coming weeks as part of a policy push ahead of the general election.