UK critical systems at increased risk from ‘digital divide’ created by AI threats
Cyber chiefs at the National Cyber Security Centre (NCSC) have warned that over the next two years a growing divide will emerge between organisations that can keep pace with AI-enabled threats and those that fall behind, exposing the latter to greater risk and intensifying the overall threat to the UK’s digital infrastructure.
A new report, launched by Pat McFadden, Chancellor of the Duchy of Lancaster, at the NCSC’s CYBERUK conference, outlines how artificial intelligence will shape the cyber threat between now and 2027, highlighting that AI will almost certainly continue to make elements of cyber intrusion operations more effective and efficient.
It warns that, by 2027, AI-enabled tools are set to enhance threat actors’ ability to exploit known vulnerabilities, adding that whilst the time between vulnerability disclosure and exploitation has already shrunk to days, AI will almost certainly reduce this further, posing a challenge for network defenders.
The report also suggests that the growing incorporation of AI models and systems across the UK’s technology base, particularly within critical national infrastructure and where there are insufficient cyber security controls, will almost certainly present an increased attack surface and opportunities for adversaries.
As AI technologies become more embedded in business operations, organisations are being urged to act decisively to strengthen cyber resilience and mitigate AI-enabled cyber threats.
Paul Chichester, NCSC Director of Operations, said:
“We know AI is transforming the cyber threat landscape, expanding attack surfaces, increasing the volume of threats, and accelerating malicious capabilities.
“While these risks are real, AI also presents a powerful opportunity to enhance the UK’s resilience and drive growth—making it essential for organisations to act.
“Organisations should implement strong cyber security practices across AI systems and their dependencies and ensure up-to-date defences are in place.”
The integration of AI and connected systems into existing networks requires a renewed focus on fundamental security practices. The NCSC has published a range of advice and guidance to help organisations take action, including by using the Cyber Assessment Framework and 10 Steps to Cyber Security.
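One fundamental practice behind that advice is keeping the software dependencies underpinning AI systems patched. As a purely illustrative sketch (not taken from the NCSC guidance), the Python snippet below flags outdated packages in a project’s environment so that pending updates can be surfaced in a CI pipeline; it assumes pip is available in the environment being checked.

```python
# Illustrative sketch: list Python packages with newer releases available,
# as one small part of keeping an AI project's dependencies up to date.
import json
import subprocess
import sys

def outdated_packages() -> list[dict]:
    """Return pip's view of installed packages that have newer versions."""
    result = subprocess.run(
        [sys.executable, "-m", "pip", "list", "--outdated", "--format=json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)

if __name__ == "__main__":
    stale = outdated_packages()
    for pkg in stale:
        print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")
    # Exit non-zero so a CI job can fail when updates are pending.
    sys.exit(1 if stale else 0)
```

A check like this only covers version drift; it is no substitute for the broader controls described in the Cyber Assessment Framework and 10 Steps to Cyber Security.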
The report also highlights that, in the rush to bring new AI models to market, developers will almost certainly prioritise speed of development over sufficient cyber security, increasing the threat from capable state-linked actors and cyber criminals.
Earlier this year, the UK government announced the new AI Cyber Security Code of Practice, produced by the NCSC and the Department for Science, Innovation and Technology (DSIT), which will help organisations develop and deploy AI systems securely.
The Code of Practice will form the basis of a new global standard for secure AI through the European Telecommunications Standards Institute (ETSI).
The assessment builds on the NCSC’s previous report, The near-term impact of AI on the cyber threat, published in January 2024, and highlights the most significant impacts of AI developments on cyber threats to the UK over the coming years.
View The Impact of AI on Cyber Threat – From Now to 2027 assessment

Kerry is a Content Creator at www.systemtek.co.uk. She has spent many years working in IT support, and her main interests are computing, networking and AI.