Why Security Leaders Are Calling for Immediate AI Regulation, Including DeepSeek - Om Softwares

Anxiety is mounting among Chief Information Security Officers (CISOs) inside security operation centres, particularly regarding the Chinese AI powerhouse DeepSeek.

AI was once hailed as a dawn of efficiency and innovation for business, but for those standing on the front lines of enterprise defence, it’s casting far longer and darker shadows.

Four in five (81%) CISOs across the UK believe the Chinese chatbot demands urgent government regulation. They warn that without immediate oversight, the system could trigger a national-scale cyber emergency.

This concern is not hypothetical; it’s rooted in a technology whose data use and potential misuse are setting off alarm bells at the highest enterprise levels.

The findings come from a survey of 250 CISOs at major British organisations, commissioned by Absolute Security for its UK Resilience Risk Index Report. The evidence indicates that the once-theoretical AI threat has now arrived directly on the desks of security chiefs, and their reactions have been decisive.

In what would have seemed nearly unthinkable a few years ago, more than a third (34%) of security leaders have already enforced blanket bans on AI tools due to cyber risks. A similar portion, 30 percent, have withdrawn certain deployments from active use inside their organisations.

This withdrawal is not rooted in fear of progress but reflects a practical response to an escalating dilemma. Enterprises already contend with complex and aggressive threats, such as the high-profile Harrods breach. CISOs admit they are struggling to keep pace, and the addition of advanced AI to the attacker’s arsenal is a challenge many feel unable to manage.

A widening readiness gap for AI systems like DeepSeek

The central issue with platforms like DeepSeek is their capability to expose confidential business data and be weaponised by malicious actors.

Three in five (60%) CISOs predict a surge in cyberattacks because of DeepSeek’s spread. The same proportion say the tool is already complicating their privacy and governance models, making an already taxing role nearly unmanageable.

This has reshaped perspectives. Once imagined as a potential silver bullet for security, AI is increasingly viewed as a part of the danger. The survey shows that 42 percent of CISOs now see AI as more of a threat than an asset to defence.

Andy Ward, SVP International at Absolute Security, explained: “Our research underlines the substantial risks introduced by rising AI tools such as DeepSeek, which are quickly transforming the cyber threat arena.

“As worries increase over their ability to accelerate attacks and endanger private information, firms must urgently reinforce cyber resilience and modernise security frameworks to keep pace with AI-driven dangers.

“That is why four in five CISOs in the UK are pressing for government rules. They have seen how rapidly this technology is moving and how easily it can surpass today’s defences.”

Perhaps most concerning is the open admission of being unready. Nearly half (46%) of senior CISOs concede that their teams are not prepared to counter AI-driven threats. They are witnessing tools like DeepSeek evolving faster than their ability to respond, creating a dangerous exposure gap many argue can only be addressed by national intervention.

“These are not abstract concerns,” Ward added. “The fact that companies are already banning AI systems outright and reworking strategies due to the risks from LLMs like DeepSeek shows the urgency.

“Without a unified regulatory framework – one that defines deployment, governance, and monitoring – we risk massive disruption across all sectors of the UK economy.”

Enterprises are investing to avoid crisis during AI adoption

Yet despite this defensive stance, firms are not abandoning AI altogether. What’s happening is a pause, not a permanent retreat.

Organisations acknowledge AI’s enormous potential and are pouring investment into safe adoption. Indeed, 84 percent say that hiring AI experts is a top priority for 2025.

The investment stretches to board level: 80 percent of companies have pledged AI training for their C-suite leaders. The plan is twofold: enhance existing teams so they can manage the technology, and bring in dedicated specialists to handle its growing complexity.

The expectation – and perhaps hope – is that cultivating internal expertise will balance out intensifying external threats.

The message from UK security leaders is unequivocal: they do not wish to stifle AI innovation but to ensure it evolves securely. For that, they need stronger collaboration with government.

The path ahead demands clear rules, formal oversight, a pipeline of skilled AI workers, and a national strategy to mitigate the security dangers of DeepSeek and the new generation of powerful AI platforms destined to follow.

“The debate is over. We need swift action, policy, and regulation to ensure AI remains a driver of progress, not a spark for crisis,” Ward concluded.