Cybersecurity is entering a new arms race, and the weapon reshaping the battlefield is AI.
Artificial intelligence is both shield and sword: a powerful defense for enterprises and an equally potent tool for attackers. Navigating this landscape requires precision, technical depth, and an understanding of adversaries who seek to exploit it.
To explore how industry leaders are adapting, AI News spoke with Rachel James, Principal AI ML Threat Intelligence Engineer at global biopharma giant AbbVie.
Headshot of Rachel James, Principal AI ML Threat Intelligence Engineer at AbbVie, for an article on business cybersecurity in the AI era.
“In addition to the vendor-provided AI augmentation in our existing tools, we also run LLM analysis on detections, correlations, and associated rules,” James explains.
Her team deploys large language models to cut through endless alerts, flagging duplicates, uncovering patterns, and spotting gaps before adversaries can exploit them.
“We use this to determine similarity, duplication, and provide gap analysis,” she notes, adding that the next stage will pull in more external threat data. “We aim to enhance this with deeper integration of threat intelligence in our next phase.”
At the heart of this strategy sits OpenCTI, a specialized threat intelligence platform that transforms fragmented signals into a coherent threat picture.
AI drives this entire operation, converting unstructured text into structured STIX intelligence. The long-term vision, James says, is to connect this intelligence layer with every part of security operations, from vulnerability tracking to third-party risk management.
Yet she stresses caution. As a key contributor to global industry initiatives, James knows well the hazards of poorly managed AI.
“I must mention the excellent group I work with – the ‘OWASP Top 10 for GenAI’ – as a foundation for understanding vulnerabilities introduced by GenAI,” she says.
Beyond individual flaws, James highlights three unavoidable trade-offs:
Accepting the inherent risk of generative AI’s creative but unpredictable behavior.
Losing transparency as models grow more complex and harder to explain.
Misjudging ROI amid AI hype, leading to inflated expectations or underestimated costs.
Building resilience means understanding attackers as much as tools. Here James’ expertise is vital.
“My specialty is cyber threat intelligence. I’ve researched in depth how threat actors explore, use, and evolve AI,” she notes.
She monitors adversary chatter and underground tool development via open sources and her automated dark web collections, sharing findings on her cybershujin GitHub. She also actively experiments with adversarial techniques.
“As the lead for the Prompt Injection entry at OWASP and co-author of the Guide to Red Teaming GenAI, I create adversarial input methods myself and maintain a strong peer network,” James adds.
Looking ahead, she sees striking alignment between two disciplines. “The cyber threat intelligence lifecycle mirrors the data science lifecycle that underpins AI ML systems,” she says.
That overlap is an opportunity. “Defenders have a unique chance to leverage intelligence data sharing and AI at scale,” she asserts.
Her closing message to peers mixes optimism with realism: “Data science and AI will soon be inseparable from cybersecurity. Embrace it.”
Rachel James will expand on these ideas at the AI & Big Data Expo Europe in Amsterdam on 24–25 September 2025. Catch her day two session on ‘From Principle to Practice – Embedding AI Ethics at Scale.’