"AI is a double-edged sword," Iain Paterson stated, his expression serious as he surveyed the room of technology executives. "We're using AI to automate threat detection and response. But attackers are doing the same, leveraging AI to find vulnerabilities at unprecedented speed."
As Chief Information Security Officer at WELL Health Technologies, Paterson brought a sobering perspective to our CIO ThinkTank Panel on AI & Business Transformation. The security landscape he described for 2025 is defined by an accelerating arms race where both defenders and attackers continually enhance their AI capabilities.
The pace continues to accelerate across the industry, and many organizations are not keeping up. SoSafe’s 2025 Cybercrime Trends report noted that 91% of security experts anticipate a surge in AI-driven threats over the next three years, while the same survey found that 55% of businesses have not fully implemented controls to manage the risks associated with their own in-house AI solutions. For security leaders, this is not merely a technical challenge but potentially an existential one. It’s clear that successful organizations will need to combine rapid AI adoption with robust, forward-thinking governance frameworks. Companies like Paragon Micro are leading the way by helping businesses balance these priorities, offering solutions that enable innovation while ensuring the necessary safeguards are in place to protect against evolving threats. Their expertise in IT infrastructure and security solutions provides a blueprint for organizations looking to integrate AI with strong governance from the outset.
Read on to learn more about getting the balance right between experimentation and strategic governance.
The room grew notably quieter as Paterson described the complexity of the challenges a modern CISO faces, from just how convincing AI-generated phishing emails have become to the difficulty of assessing an ever-expanding landscape of AI tools and capabilities.
Gone are the days of obvious grammatical errors and clumsy formatting—today's AI-crafted attacks are nearly indistinguishable from legitimate communications. Harnessing AI for our own business acceleration is therefore critical: defenders need a pace of response that drives down threats while opening up opportunities through effective AI adoption.
Malicious actors are expected to leverage multimodal AI to streamline attacks and exploit generative AI's growing capabilities to infiltrate networks and access sensitive information. Targeted campaigns are not new, but the way these attacks are carried out and the accessibility these tools provide makes the threat much more concerning.
Attackers could use AI agents to analyze public LinkedIn profiles of executives, then generate personalized phishing emails referencing real projects and industry-specific terminology. Five years ago, this kind of research would have taken days per target. Now it happens in minutes.
Even more concerning, sophisticated threat actors now employ "AI fuzzers" to discover zero-day exploits, compressing months of manual vulnerability research into mere hours.
"If you're patching vulnerabilities on a monthly cycle, you're already behind," Paterson warned, his tone urgent. "Attackers can weaponize newly found exploits within days, sometimes hours."
For many IT executives, this revelation prompted anxious note-taking and sideways glances at colleagues.
Despite these looming threats, Paterson's message wasn't one of despair but of strategic opportunity. By training machine learning models on network logs, endpoint data, and threat intelligence, security teams can dramatically reduce "dwell time"—the critical interval between an infiltration and its discovery.
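One common way to attack dwell time is unsupervised anomaly detection over features extracted from network and endpoint logs. The sketch below, using scikit-learn's IsolationForest on synthetic data, is purely illustrative: the feature names, values, and thresholds are assumptions for the example, not WELL Health's actual pipeline.

```python
# Minimal sketch: unsupervised anomaly detection over log-derived features.
# Feature names and data are illustrative; a real pipeline would parse
# network/endpoint logs and enrich them with threat intelligence.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Baseline behavior per host: [requests/min, egress_mb, failed_logins]
normal = rng.normal(loc=[50, 2.0, 1], scale=[10, 0.5, 1], size=(500, 3))

# An exfiltration-like outlier: huge egress and many failed logins.
suspicious = np.array([[300, 80.0, 25]])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns 1 for inliers and -1 for anomalies.
print(model.predict(suspicious)[0])
```

In practice the flagged events would feed an alert queue for triage; the point is that a model trained on baseline telemetry can surface an intrusion long before a monthly review would.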
"We integrated AI-driven tools in our Security Operations Center to handle repetitive triage steps," Paterson shared, his energy rising as he described their defensive innovations. "We cut incident handling time from 15 minutes to 6 minutes—and we're pushing for 2 or 3."
These efficiency gains translate directly into improved security posture. Research from IBM and the Ponemon Institute indicates that organizations using AI and automation in their security operations can save $2–3 million per data breach by containing incidents faster.
"AI security copilots can transform how analysts work," Paterson continued. "Rather than replacing humans, they can handle Level 1 tasks while our skilled analysts perform deeper threat hunting and complex investigations."
Human expertise is still essential, though. Take false positives, a perennial concern with automated security tools. Paterson acknowledged the challenge and described how their approach combines AI detection with human validation to produce high-confidence outcomes.
"Our false positive rate continues to drop after we implement AI models that learn from analyst feedback," he noted. "The key is building systems where humans and AI each do what they do best."
As a CISO who must lead responsible AI adoption across the workforce, Paterson also highlighted an increasingly common challenge he and his peers must juggle: securing their own in-house AI agents in parallel. "In many ways, proactively leading governance with AI adoption needs to have even more rigor when it comes to implementing controls with our own in-house AI agents that must be secure, safe, and effective," he shared.
For more insights on building a solid data foundation or tactics for effective AI governance, be sure to check out fellow panelist Joseph Martino’s Building AI on a Solid Data Foundation: Governance and Value, or the insights from the most recent CIO ThinkTank Building Success Through Strong AI Governance roundtables.
With AI tools proliferating across organizations, security leaders face a new challenge: maintaining AI governance without stifling innovation. Paterson described how WELL Health's AI Application Review Panel, which evaluates third-party risk, privacy concerns, and compliance alignment for new AI tools, works in practice, and why it was so important to establish early on.
Iain and the team at WELL Health don’t just manage the application review process well; they are constantly working to shorten the time it takes to complete a thorough, responsible review. "We have a one-week SLA to evaluate each AI product request," he explained. "That keeps pace with business demands without sacrificing security."
This approach recognizes that shadow AI—like shadow IT before it—emerges when security processes become bottlenecks. By creating streamlined governance processes, security leaders can remain relevant partners rather than obstacles to progress.
With healthcare data at stake, WELL Health also invests heavily in industry-leading encryption and privacy-preserving capabilities such as homomorphic encryption. Even if an attacker gains access to data repositories, the information remains encrypted and therefore meaningless to them.
"Layered defenses plus new cryptographic techniques make it harder for criminals to monetize stolen data," Paterson explained. "It's not just about preventing breaches—it's about making them worthless to attackers when they do occur."
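To make the "worthless to attackers" idea concrete, here is a toy version of the Paillier cryptosystem, one of the classic additively homomorphic schemes: anyone can multiply two ciphertexts, and the result decrypts to the sum of the plaintexts, so a system can compute on data it never sees in the clear. The tiny primes below are for illustration only; this is not a secure implementation, and it is not a description of WELL Health's actual cryptography.

```python
# Toy Paillier cryptosystem (tiny primes, NOT secure) illustrating the
# additively homomorphic property behind privacy-preserving computation.
import math
import random

p, q = 293, 433            # real deployments use ~2048-bit moduli
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1)
mu = pow(lam, -1, n)       # simplification valid when g = n + 1

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:     # r must be coprime to n
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    x = pow(c, lam, n2)
    return (((x - 1) // n) * mu) % n

# Additive homomorphism: E(a) * E(b) mod n^2 decrypts to a + b.
a, b = 17, 25
print(decrypt((encrypt(a) * encrypt(b)) % n2))  # 42
```

The takeaway matches Paterson's framing: with schemes like this, a stolen ciphertext is just noise, even while legitimate services can still aggregate the underlying values.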
Looking to the future, Paterson envisions AI adoption evolving from an "analyst's helper" to a fully autonomous security engine, capable of making real-time adjustments to network configurations and identity policies without human intervention.
"The biggest hurdle isn't technical—it's psychological," he observed. "Executives worry about an overzealous AI agent that locks out half the workforce by mistake. We'll have a human-in-the-loop for a while, but with each successful test, trust grows."
As validation, Paterson shared examples that echo industry findings: autonomous security systems detecting, isolating, and remediating threats such as a ransomware attempt within seconds—far faster than any human team could respond.
"Eventually, we'll rely on AI agents to patch or quarantine threats without waiting for permission," he predicted. "The speed of attacks simply doesn't allow for human deliberation at every step."
A spirited discussion followed, both at the event and in the roundtables, about the balance between autonomy and control in security operations—a tension many executives recognized from their own organizations. The consensus that emerged reflected Paterson's pragmatic approach: start with human oversight, prove value incrementally, and expand autonomy as confidence grows.
As our CIO ThinkTank panel of CIOs, CISOs, and CTOs concluded, Paterson left his fellow executives with an urgent reminder that resonated throughout the room:
"This is an arms race," he emphasized. "Either we embrace AI adoption to defend faster, or we'll be outmatched by AI-enabled attackers. Our job is to ensure our organization—and our customers—stay secure at the speed of AI agents."
The executives filing out of the event carried not just new insights but a heightened sense of urgency. In the world Paterson described, standing still means falling behind—and the consequences of falling behind in cybersecurity can be catastrophic.
This concludes our panel insights from our recent CIO ThinkTank discussions. Be sure to explore our roundtable insights that cover topics from strategic governance to rapid prototyping, data architecture to security transformation, and more.
Leading organizations are navigating AI adoption's unprecedented challenges and opportunities. We encourage you to apply these insights as you chart your own course through the AI revolution, and to stay in touch by subscribing to our newsletter for more insights.
As AI continues to evolve, gaining a comprehensive understanding of how it can be deployed securely is critical. To deepen your understanding, join us for our upcoming virtual CIO ThinkTank session "AI Agents at Work: Driving Value While Governing Risk." Don’t miss the opportunity to hear from experts in the field as they delve into the most pressing topics around AI governance and security.
Join Our Mailing List