There's a version of the AI-in-cybersecurity story that is straightforwardly optimistic: AI enables faster threat detection, better anomaly analysis, and security operations that can scale to meet modern attack volumes without proportionally scaling headcount. There's another version that is straightforwardly alarming: AI lowers the barrier to entry for sophisticated cyberattacks, enables novel social engineering at unprecedented scale, and creates new attack surfaces through the AI systems themselves.

Both versions are true. The question that matters is which side is gaining more ground, and what the implications are for how we build and operate systems. My conclusion, after looking carefully at the evidence from 2025, is that the defensive applications are real and progressing — but that the threat landscape has expanded faster than most organizations have prepared for, and the asymmetry of AI-powered attacks may be more serious than the industry is admitting publicly.

Where AI Is Strengthening Defense

Threat detection and SOC automation is the category where AI has delivered the most concrete defensive value. Modern security operations centers are drowning in alert volume — a mid-size enterprise can generate millions of security events daily, and the signal-to-noise ratio in most SIEM environments is dismal. AI-powered detection systems from vendors like CrowdStrike (Falcon AI), Palo Alto Networks (Cortex XSIAM), and Darktrace have demonstrated real reductions in mean time to detect and mean time to respond.

The technical approach has evolved significantly. Early ML-based security tools relied on supervised learning on labeled attack datasets, which meant they worked well on known attack patterns and poorly on novel techniques. More recent approaches use unsupervised and self-supervised learning to model "normal" behavior for users, devices, and network flows, then flag deviations — a fundamentally more robust approach for detecting novel attacks that don't match known signatures. This behavioral baselining is where AI genuinely outperforms rule-based systems, because the baseline is continuously updated and can be nuanced enough to distinguish "developer running unusual queries at 2am" (normal) from "user accessing systems they've never touched before" (anomalous).

Vulnerability discovery and patch prioritization is another area where AI is providing measurable value. Tools like Microsoft's Security Copilot, Google's Big Sleep project (which has discovered zero-day vulnerabilities in production software using LLM-based automated analysis), and specialized vulnerability research tools are accelerating the identification of security weaknesses before attackers find them. Google's Big Sleep discovery of a stack buffer underflow vulnerability in SQLite in 2024 was a landmark demonstration of AI-assisted security research with real-world impact.

Code security analysis has improved dramatically. AI-powered SAST (static application security testing) tools now catch a meaningfully higher fraction of security vulnerabilities than rule-based approaches, with lower false positive rates. GitHub Advanced Security's AI features, Snyk's DeepCode, and Semgrep's AI-assisted rules have all shown progress on real codebases. AI coding assistants themselves are becoming security-aware — GitHub Copilot now flags when it's generating code patterns known to be vulnerable, though this feature is imperfect and shouldn't be relied on as a primary security control.

The Attacker's AI Toolkit

The offensive side of the AI-in-security ledger is where I think the industry is underreacting. Let me be specific about what has changed.

Phishing and social engineering at scale. The primitive era of Nigerian prince scams is over. AI-powered phishing campaigns can now generate contextually relevant, grammatically flawless emails that reference real professional details (scraped from LinkedIn, company websites, and social media), impersonate specific individuals convincingly, and adapt their approach based on target responses. What previously required significant manual effort per target can now be automated at scale. The observed increase in business email compromise (BEC) attacks in 2025 correlates with AI tooling becoming accessible to criminal organizations.

Voice cloning and deepfake fraud. Real-time voice cloning — generating audio that convincingly mimics a specific person's voice — has moved from research demonstrations to operational attack capability. The $25 million deepfake CFO impersonation fraud at a Hong Kong firm in early 2024 was not an isolated incident; it was a preview. Financial institutions and enterprises are scrambling to implement verification protocols that don't depend on voice recognition, but change management in this area is slow.

AI-accelerated exploit development. LLMs have lowered the barrier to developing functional exploits from vulnerability descriptions. A security researcher who previously needed weeks to develop a proof-of-concept exploit for a newly disclosed vulnerability can now do it significantly faster with AI assistance. This accelerates the exploit-vs-patch race that defenders are always running. It also means that nation-state-level technical capability in offensive security is becoming more accessible to less sophisticated threat actors.

LLM-specific attacks on AI systems. As organizations deploy LLM-based applications, they create new attack surfaces. Prompt injection — providing inputs designed to override an AI system's instructions and cause it to take unintended actions — is a class of attack with no clean defense. An AI agent with access to email, calendar, and file systems is a significantly more dangerous prompt injection target than a simple chatbot. OWASP has published an LLM Top 10 vulnerability list, and the security research community is actively studying these attack classes, but the ecosystem of defensive tooling is still immature.

Automated vulnerability scanning at scale. AI lowers the cost of reconnaissance and target identification for attackers. Automated tools can now scan large target populations, identify likely vulnerable configurations, and prioritize targets with unprecedented efficiency. The economics of opportunistic attack have shifted dramatically in attackers' favor.

The Asymmetry Problem

Here's the structural challenge: in cybersecurity, the attacker has inherent advantages. They need to find one way in; defenders need to close all ways in. AI amplifies both sides, but it amplifies the attacker's position-of-advantage more than defenders like to admit.

The defender's AI advantage requires significant upfront investment: enterprise security tooling, skilled personnel who can operate and tune AI security systems, data infrastructure, and ongoing operational effort. The attacker's AI advantage is increasingly accessible via commodity services. Criminal-as-a-service offerings now include AI-powered phishing toolkits available for rental. The democratization of AI capability is not symmetrically beneficial.

This doesn't lead me to fatalism — it leads me to urgency about raising the floor of baseline security practices across the industry. The organizations most vulnerable to AI-powered attacks are those still running on security fundamentals from a decade ago: phishing-vulnerable authentication, unpatched systems, absent network segmentation, and security teams that are reactive rather than proactive.

Practical Implications for 2026

For security practitioners, the implications are fairly concrete:

Assume phishing will defeat email filtering. Multi-factor authentication that is phishing-resistant (FIDO2/passkeys, hardware security keys) should be standard for any system with significant access. AI-generated phishing will defeat most conventional email filters and a meaningful fraction of user awareness training.

Treat AI systems as high-value attack targets. Any LLM deployment that has access to sensitive data or the ability to take actions needs to be treated with the same security rigor as any other high-value system — penetration tested, monitored, and designed with the assumption that prompt injection attempts will occur.

Invest in behavioral monitoring, not just perimeter security. AI-powered behavioral baselines are your best early warning system for novel attack techniques. Zero-trust architecture combined with continuous behavioral monitoring is the right long-term security posture.

AI for AI security. The most interesting emerging category is using AI specifically to test and probe AI systems — red-teaming LLM deployments at scale, generating adversarial inputs, and identifying failure modes before attackers do. This is an area where investment is urgently needed and currently undersupplied.

The cybersecurity arms race has always been adversarial. AI has made both sides more capable, but the threat landscape has expanded faster than the defensive posture of most organizations. 2026 is a year to close that gap with urgency.