
Why human cybersecurity experts still matter in the age of AI

Exploring the essential role of human expertise alongside AI in modern cybersecurity, examining how human creativity, contextual understanding, and strategic thinking remain irreplaceable in zero-day vulnerability hunting.

Introduction

The cybersecurity landscape has witnessed a remarkable transformation with the integration of artificial intelligence into vulnerability research. Google’s Project Zero (the company’s elite security research team dedicated to finding zero-day vulnerabilities) has demonstrated that AI can boost vulnerability detection scores by up to 20 times on benchmarks such as CyberSecEval 2 (Meta’s comprehensive cybersecurity evaluation suite for large language models).

The team’s Naptime framework (an AI system developed by Google that lets large language models systematically investigate vulnerabilities using specialized security tools) allows the AI to mimic the methodical approach of human security experts.

Security researcher Sean Heelan discovered a critical zero-day vulnerability, CVE-2025-37899, in the Linux kernel’s SMB implementation (ksmbd) using OpenAI’s o3 model, showcasing AI’s growing sophistication.

Yet despite these impressive advances, human expertise remains not just relevant but absolutely essential in the zero-day hunting ecosystem.

The question isn’t whether AI will replace human researchers; it’s how to optimize the partnership between human creativity and artificial intelligence to create the most effective cybersecurity defenses.

How AI is changing cybersecurity

What AI does really well

Modern AI systems have achieved remarkable capabilities in automated threat detection. Machine learning algorithms can analyze vast amounts of data at machine speed, scanning networks and flagging vulnerabilities in milliseconds. AI-powered frameworks like Google’s Naptime have revolutionized how we approach systematic vulnerability research, allowing LLMs to use specialized tools like debuggers and scripting environments to perform iterative security analysis.

The technical achievements are undeniable. AI excels at:

  • pattern recognition (automatically spotting recurring indicators) across massive datasets
  • automated responses (machine-driven actions that security systems take immediately upon detecting threats or anomalies) to common threats
  • predictive analytics that can anticipate potential vulnerabilities before they’re exploited.

Google's Naptime framework demonstrates this capability with remarkable results: it improved performance by up to 20-fold on CyberSecEval 2 benchmark tests, achieving a score of 1.00 on buffer overflow tests (from 0.05) and 0.76 on advanced memory corruption tests (from 0.24).
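To make that tool-driven workflow more concrete, here is a minimal, illustrative Python sketch of the kind of iterative loop such a framework runs: the model repeatedly chooses an action, a tool executes it, and the observation is fed back in until the model reports a candidate finding for a human to review. All names here (ResearchSession, call_llm, run_tool, read_source) are hypothetical placeholders for illustration, not the actual Naptime API.

```python
from dataclasses import dataclass, field

@dataclass
class ResearchSession:
    """Running state of one investigation: the target and the transcript so far."""
    target: str
    history: list = field(default_factory=list)

def read_source(path: str, start: int, end: int) -> str:
    """Example tool: return a slice of the target source so the model can inspect it."""
    with open(path, encoding="utf-8", errors="replace") as f:
        return "".join(f.readlines()[start:end])

def investigate(session, call_llm, run_tool, max_steps=20):
    """Alternate between model reasoning and tool execution until the model
    reports a candidate finding, or the step budget runs out."""
    for _ in range(max_steps):
        action = call_llm(session.history)      # model picks the next step
        session.history.append(action)
        if action["type"] == "report":          # model claims a finding...
            return action["details"]            # ...handed to a human for review
        observation = run_tool(action)          # debugger, script runner, code browser, ...
        session.history.append({"type": "observation", "content": observation})
    return None  # nothing conclusive within the budget
```

The important point is the shape of the loop: the model drives hypothesis-testing with tools, but the final report still lands on a human analyst’s desk.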

Real-world impact

Think of AI as a security assistant that can read every security report published worldwide, monitor all your company’s computers simultaneously, and never get tired or distracted. This technology can scan your entire network in minutes and identify potential problems before they become serious threats.


Where humans remain irreplaceable

Business logic and contextual intelligence

Rather than replacing human experts, AI acts as a super-powered assistant that makes human cybersecurity professionals much more effective. Here’s how it works:

  • AI handles the routine work by scanning millions of files, monitoring network traffic, and filtering through thousands of security alerts

  • Humans focus on strategy by investigating complex threats, making important decisions, and developing long-term security plans

Google's Naptime system demonstrates this perfectly: it can perform basic security research like a junior analyst, but it still needs human experts to guide it and interpret the results.

Creative and intuitive problem-solving

AI operates on predefined algorithms and pattern recognition, but human hackers think outside the box. This creative thinking enables security researchers to discover entirely new attack vectors and understand how attackers might chain multiple minor issues into a significant security breach.

Human researchers can adapt their approach mid-investigation, following hunches and exploring unexpected paths that lead to breakthrough discoveries. This intuitive problem-solving capability is particularly valuable when dealing with sophisticated adversaries who actively work to evade AI-based detection systems.

Ethical judgment and strategic decision-making

When cybersecurity experts discover serious security flaws, they face important ethical decisions: Should they immediately warn the public? Give the software company time to fix it first? How long is too long to wait?

These decisions require human judgment about balancing public safety with responsible disclosure, calling for empathy, ethics, and strategic thinking that AI cannot provide.

Understanding people and psychology

Many cyber attacks don’t target technology directly; they target people. Social engineering attacks like phishing emails, fake phone calls, and impersonation scams rely on manipulating human psychology.

Human cybersecurity experts understand how these psychological tricks work because they understand human nature. They can train employees to recognize manipulation tactics and design security policies that account for human behavior.

Zero-day exploitation and custom attack development

While AI can detect known vulnerabilities and suspicious patterns, human hackers actively discover zero-day exploits: unknown security flaws that AI hasn’t encountered before. Human researchers can adjust their tactics on the fly, exploiting weaknesses that AI cannot predict or adapt to in real time.

The recent discovery of CVE-2025-37899 by OpenAI’s o3 model, while impressive, required human guidance and interpretation to understand the significance and develop appropriate mitigations.

Testing security like real attackers

Red teams (groups of ethical hackers who test company security by simulating real attacks) use human creativity, social skills, and adaptability to test security systems comprehensively. They might combine technical attacks with social engineering, physical security breaches, and creative problem-solving that AI cannot replicate.

The perfect partnership: AI + human expertise

Augmentation, not replacement

Rather than replacing human researchers, AI serves as a force multiplier that amplifies human capabilities. The most effective approach involves AI handling the “technical heavy lifting” by processing vast amounts of data and flagging potential issues, while humans focus on strategic analysis and creative problem-solving.

Google’s Naptime framework exemplifies this hybrid approach. The system allows LLMs to perform vulnerability research using specialized tools, closely mimicking the iterative, hypothesis-driven approach of human security experts. However, Project Zero researchers emphasize that current LLMs can only perform “rather basic” vulnerability research, highlighting the continued need for human expertise.

Managing information overload

A major challenge in cybersecurity is alert fatigue. Security systems produce such a high volume of warnings that human analysts struggle to keep up, with many of these alerts ultimately proving to be false alarms.

AI excels at sorting through these alerts, filtering out the false alarms, and prioritizing the real threats. This allows human experts to focus their time and energy on genuine security incidents that require creative thinking and strategic response.
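
As a rough illustration of what that filtering step can look like, here is a minimal Python sketch of an alert-triage pass: each alert gets a priority score, known-benign noise is down-weighted, and only the highest-scoring alerts reach a human analyst. The Alert fields, weights, and threshold are invented for the example, not a real detection model.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    source: str             # e.g. "EDR", "IDS", "email-gateway"
    severity: int           # vendor-reported severity, 1 (low) to 5 (critical)
    asset_criticality: int  # importance of the affected system, 1 to 5
    seen_before: bool       # matches a pattern previously confirmed as benign

def score(alert: Alert) -> float:
    """Combine a few simple signals into a priority score between 0 and 1."""
    base = 0.6 * (alert.severity / 5) + 0.4 * (alert.asset_criticality / 5)
    return base * (0.2 if alert.seen_before else 1.0)  # down-weight known noise

def triage(alerts: list[Alert], threshold: float = 0.5) -> list[Alert]:
    """Return only the alerts worth an analyst's attention, highest priority first."""
    escalated = [a for a in alerts if score(a) >= threshold]
    return sorted(escalated, key=score, reverse=True)

# Thousands of raw alerts go in; a short, ranked queue comes out for human review.
queue = triage([
    Alert("IDS", severity=2, asset_criticality=1, seen_before=True),
    Alert("EDR", severity=5, asset_criticality=4, seen_before=False),
])
```

In practice the scoring would come from a trained model or detection engine rather than fixed weights; the point is the hand-off: machines compress the volume, and people make the call on what remains.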


The future of human-AI collaboration

Elevated soft skills and strategic thinking

As AI handles more technical analysis, human skills like communication, collaboration, and strategic thinking become even more valuable. Security teams must effectively convey complex findings to stakeholders and make critical decisions that require human judgment.

The ability to translate technical vulnerabilities into business risk and communicate effectively with executive leadership becomes increasingly important as AI handles more routine technical tasks.

AI-augmented research teams

The future points toward AI-augmented threat hunting teams where human analysts collaborate with AI systems, combining human intuition with machine precision to create the most effective defense strategies. This partnership leverages AI’s speed and scalability while preserving the creative problem-solving and ethical judgment that only humans can provide.

Microsoft's recent Zero Day Quest 2025, which distributed $1.6 million for over 600 vulnerability submissions, demonstrates the continued value placed on human-driven security research, even as AI capabilities advance.

Continuous learning and adaptation

The most successful organizations will foster environments where AI systems continuously learn from new data while human experts provide insights and feedback, creating a dynamic learning environment that adapts to emerging threats.

This collaborative approach ensures that AI systems remain effective against evolving threats while human expertise guides strategic decision-making and ethical considerations.

Conclusion

The future of cybersecurity isn’t about choosing between artificial intelligence and human expertise; it’s about combining them effectively. AI provides incredible speed, analytical power, and the ability to process vast amounts of information. Humans provide creativity, contextual understanding, ethical judgment, and strategic thinking.

The discovery of CVE-2025-37899 using OpenAI’s o3 model perfectly illustrates this partnership: AI provided the analytical capability to examine thousands of lines of code and identify subtle patterns, while human expertise guided the search and validated the findings.

Organizations that successfully combine AI efficiency with human creativity and judgment will have the strongest defenses against cyber threats. The goal isn’t to replace human cybersecurity experts with AI, but to give them AI-powered tools that make them more effective at protecting what matters most.

At Senthorus, we believe it’s essential to build hybrid systems that leverage the speed and precision of AI while keeping human experts at the center. By combining the best of both worlds, we help our clients stay ahead of evolving threats and ensure that security decisions remain thoughtful, strategic, and effective.

References

  1. Google’s Naptime Framework to Boost Vulnerability Research
  2. Microsoft launches Zero Day Quest hacking event with $4 million in rewards
  3. Linux Kernel SMB 0-Day Vulnerability CVE-2025-37899 Uncovered using ChatGPT o3
  4. Google framework helps LLMs perform basic vulnerability research
  5. Microsoft Zero Day Quest
  6. Sean Heelan Used OpenAI o3 to Uncover CVE-2025-37899
  7. Google Introduces Project Naptime for AI-Powered Vulnerability Discovery