Who would win? Human or AI?

Alright, I admit it. My title was peak clickbait journalism. I’m not planning to write a sci-fi epic about the battle between flesh and metal to the bitter end.

My goal is to examine the capabilities of artificial intelligence in the field of cybersecurity and compare them to human-led security measures in similar use cases. More specifically, I will focus on Microsoft’s security stack, meaning the suite of security technologies offered by Microsoft.

What Matters? What Should Be Protected and Monitored?

In a large IT environment, there are hundreds of moving parts. There’s plenty to secure. Just vulnerability management alone can involve at least a three-digit number of operational targets. And what about cybersecurity’s classic weakest link? No, I’m not talking about Active Directory, though as a standalone technological component, it could very well fit the description. I’m referring to people—users. Those mobile, busy, and often thoughtless little penetration testers.

Securing data requires preparedness. We must ensure that systems are more or less up to date and correctly configured. This is what’s known as cyber hygiene, or proactive cyber defense. On the other hand, we must observe systems and their behavior—identify unusual activities and anomalies. This falls under reactive security.

Traditionally, security monitoring service providers have emphasized the importance of reactive security. The idea is to monitor the environment even in the (expensive) early hours of the morning. The often-heard mantra, “Cybercriminals don’t work office hours,” is used to justify why monitoring should continue outside regular working hours. This statement is likely true. I find it hard to believe that a criminal trade union is actively lobbying for working hour protections for outlaws. But does it really matter? And more importantly, is this argument still relevant in the cybersecurity mindset of 2025?

The answer is both yes and no. Yes, in the sense that a severe vulnerability in an e-commerce system, for example, could be exploited in the early hours of the morning, leading to a data breach. And no, because research suggests that the vast majority of threats (depending on the source, anywhere from 2/3 to 80–90%) come through cybersecurity’s weakest link: the user. And very few employees work 24/7 under a slave contract. As far as I know, such practices are even legally prohibited in civilised countries (though perhaps not in the US 😉).

Therefore, monitoring resources should be focused 70–90% on the hours when the weakest link (the user) is active and working.

Human vs. Robot as a Security Monitoring Worker?

It took a while to get to the main topic. Am I getting old and rambling? Well, maybe the background was useful, especially when considering the aspect of monitoring and comparing human intelligence to artificial intelligence.

Monitoring has two key aspects: response speed and detection capability. The first indicates how well and quickly a system reacts to risks or anomalies. The second determines how many anomalies can actually be detected. Neither aspect is useful on its own. If we recognize every single anomaly but only investigate them a month later, the attack has likely already achieved its goal, making our reaction too late and therefore useless. The thief came, saw, and conquered. On the other hand, if our response time is within a second but our detection capability only covers half of the systems, the thief could have come, seen, and conquered without us even knowing.

It’s an undeniable fact that a microprocessor’s response to stimuli is significantly faster than that of a human. A computer reacts to a command in milliseconds, while a human takes seconds at best—thousands of times slower. In terms of reaction speed, the robot wins hands down.

What about detection capability? A computer brain can scan a thousand lines of log data in the blink of an eye. Again, it vastly outperforms a human. Artificial intelligence doesn’t get tired or perform worse due to illness. Consistency is one of the most important measures of detection capability. And once again, the robot takes the win.
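To make the speed gap concrete, here is a minimal sketch. The log format and the detection rule are hypothetical, invented for illustration; the point is simply that a machine grinds through thousands of lines in milliseconds, a task that would take a human analyst considerably longer:

```python
import re
import time

# Hypothetical log lines in the form "<timestamp> <user> <event>".
log_lines = [
    f"2025-01-0{d % 9 + 1}T03:1{d % 6}:00 user{d % 50} LOGIN_OK"
    for d in range(10_000)
]
# Plant a single anomaly somewhere in the middle.
log_lines[1234] = "2025-01-03T03:14:00 user7 LOGIN_FAILED"

# A trivial "detection rule": flag failed logins.
pattern = re.compile(r"LOGIN_FAILED")

start = time.perf_counter()
hits = [line for line in log_lines if pattern.search(line)]
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"Scanned {len(log_lines)} lines in {elapsed_ms:.1f} ms, found {len(hits)} hit(s)")
```

Real detection engines obviously do far more than a single regex, but the scaling argument holds: machine throughput on raw telemetry is orders of magnitude beyond human reading speed.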

No matter how we look at it, humans need automation—computer intelligence—to support them. Both for detection capabilities, since humans are too inefficient and slow to process all relevant data, and for reaction speed, since human processing speed is inadequate. The real question isn’t human vs. robot—it’s about how much human involvement is needed. And why? Job preservation? Ethical contributions? There may be reasons. In some cases, it makes sense for a human to approve or make a decision based on AI-processed data and proposed outcomes. However, most counterarguments I’ve encountered stem from emotional reactions—statements like, “But that’s not how it’s always been” and “That just won’t do.”

Where Do Human Brains Win the Game?

Where do I see human intelligence as superior? At its best, a sharp and skilled cybersecurity expert possesses innovation that AI cannot yet replicate. We often talk about intuition or a gut feeling that something isn’t right. In reality, this is intuitive reasoning. Even a tech-enthusiast nerd like me doesn’t believe AI will match human intuition anytime soon.

How does this manifest in security operations? AI produces more false positives. For example, it may flag a security risk when the meticulous accountant, Paolo, suddenly starts making typos in his password on a Friday night. A human analyst might suspect fatigue or drunkenness as the cause of the typing errors.
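The Paolo scenario can be sketched as a naive detection rule. Everything here is hypothetical (the events, the threshold, the time window); it exists only to show that a mechanical rule fires on the pattern alone, with no notion of the human context a seasoned analyst would bring:

```python
from datetime import datetime, timedelta

# Hypothetical failed-login events: (user, timestamp).
events = [
    ("paolo", datetime(2025, 3, 7, 22, 1)),   # a Friday night
    ("paolo", datetime(2025, 3, 7, 22, 2)),
    ("paolo", datetime(2025, 3, 7, 22, 4)),
    ("maria", datetime(2025, 3, 7, 9, 15)),
]

def flag_users(events, threshold=3, window=timedelta(minutes=10)):
    """Naive rule: flag any user with >= threshold failures inside the window."""
    flagged = set()
    by_user = {}
    for user, ts in sorted(events, key=lambda e: e[1]):
        times = by_user.setdefault(user, [])
        times.append(ts)
        # Keep only timestamps that still fall inside the sliding window.
        times[:] = [t for t in times if ts - t <= window]
        if len(times) >= threshold:
            flagged.add(user)
    return flagged

print(flag_users(events))  # Paolo gets flagged; the rule knows nothing about fatigue.
```

The rule is not wrong as a rule, but it cannot distinguish a tired accountant from a credential-stuffing attack. That judgment call is exactly where the human analyst still earns their keep.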

Higher-level tasks, such as architecture and strategic assessments aligned with business requirements, remain beyond AI’s reach. Artificial intelligence performs best when dealing with predefined models and a limited number of variables.

This has been demonstrated in games. In chess, no human will beat a top engine anymore. The best human player, playing flawlessly, might at most achieve a draw. Chess has a well-defined, finite set of possible moves, and an engine can evaluate far more of them than any human. In a more complex game like Go, the gap took longer to close: the number of possible positions is vastly larger, and human players held their own for much longer (and can still compete against AI with limited computational power). And my colleague Massimo insists that AI also outperforms humans in No Limit Hold’em poker too. Go figure – or should I ask Copilot? 😉

Conclusion

AI is better suited for monitoring tasks that require fast responses. It doesn’t tire or make human errors. But the more complex the task, or the more it requires “psychological or political insight,” the more the human operator remains unbeatable.

Perhaps traditional first-line monitoring tasks should be assigned to robots, with the most critical decisions escalated to human supervisors.

Ultimately, it depends on the industry and use case. In some operations, it makes sense to have human monitors. In others, where constant vigilance and perfectly consistent execution are required, assigning the task to a human isn’t worthwhile.

However, I see less and less need for humans to handle repetitive routines or basic monitoring tasks—especially during hours when users aren’t actively introducing risk vectors through human errors. AI is a perfectly good security guard for an e-commerce server in the early morning hours!
