
AI is transforming cyberattacks by supercharging long-standing human vulnerabilities with speed, scale, and believability.
While artificial intelligence won’t cause every problem, nor solve every one, it is making existing attack techniques more effective and harder to detect. Human error remains the root cause of most breaches, with 84% involving manipulation or mistakes, and AI is increasingly being used to exploit this weak link.
In phishing, smishing (SMS-based phishing), business email compromise, and wire fraud, the attacker still needs a human target to fall for the trap.
But AI is making these attacks more sophisticated and harder to spot. AI-generated emails, cloned voices, and deepfake videos are boosting the speed of influence attacks, flooding inboxes with believable messages faster than ever.
Some reports show users receive around 88 emails a day, many carrying links or attachments that appear trustworthy thanks to AI-assisted writing and formatting.
AI also enables scale. In recent attacks on Australia’s superannuation funds, compromised credentials sourced from the dark web or elsewhere were rapidly used to access systems en masse.
Threat actors can now process vast datasets to identify vulnerable targets more efficiently. The same machine learning techniques used by security teams for risk analysis are being leveraged by attackers to plan high-return operations.
Perhaps most alarming is how AI enhances believability. It can craft messages and interactions that mimic trusted colleagues, exploiting emotional and psychological triggers.
One example involved a target transferring more than a million dollars to an attacker who gradually won her trust using AI-generated information. These tactics build on social engineering principles honed in influence operations and are now being automated.
Looking ahead, AI’s growing ability to find vulnerabilities in code, including "zero-days" (flaws not yet discovered by software makers), poses an even greater technical threat: AI’s speed in locating them could outpace human-led patching.
The most concerning future use? AI agents that autonomously engage in back-and-forth social engineering conversations, adapting their approach in real time to manipulate victims. This would mark a shift from human-led to machine-driven persuasion.
Ultimately, AI isn’t replacing human risk — it’s amplifying it. As attackers harness AI to increase trust in false messages, organisations must double down on awareness, behavioural training, and adaptive cybersecurity controls.
Strengthen your organisation’s "human firewall" through tailored, ongoing social engineering training, not just generic phishing simulations. Training should evolve beyond "don't click links" to helping staff, executives, and boards recognise psychological manipulation and influence tactics.
Other tactics include:
- Use role-based access controls (RBAC) to prevent privilege escalation.
- Move beyond static plans to dynamic playbooks for specific attack types, and test them regularly, including with executives.
- Implement layered tools such as multi-factor authentication (MFA), security operations centres (SOCs), and managed detection and response (MDR).
- Require dual approvals, slow down fund transfers, and add secondary verification, especially for account changes (a minimal illustration of such checks follows this list).
- Accept that attackers now move faster and smarter. The goal isn’t to be unbreachable; it’s to avoid being the slowest target and to recover quickly.
- Build a security culture. Including HR, communications, and marketing leaders in cyber discussions helps embed security into everyday business.
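To make the dual-approval and verification point concrete, here is a minimal Python sketch of the kind of payment-release policy check described above. The threshold, role names, and request fields are illustrative assumptions, not any particular payment system’s API.

```python
from dataclasses import dataclass, field

# Hypothetical transfer request used to illustrate dual approval and
# secondary verification. Field names and values are illustrative only.
@dataclass
class TransferRequest:
    amount: float
    destination_account: str
    destination_changed_recently: bool           # e.g. payee details edited in the last few days
    approvals: set = field(default_factory=set)  # user IDs that have approved the transfer
    callback_verified: bool = False              # confirmed via a known phone number, not the email thread

DUAL_APPROVAL_THRESHOLD = 10_000  # assumed policy limit; set to suit the organisation

def can_release(request: TransferRequest, requester: str) -> bool:
    """Return True only when every layered check passes."""
    # Separation of duties: the requester cannot count as an approver.
    approvers = request.approvals - {requester}

    # Dual approval: large transfers need two distinct approvers.
    if request.amount >= DUAL_APPROVAL_THRESHOLD and len(approvers) < 2:
        return False

    # Secondary verification: recently changed payee details require an
    # out-of-band callback before funds move, deliberately slowing the transfer.
    if request.destination_changed_recently and not request.callback_verified:
        return False

    return True

# Usage: a large transfer to a just-changed account is held until a second
# approver and an out-of-band verification are both recorded.
req = TransferRequest(amount=250_000, destination_account="AU-xxxx",
                      destination_changed_recently=True)
req.approvals = {"ops.manager"}
print(can_release(req, requester="payments.clerk"))  # False: one approver, no callback
req.approvals.add("finance.lead")
req.callback_verified = True
print(can_release(req, requester="payments.clerk"))  # True: dual approval and verification met
```

The deliberate friction is the point: a second approver and an out-of-band callback give people time to question a request that an AI-polished email made look routine.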
Attributable to Dan Elliot, Head of Cyber Resilience for Australia and New Zealand, Zurich Reinsurance Solutions