AI: Australia’s Best Friend or Biggest Cyber Threat?
- Warwick Brown
- 10 hours ago
- 8 min read
Explore Warwick Brown’s analysis of 2026 AI trends, covering the $202,700 average cost of cybercrime for large Australian businesses, deepfake fraud, and critical board-level governance strategies for agentic AI.

Artificial intelligence is no longer a side conversation in the boardroom. It is the conversation. In January 2026, the World Economic Forum reported that 94% of global leaders identify AI as the single most significant driver of change in cybersecurity. Fewer than half are confident they can govern the exposure it creates. Australian organisations will either close this gap or bleed capital.
The question isn't whether AI belongs in your strategy. The question is whether you're treating it as a governed capability or an unmanaged experiment running at production speed.
The Signal: What Actually Changed
If 2024 was the year of AI curiosity, 2025 was the year it became infrastructure. The Australian Signals Directorate's Annual Cyber Threat Report 2024-25 paints a clear domestic picture: over 1,200 cyber incidents (up 11%), notifications to critical infrastructure entities of potential malicious activity up 83%, and average cybercrime costs for large businesses hitting $202,700 (a 219% increase year on year). Across 84,700 cybercrime reports, one was lodged every six minutes.
Globally, AI has widened the attacker's advantage. Moody's 2026 Cyber Risk Outlook warns of adaptive malware capable of rewriting its own code, AI agents that help attackers compress the kill chain from days to minutes, and early indications of autonomous attack campaigns. Moody's elevated autonomous cyber threats to the same level of credit risk as natural disasters. This is a financial assessment, not a technical one.
Meanwhile, AI adoption across Australian businesses is accelerating. CSIRO puts adoption at 68%, though this figure uses a broad definition covering any AI or machine learning integration; adoption among SMEs is substantially lower, at 29-37%. Separately, Deloitte Access Economics modelling suggests that if one in ten SMBs from both the basic and intermediate AI adoption groups advanced one rung on the maturity ladder, $44 billion could be added to GDP annually. The adoption curve is outpacing the governance curve by a wide margin.
The Attack Surface: Where AI Makes Things Worse
Social Engineering at Scale. The days of poorly written phishing emails are over. Microsoft's 2025 Digital Defense Report found that AI-automated phishing emails achieved a 54% click-through rate, compared with 12% for traditional campaigns, making AI phishing roughly 4.5 times as effective. A peer-reviewed academic study independently confirmed the same figures. Microsoft estimated that AI could enhance phishing profitability by up to 50 times.
In parallel, Cyble's Executive Threat Monitoring report found that AI-powered deepfakes were involved in over 30% of high-impact corporate impersonation attacks in 2025, fuelled by the mainstreaming of Deepfake-as-a-Service platforms.
The Arup case remains illustrative. A Hong Kong-based finance employee was tricked into transferring $25 million after a video call where every other participant, including the "CFO", was an AI-generated deepfake. As Arup's Global Chief Digital Information Officer Rob Greig later noted, none of their systems were compromised; it was "technology-enhanced social engineering". The WEF used it as a case study for AI-enabled corporate fraud.
Your authentication and authorisation controls need to account for the fact that seeing and hearing a colleague is no longer proof of identity.
Open-Source AI Supply Chain: The OpenClaw Wake-Up Call
The rapid rise of OpenClaw, an open-source personal AI agent that gained widespread adoption in late 2025, has become the defining case study for AI supply chain risk in early 2026. Its security score from ZeroLeaks? Two out of 100.
The findings matter not because OpenClaw is uniquely bad, but because it illustrates what happens when capability ships without security. An 84% data extraction success rate and 91% prompt injection success rate. CVE-2026-25253: a critical remote code execution vulnerability rated CVSS 8.8 by SentinelOne, exploitable via a single malicious link. Roughly a third of the 17,000+ internet-facing gateways identified by Hunt.io remain unpatched. Cisco's AI Threat and Security Research team found that 26% of the 31,000 agent "skills" analysed contained at least one vulnerability. One tested skill was functionally malware, enabling silent data exfiltration.
HiddenLayer demonstrated a full command-and-control attack via indirect prompt injection, achieving persistent remote code execution without user awareness. Malicious skills distributed macOS malware (Atomic Stealer) through OpenClaw's community marketplace. China's Ministry of Industry issued a formal security advisory on OpenClaw in February 2026.
HiddenLayer's conclusion:
"OpenClaw does not fail because agentic AI is inherently insecure. It fails because security is treated as optional in a system that has full autonomy, persistent memory, and unrestricted access to the host environment".
Cisco's enterprise warning:
"AI agents with system access can become covert data-leak channels that bypass traditional data loss prevention, proxies, and endpoint monitoring".
The lesson maps directly to a familiar problem: shadow IT.
Employees adopt capable tools that bypass enterprise controls. The difference is that these tools can now execute code, access files, and make network calls autonomously.
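To make that concrete, here is a minimal sketch, in Python, of the kind of allowlist gate an organisation can place between an agent and the tools it is permitted to invoke. The structure and names (ToolRequest, TOOL_POLICY, approve_tool_call) are illustrative assumptions rather than any particular agent framework's API; the principle is simply that anything not explicitly allowed is denied and logged.

```python
# Minimal sketch of a tool allowlist for an autonomous agent.
# All names here are illustrative assumptions, not a specific framework's API.
from dataclasses import dataclass

@dataclass
class ToolRequest:
    tool: str          # e.g. "read_file", "http_get", "run_shell"
    target: str        # file path, URL, or command the agent wants to touch
    requested_by: str  # agent or skill identifier, for audit logging

# Explicit allowlist: anything not listed here is denied by default.
TOOL_POLICY = {
    "read_file": {"allowed_prefixes": ["/srv/approved-data/"]},
    "http_get":  {"allowed_prefixes": ["https://internal-api.example.com/"]},
    # "run_shell" is deliberately absent: the agent may not execute arbitrary code.
}

def approve_tool_call(req: ToolRequest) -> bool:
    """Allow a tool call only if both the tool and its target are allowlisted."""
    policy = TOOL_POLICY.get(req.tool)
    if policy is None:
        return False  # unknown or unlisted tool: deny (and, in practice, log it)
    return any(req.target.startswith(prefix) for prefix in policy["allowed_prefixes"])

# A skill trying to send data to an external host is denied by default.
print(approve_tool_call(ToolRequest("http_get", "https://attacker.example.net/x", "skill-42")))   # False
print(approve_tool_call(ToolRequest("read_file", "/srv/approved-data/report.csv", "skill-42")))  # True
```

The same gate is where audit logging belongs: every denied call is a signal about what your shadow AI is actually trying to do.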
Foreign Model Risks: Trust, but Verify (and Then Verify Again)
The DeepSeek episode reinforced why model provenance matters. NIST's AI Safety Institute found that DeepSeek R1 responded to 94% of malicious requests when a common jailbreaking technique was used, compared with 8% for US frontier models. DeepSeek agents were 12 times more likely to follow malicious instructions designed to hijack them. Separately, Wiz Research discovered a publicly accessible ClickHouse database containing over a million lines of log streams with chat history, API keys, and backend details, completely open and unauthenticated.
Running a model locally and relying on a foreign cloud-hosted service are fundamentally different risk profiles. Your AI procurement process needs to distinguish between the two.
The Defence Dividend: Where AI Is Paying Off
It would be misleading to frame AI purely as a threat. The WEF reports that 77% of organisations have now adopted AI for cybersecurity, primarily for phishing detection (52%), intrusion and anomaly response (46%), and user-behaviour analytics (40%).
The shift in Security Operations Centres is real. Organisations processing vast volumes of daily security events are moving from reactive alert handling to continuous, data-driven threat anticipation by integrating deterministic, generative, and agentic AI. The operational benefit is measurable: faster triage, reduced analyst fatigue, and the ability to surface patterns that manual review simply cannot match at scale.
On the fraud prevention side, Experian reports that its identity verification and fraud prevention solutions helped clients avoid an estimated $19 billion in losses globally in 2025. That is a vendor's self-reported figure, but it signals the scale of AI-enabled fraud interception now in play.
But here is the qualifier that every CEO and board member needs to hear: AI compounds existing weaknesses at speed. If your identity controls are poor, AI will execute poorly governed processes faster. If your data classification is weak, AI will process and expose sensitive information more efficiently. The technology is an amplifier, not a substitute for engineering discipline.
Moody's framed it precisely: "AI-powered defense solutions are not a silver bullet; they introduce new risks and require strong governance. In an era of AI-enabled cybercrime, however, firms that solely rely on manual processes will fall behind, increasing their exposure to costly breaches".
The Agentic Shift: Claude Opus 4.6 and the New Operating Model
On February 5, 2026, Anthropic released Claude Opus 4.6, its most capable model for sustained, multi-step agentic work. It now ranks first on the Finance Agent benchmark and introduces "agent teams": multiple AI agents working in parallel, coordinating autonomously. With a one million token context window, adaptive thinking, and deep integration into coding and enterprise workflows, it represents a material step from "AI as chatbot" to "AI as worker".
This is not a product endorsement. It is a governance signal.
When AI models can plan complex workflows, spin up sub-agents, execute code, navigate software interfaces, and sustain tasks over long horizons with minimal human oversight, the question for your organisation becomes: where do we allow AI to propose actions? Where can it execute within guardrails? And where must a human sign off, with full logging and accountability?
The dual-use implications are stark. The same capabilities that make Opus 4.6 valuable for code review, financial analysis, and security operations also make equivalent tools valuable for reconnaissance, exploit development, and campaign automation. The governance framework inside your organisation matters more than the safety features inside any single model.
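One way to make those three questions operational is a tiered action policy: every AI-initiated action is classified as propose-only, guardrailed, or approval-required, and the classification is logged. The sketch below is a minimal illustration in Python; the tiers, action categories, and dollar threshold are assumptions to be replaced by your own risk appetite.

```python
# Sketch of a tiered human-in-the-loop policy for agentic AI actions.
# The tiers, categories, and threshold are illustrative assumptions.
from enum import Enum

class Tier(Enum):
    PROPOSE_ONLY = "AI drafts or recommends; a human executes"
    GUARDRAILED = "AI executes automatically within logged limits"
    HUMAN_APPROVAL = "AI must not act without a named approver signing off"

def classify_action(category: str, value_aud: float = 0.0) -> Tier:
    """Map an AI-initiated action to a governance tier (illustrative rules only)."""
    if category in {"draft_email", "summarise_document"}:
        return Tier.PROPOSE_ONLY
    if category == "payment" or value_aud > 10_000:      # illustrative threshold
        return Tier.HUMAN_APPROVAL
    if category in {"enrich_ticket", "close_low_risk_alert"}:
        return Tier.GUARDRAILED
    return Tier.HUMAN_APPROVAL  # default to the most restrictive tier

# A proposed $25 million transfer never lands in the auto-execute tier.
print(classify_action("payment", 25_000_000).name)   # HUMAN_APPROVAL
print(classify_action("close_low_risk_alert").name)  # GUARDRAILED
```

The Arup case is the obvious test: a $25 million transfer should never sit in the auto-execute tier, no matter how convincing the video call.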
The Governance Gap: Australia's "Light Touch" and What It Means for Boards
On December 2, 2025, the Australian Government released its National AI Plan, its most comprehensive statement on managing AI's expansion. It is built around three goals: capture the opportunity, spread the benefits, and keep Australians safe.
The critical pivot: the Government abandoned its earlier proposal for "mandatory guardrails" and adopted a "regulation where necessary" posture. Instead of a standalone AI act, it will rely on existing, technology-neutral laws, with existing regulators responsible for identifying and addressing AI-related harms. A new $30 million AI Safety Institute will monitor emerging risks and advise where stronger responses may be needed.
In January 2026, the ACSC published new guidance on managing cybersecurity risks of cloud-based AI, covering data leaks, privacy breaches, unreliable outputs, and supply chain dependencies. The practical advice: implement internal AI usage policies, anonymise data before uploading, maintain human oversight for high-stakes decisions, verify vendor compliance, and establish AI incident response mechanisms.
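As a small illustration of the “anonymise data before uploading” step, the sketch below strips obvious identifiers from a prompt before it leaves the organisation. The patterns are illustrative assumptions only; a real deployment would sit this behind a proper PII-detection or data loss prevention service rather than a handful of regular expressions.

```python
# Sketch of anonymising text before it is sent to a cloud-hosted model.
# Illustrative patterns only; production systems should use a dedicated
# PII-detection / DLP service covering names, addresses, TFNs, ABNs and more.
import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),                        # email addresses
    (re.compile(r"\b(?:\+?61|0)4\d{2}[ -]?\d{3}[ -]?\d{3}\b"), "[AU_MOBILE]"),  # AU mobile numbers
]

def anonymise(text: str) -> str:
    """Replace likely personal identifiers with placeholder tokens before upload."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Summarise the complaint from jane.doe@example.com, mobile 0412 345 678."
print(anonymise(prompt))
# Summarise the complaint from [EMAIL], mobile [AU_MOBILE].
```

The same principle extends to documents and datasets, not just prompts.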
Governance snapshot
For comparison:
South Korea: has the first comprehensive AI framework law (the AI Basic Act), with risk-based obligations for high-impact and generative AI, including services that affect Korean users. Enforcement begins in 2026, with regulators already issuing guidance.
European Union: implementing the AI Act, a risk-based regime with strict rules for high-risk and prohibited AI and obligations for general-purpose models. The law is in force, with most key requirements phasing in between 2025 and 2027.
United States: no single federal AI law; instead, a mix of light-touch federal and sectoral rules, while states experiment with their own frameworks. Industry self-regulation still dominates, and some state rules are being tested against federal limits.
Australia: taking a “regulate where necessary” approach, relying on existing technology-neutral laws, targeted reforms, and a new AI Safety Institute. The National AI Plan was released in December 2025, with guidance and oversight now ramping up.
Canberra is not going to tell you exactly how to govern AI. It is telling you that you are already responsible under existing law. If you are waiting for regulation to prescribe your governance framework, you are already behind.
Here's the uncomfortable question: why are we surprised that the government chose a light-touch approach? Australian business has spent three years asking for exactly this. Now we have it. The test is whether we genuinely know how to govern AI, or whether we just didn't want to be told how to do it.
What You Should Do This Quarter
This is not a comprehensive transformation program. These are five concrete actions that a CEO, board, CTO, or CIO can mandate before the end of Q1 2026:
Build an AI asset register. Identify every model, agent, and AI-enabled tool operating inside your environment. Include shadow AI. If you cannot answer “what AI are we running, where did it come from, and who controls it?”, you have a blind spot that attackers will find before you do. A minimal register sketch follows this list.
Apply supply chain discipline to AI. Treat AI models like any other critical supplier: provenance, attestation, isolation, and audit rights. The ACSC's AI supply chain guidance and OWASP's LLM Top 10 (particularly LLM03: Supply Chain and LLM06: Excessive Agency) provide ready-made frameworks. Use them.
Define human-in-the-loop boundaries. For every AI use case, decide explicitly where AI proposes, where it executes within guardrails, and where it must not act without human approval. Document it. Log it. Review it quarterly.
Stress-test your governance in dollar terms. Can your board articulate its AI risk appetite? Not in vague principles, but in terms of financial exposure, operational impact, and recovery time? If the answer is no, that is the first conversation to have.
Treat AI security as enterprise risk. AI governance belongs alongside cyber, privacy, and operational risk on the enterprise risk register. It is not a side project for the CISO or an innovation team experiment. It is a board accountability item, and the National AI Plan makes that expectation explicit.
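To make the first action concrete, here is a minimal sketch of what one entry in an AI asset register might capture. The field names and example assets are illustrative assumptions, not a prescribed schema; the value lies in being able to answer, for every row, what it is, where it came from, and who controls it.

```python
# Minimal sketch of an AI asset register. Fields and examples are illustrative
# assumptions, not a prescribed schema.
from dataclasses import dataclass

@dataclass
class AIAsset:
    name: str                # e.g. "invoice-triage-agent"
    kind: str                # "hosted model" | "local model" | "agent" | "AI feature in SaaS"
    provenance: str          # vendor or open-source origin, and where it is hosted
    data_access: list[str]   # systems and data classes the asset can read or write
    autonomy: str            # "propose only" | "execute within guardrails" | "needs approval"
    owner: str               # accountable business owner, not just the deploying team
    sanctioned: bool = True  # False captures shadow AI discovered in the environment

register = [
    AIAsset(
        name="invoice-triage-agent",
        kind="agent",
        provenance="open-source framework, self-hosted in an Australian region",
        data_access=["finance inbox", "ERP purchase orders"],
        autonomy="execute within guardrails",
        owner="CFO office",
    ),
    AIAsset(
        name="staff-installed browser AI assistant",
        kind="AI feature in SaaS",
        provenance="unknown vendor, cloud-hosted offshore",
        data_access=["anything visible in the browser"],
        autonomy="unknown",
        owner="unassigned",
        sanctioned=False,    # the shadow AI the register exists to surface
    ),
]

# A first governance report can be as simple as listing unowned or unsanctioned assets.
for asset in register:
    if not asset.sanctioned or asset.owner == "unassigned":
        print(f"Review required: {asset.name} ({asset.kind})")
```

Even a register this simple surfaces the conversation that matters: which assets have no accountable owner, and which were never sanctioned in the first place.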
The organisations that will win the AI race are not the ones that adopt fastest. They are the ones that adopt with the most discipline. AI is Australia's best friend when it is governed, measured, and tied to business outcomes. It is our biggest cyber threat when it is not.
The choice, as always, sits with leadership.