Google’s Threat Intelligence Group just published one of the most detailed reports to date on how adversaries are using AI to accelerate attacks. For Australian CISOs, five findings demand immediate attention.
The GTIG AI Threat Tracker, published in early 2026 and based on Q4 2025 observations, moves the conversation beyond hypothetical AI threats. This is not speculation about what attackers might do with AI. It is documented evidence of what state-backed and financially motivated threat actors are already doing, and the patterns map directly to risks that mid-market Australian organisations face today.
Risk 1: AI-Augmented Phishing Has Eliminated the Obvious Tells
For years, defenders relied on grammar mistakes, awkward phrasing, and cultural missteps to help users identify phishing attempts. That detection method is now effectively dead.
Google’s report documents state-backed actors, including Iran’s APT42 and North Korea’s UNC2970, using large language models to generate hyper-personalised phishing lures that mirror the professional tone of target organisations. APT42 used Gemini to research targets’ biographies and craft credible personas for social engineering. UNC2970 used it to profile defence sector targets, map organisational hierarchies, and synthesise open-source intelligence for high-fidelity phishing campaigns.
The critical shift is from single-shot phishing to rapport-building phishing, where AI maintains multi-turn conversations to build trust before delivering a payload. This makes traditional email filtering and user awareness training substantially less effective.
What to do now: Update phishing simulation programs to include AI-quality lures. Assume that phishing emails will be grammatically perfect, culturally appropriate, and contextually relevant. Train staff to verify identity through out-of-band channels rather than relying on content quality as an indicator.
Risk 2: Model Extraction Attacks Are Real and Growing
Google documented a surge in “distillation attacks”, in which adversaries use legitimate API access to systematically probe AI models and extract their reasoning capabilities to train competing models. One campaign involved over 100,000 prompts designed to coerce Gemini into revealing its internal reasoning traces.
This matters for any organisation that has built custom AI models or fine-tuned commercial models with proprietary data. If those models are accessible via API, they are targets for extraction. A competitor could potentially replicate your model’s specialised capabilities at a fraction of the cost.
What to do now: If your organisation operates custom AI models or fine-tuned deployments, monitor API access patterns for systematic querying that suggests extraction attempts. Implement rate limiting and anomaly detection on model endpoints. Review terms of service compliance for any third-party models being used internally.
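As a starting point, the sketch below shows one way to flag systematic querying in model API access logs. It is a minimal illustration rather than a complete detection: the log fields (client_id, prompt), the thresholds, and the prompt-prefix heuristic are all assumptions to adapt to your own gateway or logging pipeline.

```python
# Minimal sketch: flag API clients whose query volume and prompt uniformity
# suggest systematic model-extraction probing. Assumes access-log records
# with hypothetical fields: client_id and prompt.
from collections import defaultdict

VOLUME_THRESHOLD = 5000         # prompts per review window worth a look (tune per workload)
TEMPLATE_RATIO_THRESHOLD = 0.2  # low unique-prefix ratio suggests templated probing

def flag_extraction_suspects(records):
    by_client = defaultdict(list)
    for record in records:
        by_client[record["client_id"]].append(record["prompt"])

    suspects = []
    for client, prompts in by_client.items():
        if len(prompts) < VOLUME_THRESHOLD:
            continue
        # Extraction campaigns often reuse a small set of prompt templates,
        # so the ratio of distinct prompt prefixes to total prompts stays low.
        prefixes = {p[:80] for p in prompts}
        ratio = len(prefixes) / len(prompts)
        if ratio < TEMPLATE_RATIO_THRESHOLD:
            suspects.append({
                "client_id": client,
                "prompt_count": len(prompts),
                "unique_prefix_ratio": round(ratio, 3),
            })
    return suspects

if __name__ == "__main__":
    sample = [{"client_id": "c1", "prompt": f"Explain step {i % 10}"} for i in range(6000)]
    print(flag_extraction_suspects(sample))
```

Pairing a heuristic like this with hard rate limits at the API gateway makes it much harder for a single client to issue tens of thousands of probing prompts unnoticed.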
Risk 3: AI-Integrated Malware Is No Longer Theoretical
The report documents HONESTCUE, a malware family that calls Google’s Gemini API to generate functional code at runtime, specifically code that downloads and executes second-stage payloads. The malware compiles and runs the AI-generated code directly in memory, leaving no artefacts on disk.
This represents a meaningful evolution. Traditional malware carries its payload. AI-integrated malware generates its payload on demand, making signature-based detection significantly harder. The malware itself appears functionally benign until it receives AI-generated instructions.
Separately, the report identified COINBAIT, a phishing kit built using the AI-powered platform Lovable AI, masquerading as a cryptocurrency exchange. The kit was constructed as a full React application with complex state management, a level of sophistication that AI code generation made accessible to actors with limited technical skill.
What to do now: Ensure endpoint detection and response (EDR) solutions can detect in-memory code execution and runtime compilation. Monitor for unexpected outbound API calls to AI service endpoints from production systems. Review network rules for traffic to backend-as-a-service platforms from uncategorised or newly registered domains.
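The sketch below illustrates the outbound-monitoring idea: reviewing egress proxy logs for production hosts that call generative AI API endpoints without an approved reason. The domain list, the allow list of approved hosts, and the CSV column names are illustrative assumptions; a real deployment would source these from your own proxy or firewall telemetry.

```python
# Minimal sketch: review egress proxy or DNS logs for production hosts calling
# generative-AI API endpoints that have no approved business use. The domain
# list and log format are illustrative assumptions, not a complete inventory.
import csv

AI_API_DOMAINS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api.openai.com",
    "api.anthropic.com",
}
APPROVED_HOSTS = {"ml-build-01", "data-science-vm"}  # hosts allowed to call AI APIs

def find_unexpected_ai_calls(log_path):
    hits = []
    with open(log_path, newline="") as f:
        # Expects columns: timestamp, src_host, dest_domain
        for row in csv.DictReader(f):
            domain = row["dest_domain"].lower().rstrip(".")
            if domain in AI_API_DOMAINS and row["src_host"] not in APPROVED_HOSTS:
                hits.append(row)
    return hits

if __name__ == "__main__":
    for hit in find_unexpected_ai_calls("egress_proxy.csv"):
        print(f'{hit["timestamp"]} {hit["src_host"]} -> {hit["dest_domain"]}')
```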
Risk 4: Legitimate AI Platforms Are Being Used to Host Attacks
Google documented threat actors using the public sharing features of AI platforms, including Gemini, ChatGPT, Copilot, DeepSeek, and Grok, to host malicious instructions. The attack leverages the “ClickFix” social engineering technique, where users are tricked into copying and pasting malicious commands into their terminals.
Because the instructions appear on trusted AI platform domains, they bypass many network security filters. The campaign distributed ATOMIC, an information stealer targeting macOS environments that captures browser data, cryptocurrency wallets, and system files.
This is a particularly insidious vector because it exploits the trust that organisations and users place in established AI platforms. A malicious instruction hosted on a Google or OpenAI domain looks legitimate.
What to do now: Add shared-content URLs from AI platforms to web filtering review lists. Implement controls that restrict terminal paste operations originating from browser sources on managed endpoints. Update security awareness training to cover AI platform abuse scenarios.
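To make the web-filtering review concrete, the sketch below pulls AI-platform share links out of proxy logs for analyst review. The URL patterns are illustrative examples of public sharing paths, not a verified or exhaustive list, so treat them as a template for your own filtering rules.

```python
# Minimal sketch: surface AI-platform share links from web proxy logs for
# analyst review. The URL patterns below are illustrative examples of public
# sharing paths, not an exhaustive or verified list.
import re

SHARE_LINK_PATTERNS = [
    re.compile(r"https://gemini\.google\.com/share/\S+"),
    re.compile(r"https://chatgpt\.com/share/\S+"),
    re.compile(r"https://copilot\.microsoft\.com/shares/\S+"),
]

def extract_share_links(log_lines):
    hits = []
    for line in log_lines:
        for pattern in SHARE_LINK_PATTERNS:
            match = pattern.search(line)
            if match:
                hits.append(match.group(0))
    return hits

if __name__ == "__main__":
    sample = [
        "10.0.1.5 GET https://gemini.google.com/share/abc123 200",
        "10.0.1.7 GET https://example.com/docs 200",
    ]
    for url in extract_share_links(sample):
        print("Review shared-content URL:", url)
```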
Risk 5: The Underground AI Ecosystem Is Growing
The report documents Xanthorox, an underground toolkit that advertised itself as a custom AI for offensive cyber operations: autonomous malware generation, phishing campaign development, and ransomware creation. Investigation revealed it was not a custom model at all, but rather a wrapper around jailbroken commercial APIs and open-source Model Context Protocol (MCP) servers, including Gemini.
This matters because it demonstrates how accessible offensive AI capabilities are becoming. Threat actors do not need to build custom models. They can chain together jailbroken commercial services, stolen API keys, and open-source tooling to create offensive AI platforms. The barrier to entry for sophisticated attack tooling has dropped materially.
What to do now: Audit and secure all AI API keys across the organisation. Implement key rotation policies for any production AI integrations. Monitor underground forums and threat intelligence feeds for tools targeting your industry vertical or technology stack.
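A hedged example of the key-audit step: the sketch below scans a directory for strings matching common published AI API key prefix formats. The regexes are heuristics only, so every match is a lead to verify and rotate, not confirmation of a live key.

```python
# Minimal sketch: scan a directory of config and source files for strings that
# look like AI-service API keys. The regexes reflect commonly published prefix
# formats (e.g. "sk-" for OpenAI-style keys, "AIza" for Google API keys) and
# are used here as illustrative heuristics.
import re
from pathlib import Path

KEY_PATTERNS = {
    "openai_style": re.compile(r"sk-[A-Za-z0-9_-]{20,}"),
    "google_api_key": re.compile(r"AIza[0-9A-Za-z_-]{35}"),
}

def scan_for_keys(root):
    findings = []
    for path in Path(root).rglob("*"):
        # Skip directories and unusually large files to keep the scan fast.
        if not path.is_file() or path.stat().st_size > 1_000_000:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in KEY_PATTERNS.items():
            for match in pattern.finditer(text):
                findings.append((str(path), name, match.group(0)[:12] + "..."))
    return findings

if __name__ == "__main__":
    for file_path, key_type, preview in scan_for_keys("."):
        print(f"{file_path}: possible {key_type} ({preview})")
```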
The Australian Context
These five risks are not theoretical future threats. They are documented activities from state-backed actors in China, Iran, North Korea, and Russia, plus financially motivated cybercriminals operating globally.
For Australian organisations, three factors amplify the urgency. First, the ACSC has identified AI-enabled threats as an emerging priority area, and organisations that cannot demonstrate governance over AI-related attack vectors face increasing regulatory scrutiny. Second, Essential Eight controls around application control, user application hardening, and restricting administrative privileges need to be evaluated specifically against AI-integrated attack vectors. Third, Australia’s role in Five Eyes intelligence sharing means that Australian organisations are already on the radar of the state-backed actors documented in this report.
What Comes Next
The Google GTIG report concludes that while AI has not yet created “breakthrough capabilities” for attackers, the integration of AI across every phase of the attack lifecycle is accelerating. The organisations that update their threat models, detection capabilities, and staff training now will be materially better positioned than those that wait for the breakthrough moment.
Our team works with mid-market Australian organisations to assess AI-specific threat exposure, update detection and response capabilities, and build governance frameworks that account for the evolving AI threat landscape.
If your security posture has not been updated to account for AI-augmented threats, this report makes the case that the time to act is now.
CloudProInc is a Microsoft Partner and Wiz Security Integrator, working with Australian organisations on cloud, AI, and cybersecurity strategy.