The attack surface for mid-market organisations has expanded faster than most security strategies have adapted. AI is not just a tool for defenders. It is now an operational accelerator for attackers — and the techniques are not science fiction.

Google’s Threat Intelligence Group, Mandiant, and multiple cybersecurity vendors have documented a measurable shift in the threat landscape over the past twelve months. Attackers are using AI to generate convincing voice clones, craft personalised phishing at scale, poison training data, and build malware that evades traditional detection. The organisations most exposed are not the largest enterprises with dedicated security operations centres. They are mid-market organisations — typically 50 to 500 employees — that sit at the intersection of valuable data and constrained security resources.

Vishing Has Become Dangerously Effective

Voice phishing, or vishing, has existed for years. What has changed is the quality. AI-powered voice cloning can now produce synthetic speech that most listeners cannot distinguish from a real person’s voice, working from as little as a few seconds of sample audio.

Attackers are combining voice clones with open-source intelligence (OSINT) gathered by AI to execute highly targeted calls. They impersonate executives, vendors, or IT support staff, using contextually accurate details, such as project names, organisational jargon, and recent events, that make the call feel legitimate. The rapport-building phishing approach documented in Google’s GTIG AI Threat Tracker shows that AI enables multi-turn social engineering interactions that were previously too resource-intensive for attackers to sustain at scale.

For mid-market organisations without dedicated voice channel security or real-time call authentication, this represents a significant gap. The traditional advice to “verify the caller’s identity” is substantially harder to follow when the voice on the other end sounds exactly like the CFO.

AI-Generated Phishing Eliminates the Skill Barrier

Previously, high-quality phishing campaigns required native-language fluency, cultural awareness, and enough understanding of the target to craft a believable pretext. AI removes all three barriers.

Google documented APT42 using Gemini to research target biographies and craft engagement personas. UNC2970 used AI to profile defence sector employees, map salary bands for specific technical roles, and identify soft targets based on organisational structure. These are not hypothetical capabilities. They are documented operational uses by state-backed threat actors.

The downstream effect is that phishing lures now read like legitimate business communication. Grammar is flawless. Tone matches the organisation’s culture. References to real projects and colleagues increase credibility. Email filters that rely on content scoring must adapt to a reality where malicious emails cannot be separated from legitimate ones on content alone.

Mid-market organisations that rely on email security gateways and annual phishing awareness training as their primary defences are operating on assumptions that no longer hold.

Data Poisoning Is a Quiet Escalation

As more organisations adopt AI tools internally — for customer service, decision support, content generation, and operational forecasting — the training data and operational data those tools rely on become attack surfaces.

Data poisoning involves introducing deliberately corrupted or biased data into a dataset used to train or fine-tune an AI model. The effect can be subtle: a customer service model that consistently recommends a competitor’s product, a forecasting model that systematically underestimates risk in a specific category, or a security model that fails to flag a particular class of threat.

The challenge for mid-market organisations is detection. Data poisoning attacks are difficult to identify because the model continues to function — it just produces slightly wrong outputs that may not be noticed until the cumulative effect becomes significant.

Organisations using third-party AI tools should understand where the training data comes from, how it is validated, and whether there are mechanisms to detect downstream drift that could indicate poisoning.
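
One practical control is to monitor model outputs against a validated baseline and alert on unexplained shifts. The sketch below illustrates the idea using the population stability index over output categories; the function names and the 0.2 alert threshold are illustrative assumptions, not a reference to any specific monitoring product.

```python
# Minimal sketch: monitoring a model's output distribution for drift that
# could indicate data poisoning. Names and thresholds are illustrative.
import math
from collections import Counter

def category_distribution(outputs: list[str]) -> dict[str, float]:
    """Convert a list of model outputs (e.g. recommended products) into
    a relative frequency distribution."""
    counts = Counter(outputs)
    total = len(outputs)
    return {label: n / total for label, n in counts.items()}

def population_stability_index(baseline: dict[str, float],
                               current: dict[str, float],
                               floor: float = 1e-4) -> float:
    """Population Stability Index between a trusted baseline distribution
    and the current window. Higher values mean larger shifts."""
    psi = 0.0
    for label in set(baseline) | set(current):
        b = max(baseline.get(label, 0.0), floor)
        c = max(current.get(label, 0.0), floor)
        psi += (c - b) * math.log(c / b)
    return psi

# Usage: compare a recent window of outputs against a validated baseline.
baseline = category_distribution(["product_a"] * 70 + ["product_b"] * 30)
current = category_distribution(["product_a"] * 40 + ["product_b"] * 60)
if population_stability_index(baseline, current) > 0.2:  # common rule of thumb
    print("Output drift exceeds threshold; investigate recent training data.")
```

A drift alert does not prove poisoning, but it turns a silent failure mode into something a human reviews before the cumulative effect becomes significant.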

AI-Integrated Malware Changes the Detection Game

The HONESTCUE malware family documented by Google represents an evolution in attack tooling. Instead of carrying a static payload, the malware calls a commercial AI API at runtime to generate functional code — specifically, downloaders and in-memory execution payloads. The code is compiled and executed without touching disk, rendering traditional signature-based detection ineffective.

Separately, the COINBAIT phishing kit was built using an AI code generation platform, producing a sophisticated React application that impersonated a cryptocurrency exchange. The level of frontend complexity — state management, routing, analytics — would normally require experienced developers. AI made it accessible to actors with limited technical skill.

For mid-market organisations, both examples underscore the same point: the sophistication threshold for cyberattacks has dropped materially. Tools and techniques that previously required specialist knowledge are now available to a broader set of threat actors.

The Mid-Market Is Disproportionately Exposed

Large enterprises have layered defences — security operations centres, dedicated threat intelligence teams, behavioural analytics platforms, and specialised AI security governance. Mid-market organisations typically have fewer layers and rely more heavily on perimeter defences, endpoint protection, and staff awareness.

The AI-expanded attack surface creates three specific pressure points for mid-market organisations.

Speed mismatch. AI enables attackers to operate faster — faster reconnaissance, faster phishing, faster adaptation. Organisations with limited incident response capacity are more likely to be overwhelmed by the pace of an AI-augmented attack.

Detection gaps. In-memory malware execution, AI-generated phishing that passes content filters, and voice clones that defeat verbal verification all exploit gaps in standard mid-market security stacks.

Governance blind spots. Many mid-market organisations have adopted AI tools without assessing the attack surface those tools create. Shadow AI — employees using ChatGPT, Copilot, or other tools without IT oversight — introduces data exposure risks that traditional security frameworks do not cover.

Five Actions for Mid-Market Security Leaders

Implement voice verification protocols. Establish out-of-band verification for any request that involves financial transactions, credential changes, or sensitive data access — regardless of who the caller sounds like. Consider callback procedures to known numbers.
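
In practice, the policy reduces to a simple rule: sensitive actions are held until a callback succeeds to a number sourced from internal records, never one supplied during the call. The sketch below is a minimal illustration under that assumption; the directory structure, action names, and requester identifiers are hypothetical.

```python
# Minimal sketch of an out-of-band verification rule: sensitive requests
# are only actioned after a callback to a number already on file.
SENSITIVE_ACTIONS = {"payment", "credential_change", "data_access"}

# Numbers sourced from HR or vendor records, never from the call itself.
DIRECTORY_OF_RECORD = {
    "cfo@example.com": "+61-2-0000-0000",
}

def requires_callback(action: str) -> bool:
    return action in SENSITIVE_ACTIONS

def verification_number(requester_id: str) -> str | None:
    """Return the on-file number for the requester, or None if unknown.
    A caller must never be verified against a number they supply."""
    return DIRECTORY_OF_RECORD.get(requester_id)

def handle_request(requester_id: str, action: str) -> str:
    if not requires_callback(action):
        return "proceed"
    number = verification_number(requester_id)
    if number is None:
        return "escalate: requester not in directory of record"
    return f"hold: call back on {number} before actioning '{action}'"

print(handle_request("cfo@example.com", "payment"))
```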

Upgrade phishing defences beyond content scoring. Invest in behavioural email analysis that evaluates sender patterns, communication frequency, and contextual anomalies rather than relying on content-based filtering alone.
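
To make the distinction concrete, the sketch below scores an inbound message on sender behaviour rather than content quality. The history fields, weights, and threshold are illustrative assumptions, not any vendor’s scoring model.

```python
# Minimal sketch of behavioural scoring: flag deviations from a sender's
# historical pattern instead of judging message content.
from dataclasses import dataclass, field

@dataclass
class SenderHistory:
    messages_seen: int = 0
    usual_hours: set[int] = field(default_factory=set)  # hours the sender normally emails
    has_requested_payment: bool = False

def behavioural_score(history: SenderHistory, send_hour: int,
                      mentions_payment: bool) -> int:
    """Higher score = more anomalous. Weights are tuning decisions."""
    score = 0
    if history.messages_seen < 3:
        score += 2  # little or no prior relationship
    if history.usual_hours and send_hour not in history.usual_hours:
        score += 1  # outside the sender's normal window
    if mentions_payment and not history.has_requested_payment:
        score += 3  # first-ever financial request from this sender
    return score

history = SenderHistory(messages_seen=40, usual_hours={9, 10, 11})
if behavioural_score(history, send_hour=23, mentions_payment=True) >= 3:
    print("Quarantine for review: anomalous sender behaviour.")
```

A flawlessly written email scores zero on content-based suspicion but still stands out when it is the sender’s first financial request, sent outside their normal hours.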

Audit AI tool adoption and data exposure. Map every AI tool in use across the organisation, including shadow AI. Document what data flows into those tools and what outputs they produce. Evaluate each against your data classification policy.
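
A lightweight starting point is a machine-readable inventory that can be checked against policy automatically. The sketch below assumes a hypothetical inventory format and policy mapping purely for illustration.

```python
# Minimal sketch of an AI tool audit: record what data flows into each tool
# and flag combinations the classification policy does not permit.
AI_TOOL_INVENTORY = [
    {"tool": "ChatGPT (personal account)", "approved": False,
     "data_in": {"customer_pii", "internal_docs"}},
    {"tool": "Copilot (tenant-managed)", "approved": True,
     "data_in": {"internal_docs"}},
]

# Data categories permitted per approval status under a hypothetical policy.
ALLOWED = {
    True: {"public", "internal_docs"},
    False: {"public"},  # unapproved (shadow) tools: public data only
}

for entry in AI_TOOL_INVENTORY:
    violations = entry["data_in"] - ALLOWED[entry["approved"]]
    if violations:
        print(f"{entry['tool']}: disallowed data categories {sorted(violations)}")
```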

Update EDR for AI-era threats. Ensure endpoint detection can identify in-memory code execution, runtime compilation, and unexpected outbound API calls. If your current EDR solution relies primarily on signature-based detection, it is not equipped for AI-integrated malware.
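
The correlation logic is straightforward to express, even though production EDR telemetry is far richer. The sketch below is a minimal illustration of the pattern described above; the event fields and endpoint list are assumptions, not an actual EDR rule format.

```python
# Minimal sketch: correlate in-memory code execution with unexpected
# outbound calls to AI API endpoints, the combination seen in
# HONESTCUE-style tooling. Event fields are illustrative.
AI_API_DOMAINS = {"api.openai.com", "generativelanguage.googleapis.com"}

def flag_process(events: list[dict]) -> bool:
    """events: telemetry records for one process, each with an 'action'
    and an optional 'dest_domain'."""
    in_memory_exec = any(e["action"] == "in_memory_execution" for e in events)
    ai_api_call = any(e.get("dest_domain") in AI_API_DOMAINS
                      for e in events if e["action"] == "network_connect")
    return in_memory_exec and ai_api_call

telemetry = [
    {"action": "network_connect", "dest_domain": "api.openai.com"},
    {"action": "in_memory_execution"},
]
if flag_process(telemetry):
    print("Alert: runtime AI API use combined with in-memory execution.")
```

Neither signal is conclusive on its own; it is the combination, invisible to signature matching, that is worth an analyst’s attention.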

Include AI attack vectors in your next threat model update. If your last threat assessment did not account for AI-generated phishing, voice cloning, data poisoning, or AI-integrated malware, it is out of date. These are not emerging threats. They are current threats.

The Organisations That Adapt Early Will Be Better Positioned

The AI-expanded attack surface is not a future risk. It is the current operating environment. The organisations that update their security posture, detection capabilities, and governance frameworks now will be materially better positioned than those that wait for a breach to force the conversation.

Our team works with mid-market Australian organisations to assess AI-specific threat exposure, close detection gaps, and build proportionate security governance that accounts for the realities of AI-augmented attacks — without requiring a large enterprise security budget.

If your organisation has not assessed how AI has changed its threat landscape, this is a conversation worth having now.