“Organisations are not lacking awareness of risk; they’re lacking the conditions to manage it,” Rachel Jin, Chief Platform & Business Officer and Head of TrendAI, the enterprise AI security division of Trend Micro, said in March 2026 when releasing the company’s global AI study of 3,700 business and IT decision-makers. “When deployment is driven by competitive pressure rather than governance maturity, you create a situation where AI is embedded into critical systems without the controls needed to manage it safely.”

The numbers behind that statement are stark. The TrendAI study, conducted by SAPIO Research across 23 countries including Australia, found that 67% of decision-makers had felt pressured to approve AI projects despite security concerns. One in seven described those concerns as “extreme” but approved the deployment anyway to keep pace with competitors or internal demand. The pressure is real, and dismissing it as irrational misses the point: in many sectors, falling behind on AI is also a strategic risk. The problem is that the speed of approval is not matched by the speed of governance.

The numbers behind the governance gap

Only 38% of organisations have comprehensive AI policies in place, according to TrendAI. The rest are still drafting them or have no formal policy at all. Meanwhile, 41% cite unclear regulation or compliance standards as the barrier to building stronger governance. In practice, AI is being operationalised before the rules governing its use are established.

The speed mismatch is measurable. 57% of respondents said AI is advancing more quickly than they can secure it. More than half (55%) reported only moderate confidence in their understanding of the legal frameworks governing AI. For security teams, this creates a reactive posture: they are assessing AI systems after deployment decisions have already been made higher up the organisation.

These findings are not isolated. Deloitte’s 2026 State of AI in the Enterprise report, based on a separate survey of 3,235 director- to C-suite-level leaders across 24 countries, found the same pattern from a different angle. While nearly three-quarters (74%) of organisations plan to deploy agentic AI within two years, only 21% report having a mature governance model for those autonomous systems. Two large independent surveys, with nearly 7,000 leaders between them, are saying the same thing: AI deployment is outrunning the controls around it.

ModelOp’s 2026 AI Governance Benchmark Report, based on a smaller sample of 100 senior AI leaders, shows the operational shape of the gap. 67% of enterprises now report 101 to 250 proposed AI use cases, but 94% have fewer than 25 actually in production. More than two-thirds rely on manual or projected ROI tracking, even for systems that are already live. The portfolio is growing fast; the operational maturity is not keeping pace.

ModelOp CEO Dave Trier framed the shift in priorities: “Last year, the question was, ‘How fast can we deploy AI?’ This year, the questions are, ‘Which AI investments are delivering value? How do we move fast enough to sustain a competitive advantage? And how do we scale our AI delivery processes to maximise the value?’” The implicit answer to all three questions involves governance the organisation does not yet have.

Australian organisations are not immune

The Australian findings mirror the global trend with some local colour. 66% of Australian business decision-makers reported pressure to approve AI initiatives despite security or compliance risks. Almost one in five (19%) described those concerns as extreme. 68% said AI is advancing more quickly than they can secure it, well above the global figure of 57%.

Srujan Talakokkula, Managing Director ANZ & Pacific Islands of TrendAI, described the local pattern as a paradox of confidence and capability. “While many organisations across Australia and New Zealand report strong confidence in AI preparedness and strong recognition of AI’s role in combating AI-driven threats, there is a clear gap in understanding of legal frameworks governing AI and differing views on accountability and human oversight across both business and IT leadership,” he told media in March 2026.

That confidence-capability disconnect is where risk accumulates. 80% of Australian business decision-makers said they felt “prepared” for AI adoption, yet 44% lacked confidence in their understanding of AI legal and governance frameworks. Only 26% said they had completed formal, mandatory AI training. Andrew Philp, Field CISO ANZ at TrendAI, also flagged a 35% increase in publicly disclosed AI vulnerabilities over the past year. Australian organisations are not behind the global curve; they are exposed at roughly the same rate, with the same governance gaps.

Agentic AI: the observability problem

The TrendAI study examined attitudes toward agentic AI, autonomous systems that can take actions, call tools, and make decisions without continuous human direction. Confidence remains low. Less than half (44%) of respondents globally believe agentic AI will significantly improve cyber defence in the short term.

The concerns are specific. 44% identified AI agents accessing sensitive data as the biggest risk. 36% pointed to malicious prompts compromising security. A third cited the growing attack surface for cybercriminals. And 31% of business decision-makers admitted they lack observability or auditability over autonomous AI systems already in use. Roughly 40% of respondents supported AI “kill switch” mechanisms, the ability to shut systems down immediately if they fail or are compromised.

The Deloitte data tells a complementary story. 73% of leaders cite data privacy and security as their top AI risk. Half flag legal, intellectual property and regulatory compliance. Nearly half (46%) flag governance oversight specifically. Yet only 21% have a mature governance model for autonomous agents. The deployment intent is high, the risk awareness is high, the actual controls are not.

Consider the kind of agentic deployment that is becoming common. A financial services firm uses an AI agent to automatically capture meeting actions from video conferences, draft follow-up communications, and track participant commitments. An air carrier deploys an agent that helps customers rebook flights and reroute bags without human intervention. A manufacturer runs an agent that supports product development decisions involving cost and time-to-market trade-offs. Each of these is a real example Deloitte cites in its 2026 report. Each involves an AI system taking actions, accessing data, and making decisions that affect customers or business outcomes. And by Deloitte’s own data, 79% of the organisations making deployments like these do not have a mature governance model around the agents involved.

What should security and compliance teams do?

Stop treating AI governance as a future project. If 67% of decision-makers are approving AI under pressure now, governance frameworks need to be in place now, not after the next planning cycle. The minimum viable programme is an AI inventory, a data classification scheme for AI use, and an acceptable use policy with teeth.
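As a rough illustration of what that inventory looks like in practice, the sketch below shows the kind of record worth keeping for every AI system: a named owner, a data classification, whether the system can act autonomously, and when it was last reviewed. The field names are illustrative, not drawn from the TrendAI or Deloitte studies, and a spreadsheet with the same columns works just as well.

    from dataclasses import dataclass, field
    from datetime import date
    from enum import Enum

    class DataClassification(Enum):
        PUBLIC = "public"
        INTERNAL = "internal"
        CONFIDENTIAL = "confidential"
        RESTRICTED = "restricted"

    @dataclass
    class AIInventoryRecord:
        """One entry in an AI inventory; field names are illustrative only."""
        system_name: str
        business_owner: str                       # an accountable person, not a team alias
        vendor_or_model: str                      # internal model, SaaS copilot, vendor API, etc.
        data_classification: DataClassification   # highest class of data the system touches
        autonomous_actions: bool                  # can it act without a human in the loop?
        approved_use_cases: list[str] = field(default_factory=list)
        security_review_date: date | None = None  # None means not yet reviewed
        kill_switch_owner: str | None = None      # who can shut it down, if anyone

    # Example: register a meeting-assistant agent before it carries production traffic.
    record = AIInventoryRecord(
        system_name="meeting-actions-agent",
        business_owner="Head of Client Services",
        vendor_or_model="hosted LLM via vendor API",
        data_classification=DataClassification.CONFIDENTIAL,
        autonomous_actions=True,
    )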

Embed controls before deployment, not after. Security teams working reactively to top-down AI decisions will always be behind. The TrendAI study explicitly links reactive security postures to increased shadow AI use: when formal channels are slow, staff find informal ones. The answer is to build security review into the AI approval process from the start, not bolt it on after go-live.
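One way to make that concrete, sketched below under the assumption of a scripted release pipeline, is to fail the deployment when required controls are missing rather than ship first and review later. The control names are hypothetical placeholders; substitute whatever evidence your approval process actually requires.

    REQUIRED_CONTROLS = (
        "security_review_completed",     # signed off by the security team
        "data_classification_assigned",  # the data the system touches has been classified
        "acceptable_use_policy_ack",     # the owning team has acknowledged the AI policy
        "audit_logging_enabled",         # runtime logging is switched on in production config
    )

    def approve_for_production(evidence: dict[str, bool]) -> None:
        """Raise if any required control is missing; otherwise the deployment may proceed."""
        missing = [c for c in REQUIRED_CONTROLS if not evidence.get(c, False)]
        if missing:
            # Fail closed: stop here instead of shipping now and reviewing later.
            raise RuntimeError(f"AI deployment blocked, missing controls: {missing}")

    # Example: blocked because the security review never happened.
    try:
        approve_for_production({
            "data_classification_assigned": True,
            "acceptable_use_policy_ack": True,
            "audit_logging_enabled": True,
        })
    except RuntimeError as blocked:
        print(blocked)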

Demand observability for agentic systems. If your organisation is deploying AI agents with tool-calling, memory, or autonomous decision-making capabilities, insist on runtime logging, per-action audit trails, and defined escalation triggers before production. A policy PDF is not observability.
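What per-action auditability can look like, again as a sketch rather than a reference implementation, is a structured log entry for every action an agent proposes plus a rule for which actions pause for a human. The tool names and escalation rule below are hypothetical, not taken from any of the deployments described above.

    import json
    import logging
    from datetime import datetime, timezone

    logging.basicConfig(level=logging.INFO, format="%(message)s")
    audit_log = logging.getLogger("agent.audit")

    # Hypothetical escalation rule: actions using these tools wait for human sign-off.
    ESCALATE_ON = {"issue_refund", "send_customer_email"}

    def audited_action(agent_id: str, tool: str, arguments: dict) -> bool:
        """Log every proposed agent action; return True if it may proceed automatically."""
        needs_human = tool in ESCALATE_ON
        audit_log.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "agent_id": agent_id,
            "tool": tool,
            "arguments": arguments,
            "escalated_to_human": needs_human,
        }))
        return not needs_human

    # Example: a rebooking agent may look up a flight on its own,
    # but refunding a fare is escalated, and both actions leave an audit trail.
    audited_action("rebooking-agent-01", "lookup_flight", {"pnr": "ABC123"})
    audited_action("rebooking-agent-01", "issue_refund", {"pnr": "ABC123", "amount": 420})

The same log stream is what makes a kill switch meaningful: an operator can only shut down behaviour that is visible in the first place.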

Use the regulatory uncertainty as an argument, not an excuse. The 41% who cite unclear regulation as a barrier are framing governance as a compliance checkbox. Reframe it as risk management. Whether or not a specific regulation requires an AI inventory or impact assessment today, the organisation will need one when it does. Building it now is cheaper than building it under regulatory pressure.

Acknowledge that competitive pressure is genuine. The pressure described in the TrendAI study is not an irrational rush. In many sectors, organisations face real consequences for falling behind on AI adoption. The right response is not to ignore the pressure but to channel it through governance that can move at speed. Slow, formal approval processes are part of why staff route around them. Faster governance with real teeth beats slower governance that gets ignored.

The bottom line

AI value is increasingly constrained by governance, not algorithms. Boards will fund the next wave of AI experiments only if they can see evidence of control, auditability, and compliance readiness. Organisations that treat governance as a drag on innovation will find it becomes the bottleneck when regulators, auditors, or customers start asking questions they cannot answer.

For Australian organisations, that bottleneck is no longer theoretical. The Privacy and Other Legislation Amendment Act 2024 brings an automated decision-making transparency requirement into force on 10 December 2026, with civil penalties for serious breaches of up to AUD 50 million. NSW has just passed the Work Health and Safety Amendment (Digital Work Systems) Act 2026, the first Australian law to put AI inside the WHS framework. Both regimes apply across the full spectrum of automated decision-making, not just agentic systems, but agentic AI is where the governance gap is sharpest because the systems can act, escalate and combine outputs without continuous human direction. Both deadlines hit the same operational AI systems the TrendAI and Deloitte studies describe. Eight months from now, organisations that have not closed the governance gap will be operating under regulatory exposure they currently do not have. The window to build the inventory, the policies and the oversight is open. It will not stay open much longer.

Sources

  • TrendAI/Trend Micro, “Organizations Overlook AI Risk as Governance Fails to Keep Up,” 25 March 2026 (global study of 3,700 decision-makers across 23 countries, conducted by SAPIO Research). prnewswire.com

  • TrendAI ANZ press release, 26 March 2026. trendmicro.com

  • Deloitte AI Institute, “State of AI in the Enterprise 2026: The Untapped Edge,” 21 January 2026 (survey of 3,235 leaders across 24 countries, conducted August–September 2025). deloitte.com

  • Unite.AI, “State of AI in the Enterprise 2026: Deloitte Maps the Untapped Edge,” January 2026. unite.ai

  • ModelOp, “2026 AI Governance Benchmark Report: The AI Portfolio Explosion,” 11 March 2026 (survey of 100 senior AI leaders). globenewswire.com

  • SecurityBrief Australia, “TrendAI flags agentic AI risks in enterprise deployment,” 26 March 2026. securitybrief.com.au

  • IT Brief NZ, “Australian boards pressured to rush AI despite risks,” 26 March 2026. itbrief.co.nz

  • Deloitte Global, “State of AI in the Enterprise 2026” (full report). deloitte.com