The AI Oversight Gap
- Katarzyna Celińska

- Nov 9
- 2 min read
When we look at the KPMG & University of Melbourne “Trust, Attitudes and Use of AI 2025” report together with the IBM Cost of a Data Breach 2025 – AI Oversight Gap, a clear picture emerges: The biggest risk in AI is not the technology itself — it’s the lack of understanding and oversight behind it.
The KPMG global study paints a striking picture of how people and organizations are using AI:
➡️ 66% of people regularly use AI — but 61% have no AI training, and half admit they don’t understand how AI works.
➡️ 83% of respondents want to learn more about AI, showing high interest but low capability.
➡️ Only 43% believe current AI laws and regulations are adequate, while 70% demand stronger oversight.
➡️ 79% are concerned about cybersecurity, misinformation, and data privacy risks.
➡️ In parallel, IBM’s report shows that organizations with limited AI governance experience, on average, 21% higher breach costs due to shadow AI use — unapproved tools, unsecured models, and hallucinating outputs left unchecked.

Photo: https://pl.freepik.com/
The link between AI literacy gaps and security incidents is undeniable.
Shadow AI
The study reveals that in many organizations, AI adoption has outpaced security governance.
➡️ 58% of employees now use AI at work regularly.
➡️ Half use public AI tools like ChatGPT or Gemini for sensitive data — including financial or customer information — often in violation of corporate policy.
➡️ Two-thirds rely on AI outputs without verifying accuracy, and over half have made errors because of AI-generated content.
When I read the AI Trust Report 2025, I see the same patterns as in Cost of a Data Breach 2025:
Lack of education → poor decisions → shadow AI → incidents. It’s a straight line from ignorance to impact.
The Deloitte case (Deloitte was forced to refund AUD 290,000 to the Australian government after delivering a report generated by AI that hallucinated data and fabricated references) is another example of this problem. It’s not about AI being “wrong” — it’s about people using it without proper controls or critical review.
Just like in cybersecurity, you can’t outsource responsibility to a machine.
What worries me most is the explosion of self-proclaimed “AI experts.”
Just like in previous waves of hype — whistleblowing, cybersecurity, privacy, compliance, audit, or fraud — we suddenly have “AI specialists” with little or no technical grounding. They talk about AI as if it’s almost conscious (seriously?), while being unable to explain how transformers, embeddings, or prompt injections actually work. So let’s stop producing “AI experts” who don’t understand technology.
Author: Sebastian Burgemejster







Comments