Stanton Chase
The Hidden Risk in AI Adoption

May 2026
Summary:

Stanton Chase’s Q1 2026 Executive Survey Panel found that 66% of respondents across all sectors reported no AI incidents, but only 39% of technology sector respondents could say the same, far below every other industry in the survey. The technology sector is further along in AI adoption, and its incident rate is a preview of what other sectors are about to experience. As more sectors move AI out of pilots and into production, they will face similar incidents, and many will discover they have no governance framework, no clear owner, and no practiced response.

Two-thirds of the executives in our Q1 Executive Survey Panel have never experienced a significant AI incident. That may be the most concerning finding in the entire report.

We have previously argued that cyber resilience is a governance and leadership mandate, that AI has changed the nature of cyber risk, and that leadership capability is itself a form of resilience infrastructure. Our survey data now puts numbers behind those arguments, and the picture that emerges is not one of safety. It is one of exposure that has not yet been triggered. 

Three Conditions That Precede a Crisis

A cyber incident does not begin with a piece of malware. It begins with the organizational conditions that allow a vulnerability to go unnoticed, unowned, and unaddressed until it is too late. Our survey data shows that three of those conditions are present in the majority of the organizations we surveyed. 

1. Fragmented Ownership: Nobody Owns the Risk

Among the executives we surveyed, 29% say the CEO owns AI transformation outcomes. 20% assign it to the CTO or CIO. 17% say ownership is still unclear or under discussion, and in 2% of organizations the CHRO owns AI outcomes. At the same time, 56% say their C-suite lacks AI literacy and 62% say their organization lacks vision beyond tactical AI implementation. 

This matters for cybersecurity because ownership determines governance. When nobody is clearly accountable for how AI is being used across the organization, nobody is accountable for the risk that use creates. The IBM Cost of a Data Breach Report 2025 found that 63% of breached organizations had no AI governance policy in place, and 97% of organizations that experienced an AI-related breach lacked proper AI access controls. Fragmented ownership at the top produces a governance vacuum, and that vacuum is where breaches take hold. 

2. Uninformed Employees: Operating Without Guidance

Our survey asked executives how transparent their organizations have been with employees about which jobs or roles AI might eliminate or change. 43% said they had not communicated at all. An additional 8% described their communication as intentionally vague. 

That question is about workforce planning, not cybersecurity. But it points to something that matters for both. If an organization has not had the basic conversation with its employees about whether their roles are safe, it is fair to ask what other conversations it has not had. Has it told employees which AI tools are approved and which are not? Has it set boundaries around what data can be shared with AI applications? Has it trained anyone to evaluate whether an AI output is reliable before acting on it? 

When employees receive no guidance, they fill the gap themselves. They adopt AI tools their organization has not approved, share data with applications nobody in IT has vetted, and trust outputs nobody has taught them to question. IBM calls this shadow AI, and its 2025 report found it was already a factor in 20% of all breaches, adding an average of $670,000 to breach costs and causing operational disruption in 31% of cases. Only 37% of the organizations IBM studied had any process for detecting it. 

The vulnerability is not the AI tool. It is the silence around it. When organizations are not even telling employees whether their jobs are safe, the likelihood that they have built a clear, communicated framework for responsible AI use is also low. 

3. Untested Confidence: Belief Without Preparation

When we asked executives about the AI risks that concern them most, 35% pointed to confidential data being shared with AI tools. 19% cited the possibility of decisions being made on incorrect AI outputs, and 13% said they were worried about employees using AI without oversight. Executives can see where the exposure is. They are naming the right risks. 

The problem is that 66% of them have not yet encountered any of those risks in practice. Their confidence in AI is growing: 59% say they are more confident than they were a year ago. But that confidence has been built during a period when most organizations are still running AI in low-risk, low-stakes applications. The risks executives identify (data leaks, flawed outputs, and ungoverned employee use) are risks that surface as AI moves into more embedded, higher-stakes operations. For most organizations, that transition has not happened yet. 

For some, it has. Among technology sector respondents in our survey, only 39% reported no incidents, far below every other industry. The technology sector is further along in deploying AI into core operations, and the problems it is encountering are the same ones other industries are still only anticipating. As more sectors move AI out of pilots and into production, they will face similar incidents, and many will discover in that moment that they have no governance framework, no clear owner, and no practiced response. That is the real cost of untested confidence. It is not that executives are wrong to believe in AI. It is that believing in AI without building the infrastructure to govern it means the first serious incident will also be the first time the organization learns it was not prepared. 

What This Means for Boards and Leadership Teams

These three conditions (fragmented ownership, uninformed employees, and untested confidence) do not cause cyber incidents on their own. But together, they create the organizational environment in which a single incident becomes a full-blown crisis. An employee shares confidential data with an unapproved AI tool. Nobody in the C-suite has clear authority to respond. The leadership team has never rehearsed what to do. And the board, which assumed the absence of incidents meant the presence of preparedness, discovers too late that those are not the same thing. 

The 66% of organizations that have experienced no AI incidents may be well governed and well prepared. Or they may simply not have been tested yet. If your organization is among them, the question worth asking is not whether your technology is ready for an incident. It is whether your people, your governance, and your leadership team are. 

About the Authors

Anette Roll Richardsen is a Partner at Stanton Chase Oslo, with more than 20 years of leadership experience as CFO, CEO, Director of Cybersecurity, and Sales Director, and 25 years across cybersecurity, sales, IT, and finance. Anette is the President of Women in Cyber Security (WiCyS) Norway, Treasurer of ISACA’s Norway chapter, and a member of the Board of Directors at the Center for Cyber and Information Security (CCIS). She has been recognized among Europe’s 50 Most Influential Women in Cybersecurity by SC Media. 

Jan-Bart Smits is a Managing Partner at Stanton Chase Amsterdam. He began his career in executive search in 1990. At Stanton Chase, he has held several leadership roles, including Chair of the Board, Global Sector Leader for Technology, and Global Sector Leader for Professional Services. He currently serves as Stanton Chase’s Global Subsector Leader for the Semiconductor industry. He holds an M.Sc. in Astrophysics from Leiden University in the Netherlands.   
