Shadow AI in Delivery: Risk, reality and the right path

Delivery frameworks · Leadership · Systems & setup · 23 Feb 2026
[Illustration: a large cruise ship labelled “Enterprise AI Transformation” turning slowly while a smaller speedboat labelled “Delivery Lead” races ahead, capturing the tension between enterprise AI planning and fast-paced delivery deadlines.]

Right now, organisations are trying to turn a cruise ship. Not because they are slow. Not because they are resistant. But because when an organisation is responsible for client data, contractual obligations, commercial exposure and internal risk, it cannot move at the speed of hype. And if you sit inside that organisation, you live that tension every day.

Leadership teams are fielding a steady stream of requests to “just try this AI tool”. IT is assessing security implications and compatibility with existing systems. Procurement needs commercial clarity before negotiations even begin. Governance frameworks cannot be finalised until a tool is selected, configured and its usage boundaries properly defined. Then comes documentation, training and change communication. You are expected to make it work within whatever guardrails exist.

It is slow. Deliberate. Occasionally frustrating. But necessary.

Enterprise environments, whether global corporations or digital agencies handling sensitive client data, carry weight. When they change direction, they do it carefully.

And while the ship turns, you are still delivering.

Deadlines have not paused. Clients have not relaxed expectations. Your workload has not become lighter simply because AI policy is under review.

So the real question is not what your organisation is doing. It is what you are doing.

 

Three types of Delivery Leader

Most delivery leaders instinctively fall into one of three patterns.

 
1. The Policy Purist

You have decided not to engage with AI until your organisation releases an approved solution. You are waiting for the official tooling, the governance sign-off, the internal training.

This approach is principled. It protects you from accidental exposure and keeps you comfortably aligned with policy.

However, there is a quiet trade-off. While you are waiting, others are building literacy. They are learning how prompts influence output, where AI adds value and where it confidently invents nonsense. When enterprise tools do arrive, those who have developed that understanding will adapt more quickly.

Over time, the delivery function becomes reactive rather than influential. Compliance is preserved, but momentum quietly erodes.

 

2. The Shadow AI Operator

You are experimenting. Not recklessly. Not carelessly. Simply pragmatically.

You might use AI to help structure a client update, refine the tone or sense-check a complex explanation. You intend to sanitise the data first. But without structured, repeatable exports, good intentions collapse under time pressure. You copy and paste. You press enter. Only afterwards do you realise an extra column or sensitive detail came along for the ride.

This is rarely intentional. It is what happens when intelligent professionals operate under time pressure.

But time pressure does not remove responsibility. It works in the short term, until it doesn't. A small oversight becomes a serious exposure, and when that happens, trust evaporates quickly.

 

3. The Strategic Delivery Architect

There is a third type: one who accepts that AI is here, respects that governance takes time, and focuses on what can be controlled in the meantime.

Having worked through this properly, I believe there is a structured path that helps you tackle your biggest time drains in a way that is aligned with governance, protective of client data and genuinely effective.

The advantage comes from respecting sequence, doing the unglamorous work first so that AI becomes an amplifier, not a risk.

 

A structured path to using AI properly

This model is for the Strategic Delivery Architect. The delivery lead who wants to use AI properly, within organisational boundaries, and be able to defend that decision if challenged.

 
Step 1: Discover where your time actually goes

Before introducing any new technology, you need visibility.

Most delivery leaders underestimate how much of their time disappears into repeatable admin. The first step is to audit your workload over a defined period, ideally two weeks.

Ask yourself:

  • Which tasks do I complete every week without fail?
  • How long does each one genuinely take?
  • Which outputs require manual copying, pasting or reformatting?
  • Where do I repeatedly rebuild the same narrative from slightly different inputs?
  • How often do I fix incomplete or inconsistent data before reporting?
  • Which of these tasks are rules-based and predictable?
  • Which require genuine judgement and interpretation?
  • What level of data sensitivity is involved in each task?

If you can capture this in a structured format and analyse it properly, you will be in a far stronger position to decide what should be automated, what should remain human, and where AI can genuinely help.
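As a rough sketch of that analysis, the audit questions above can be turned into a simple scoring pass. The field names and weights here are illustrative assumptions, not a prescribed schema: time cost dominates, rules-based tasks score higher, and highly sensitive tasks are penalised because they need extra controls before any automation.

```python
# Minimal sketch: rank audited tasks by automation suitability.
# Field names (hours_per_week, rules_based, sensitivity) are
# illustrative assumptions, not a prescribed schema.

def automation_score(task):
    """Higher score = better candidate for deterministic automation."""
    score = task["hours_per_week"]      # time saved matters most
    if task["rules_based"]:
        score *= 2                      # predictable tasks automate cleanly
    if task["sensitivity"] == "high":
        score *= 0.5                    # sensitive data needs extra controls first
    return score

tasks = [
    {"name": "Weekly status report",   "hours_per_week": 3, "rules_based": True,  "sensitivity": "low"},
    {"name": "Stakeholder escalation", "hours_per_week": 2, "rules_based": False, "sensitivity": "high"},
]

ranked = sorted(tasks, key=automation_score, reverse=True)
for t in ranked:
    print(f'{t["name"]}: {automation_score(t):.1f}')
```

Even a crude ranking like this makes the decision explicit: the top of the list goes to deterministic automation, the bottom stays human.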

To make this easier, I’ve built a simple Google Sheet template you can download and use to document your audit and analyse the results. It helps you quantify where effort is leaking and highlights what is genuinely suitable for automation and AI support.

You cannot optimise what you have not measured.

 

Step 2: Stabilise and cleanse your data

Once you understand where the time drains are, resist the urge to jump straight to AI.

If your underlying data is inconsistent, incomplete or loosely structured, any layer placed on top of it will magnify the instability. Automation will produce unreliable outputs. AI will generate polished summaries based on flawed inputs.

To stabilise your system, focus on fundamentals like:

  • Standardising status definitions across teams
  • Removing duplicate or redundant custom fields
  • Replacing free-text reporting fields with controlled dropdowns
  • Aligning issue hierarchy structures
  • Enforcing consistent naming conventions
  • Cleaning incomplete or stale records

The goal is not to tick every possible box. It is to remove ambiguity so your reporting becomes reliable. Without this foundation, everything else becomes fragile. This step is about reliability; leverage comes next.
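Several of the fundamentals above can be checked mechanically. As a minimal sketch (the controlled status vocabulary, field names and 90-day staleness threshold are assumptions for illustration), a data-quality pass over one record might look like:

```python
# Sketch of a data-quality check: flag non-standard statuses,
# missing owners and stale records before any automation or AI
# is layered on top. Thresholds and field names are assumptions.
from datetime import date, timedelta

ALLOWED_STATUSES = {"To Do", "In Progress", "Done"}  # assumed controlled vocabulary
STALE_AFTER = timedelta(days=90)

def validate_record(record, today=date(2026, 2, 23)):
    """Return a list of data-quality problems for one delivery record."""
    problems = []
    if record.get("status") not in ALLOWED_STATUSES:
        problems.append("non-standard status")
    if not record.get("owner"):
        problems.append("missing owner")
    if today - record["updated"] > STALE_AFTER:
        problems.append("stale record")
    return problems

record = {"status": "WIP", "owner": "", "updated": date(2025, 9, 1)}
print(validate_record(record))
```

Running a check like this across a whole backlog gives you a concrete cleanup list instead of a vague sense that the data is “a bit messy”.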

 

Step 3: Maximise deterministic automation

Before introducing AI, exhaust what your existing delivery tools can already do. Once your data is structured, automate what should never require judgement.

Most delivery teams dramatically underuse the rule engines already built into tools like Jira or Azure DevOps. In practice, this often means configuring your system to handle predictable events automatically, such as:

  • Triggering reminders for overdue work
  • Enforcing mandatory fields through validation rules
  • Scheduling recurring stakeholder reports
  • Auto-assigning issues based on team or component
  • Creating one-click exports of anonymised, AI-ready reporting views

Deterministic automation behaves predictably. It performs the same action every time the same conditions are met. It reduces human error and enforces discipline within your system.

When configured properly, this layer removes a significant proportion of repetitive effort. It also protects the integrity of your data, ensuring that what flows upward into reporting and analysis is consistent and dependable.

Often, this step alone eliminates much of the frustration that pushes people toward AI in the first place.

At this point, you are building a system that can produce clean, anonymised extracts quickly and repeatedly. Structured outputs that are safe to use for higher-level synthesis.
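To make the idea of an anonymised extract concrete, here is a minimal sketch. The issue fields are invented for illustration; the point is the shape of the transformation: aggregate operational signals go out, names and free text never do.

```python
# Sketch of an anonymised, AI-ready export: keep operational signals
# (status counts), drop assignees, summaries and client identifiers.
# The issue fields here are illustrative assumptions.
from collections import Counter

def anonymised_view(issues):
    """Reduce raw issues to counts per status: safe, structured signals."""
    return dict(Counter(issue["status"] for issue in issues))

raw_issues = [
    {"key": "PROJ-1", "status": "Done",        "assignee": "Alice", "summary": "Client invoice fix"},
    {"key": "PROJ-2", "status": "In Progress", "assignee": "Bob",   "summary": "Payment gateway"},
    {"key": "PROJ-3", "status": "In Progress", "assignee": "Alice", "summary": "Refund flow"},
]

print(anonymised_view(raw_issues))  # names and summaries never leave the system
```

Because the export is deterministic code rather than manual copy-paste, you know exactly which fields leave your system every single time.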

 

Step 4: Introduce AI as an assistant, not a shortcut

Only once your data is structured and your automation is doing the heavy lifting should AI enter the equation.

At this stage, AI sits on top of clean, non-sensitive or deliberately anonymised information. It supports synthesis and communication. It helps you refine updates, sense-check explanations and spot emerging themes in already structured summaries.

It does not replace your judgement. It does not override governance obligations. It does not receive raw confidential data.

Used this way, AI becomes an assistant operating within clearly defined boundaries. It enhances thinking without compromising control.

The temptation is to reverse this sequence and use AI as a patch for messy systems. But AI amplifies whatever foundation you provide. If your inputs are chaotic, it will amplify chaos. If your data is structured and sanitised, it will amplify clarity.
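The boundary described above can be enforced in code rather than left to discipline. As a hedged sketch (the prompt wording and summary shape are assumptions), the AI only ever sees the anonymised aggregates produced upstream:

```python
# Sketch: compose an AI prompt from an already-anonymised status summary.
# Only aggregate figures go in; raw tickets, names and client data never do.
# The wording and summary shape are illustrative assumptions.

def build_prompt(summary: dict) -> str:
    """Build a synthesis prompt from sanitised, aggregate-only inputs."""
    lines = [f"- {status}: {count} tickets" for status, count in sorted(summary.items())]
    return (
        "You are helping draft a weekly delivery update.\n"
        "Based only on these aggregate figures, suggest two or three themes:\n"
        + "\n".join(lines)
    )

print(build_prompt({"Done": 12, "In Progress": 5, "Blocked": 2}))
```

If sensitive data can never reach the prompt-building function, the governance boundary holds by construction, not by good intentions under deadline pressure.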

 

The real decision

AI is going to enter your delivery environment one way or another. It might arrive through a polished enterprise rollout with a slide deck and a steering committee. Or it might slip in during a deadline crunch, when you think, “I just need something to get through this call”. The real question for you is not whether AI appears. It is whether you introduce it deliberately or allow it to seep in accidentally.

If you have stabilised your data, tightened your workflows and created controlled reporting exports, you do not need to sit still while the cruise ship turns. You can begin using AI now, carefully and confidently, assuming you are operating within your organisation’s rules. Structure is what makes that possible. Without it, experimentation feels risky. With it, experimentation becomes controlled.

A deliberately designed one-click export of anonymised delivery data changes the equation. Clean status summaries, ticket counts and trend data are operational signals, not confidential disclosures. When you know exactly what is leaving your system, and why, the fear of accidental data exposure disappears. You are no longer pasting raw issue logs into a chatbot. You are sharing structured, sanitised outputs that were designed for analysis.

That is the difference between shadow AI and structured AI use. One is reactive and slightly nervous. The other is intentional and calm.

You are not cutting corners. You are building guardrails. You are not waiting for enterprise tooling to rescue you. You are designing your delivery environment so that when AI is used, it is used properly. And when the official AI stack finally arrives, you will not be scrambling to catch up. You will already have the discipline, the exports and the clarity in place.

Speed might look impressive in the short term. Structure is what compounds.

Want to hear when new stuff drops?

Courses, tips and tools. I’ll only email when it’s genuinely useful. 

Pop your name on the list and I’ll keep you in the loop.