Background
AI is now a competitive imperative for organisations. Microsoft estimates that generative AI is already saving UK employees 12 billion hours of work annually, equating to £207 billion in economic value. And adoption is accelerating; in McKinsey’s most recent State of AI report, nearly 90% of businesses reported using AI in at least one area.
But rapid adoption brings new risks. Employees often gravitate toward easy‑to‑use public tools like ChatGPT or Claude that introduce security concerns for businesses, resulting in widespread “shadow AI”. Fifty‑nine percent of employees in the US use unapproved AI tools at work, and 75% of them upload sensitive company or customer data in the process. This is costing businesses dearly: in 2025, 20% of all data breaches were caused by shadow AI, and companies with high levels of shadow AI incurred an extra $670k in average breach costs, according to IBM.
So, how can businesses empower employees with AI while preventing unsafe or non‑compliant usage?
Challenge
Our client - one of the world’s largest FMCG organisations with a global workforce - has made a strategic investment in AI to enhance market responsiveness, efficiency, and long‑term competitive advantage. This includes embedding AI directly into core business tools and enabling employees to use conversational AI solutions such as Microsoft Copilot.
The business took a responsible and holistic approach to implementation:
Rolled out sanctioned tools including Microsoft Copilot and a custom GPT securely connected to company data,
Defined clear AI usage policies,
Established guardrails for secure behaviour.
However, to reduce shadow AI and unlock the value of these investments, the business needed to ensure:
Awareness of approved tools,
Ease of use and familiarity to drive adoption,
Adherence to policy and prevention of risky actions.
Solution
Charlton House teamed up with the client’s WalkMe Centre of Excellence to deliver a series of lightweight, targeted interventions that guided safe AI use across the organisation.
Redirecting employees away from shadow AI
The CoE used WalkMe to detect when users visited non‑approved AI sites such as ChatGPT, Claude, and Perplexity, and created a launcher to redirect that traffic. When a user lands on one of these sites, a message notifies them that the tool is not approved for business use and provides direct links to the sanctioned alternatives. This creates an immediate and scalable safeguard against shadow AI behaviour.
Enabling correct and compliant use of approved tools
To complement the redirection strategy, Charlton House built in‑context experiences within Microsoft Copilot to promote confident adoption of the approved tools - while reinforcing policy.
When Copilot launched its Custom Agents feature, a WalkMe SmartTip (similar to a tooltip) drew attention to the new capability directly within the Copilot chat interface. Users could then launch a Smart Walk‑Thru providing a step‑by‑step guide to creating a custom agent with ease.
While the company encouraged individuals to create custom agents, its AI policy prohibits sharing them due to privacy and security concerns. To enforce this policy, another SmartTip was placed next to the Share button, notifying users that sharing is not permitted. An invisible launcher was also placed over the button, blocking the action entirely - even if users attempted to bypass the message.
A similar safeguard was put in place for the company’s custom GPT to stop users from uploading documents to it - preventing confidential files from being stored insecurely or used to train underlying models.
Results
Together, these solutions created a secure, scalable, and user‑friendly AI ecosystem - supporting the client’s vision for responsible AI adoption.
Shadow AI reduction
The ChatGPT WalkMe launcher reached 73,000+ unique users.
80% of those were successfully redirected to sanctioned tools - significantly reducing unapproved AI usage and associated risk.
Safe use of Copilot Custom Agents
2,870 users completed the Smart Walk‑Thru to build their own agent.
377 attempted to share agents and were safely blocked - preventing a major compliance risk.
Protected use of the custom GPT
Over 40,000 employees were made aware of the tool.
1,835 attempted document uploads were blocked, preventing potential data exposure.
These interventions enabled the client to:
Empower staff with practical, high‑value AI tools
Drive adoption of sanctioned solutions over shadow AI
Embed responsible AI use at scale
Avoid significant custom development work, thanks to lightweight WalkMe implementations built and deployed in weeks.
By combining technology, behavioural guidance, and embedded guardrails, the organisation now benefits from widespread AI enablement with reduced risk and increased confidence.
73k
users reached
80%
redirected to approved tools
2,212
unapproved actions prevented