There is a pattern emerging across L&D teams experimenting with AI, and it is worth naming clearly.

They can list loads of tools on the market. However, when you ask what has actually changed in how their people learn, the answer is rarely convincing.

That gap is wider than most organisations admit. AI in L&D has genuine potential, yet most teams are layering new technology onto learning design that was already broken. The result is faster delivery of content that still does not change behaviour or add value.

Although the tools are often treated as the problem, they rarely are. The real issue is the questions being asked before anything gets built.

Why most rollouts underdeliver: an AI in L&D perspective

Most L&D teams believe in AI. However, believing in something and designing for it are very different things.

What we see consistently is this: organisations start with the right intentions. Then deadlines hit, stakeholder pressure builds, and the elements that would make the learning actually work get stripped out first. As a result, the final product looks good in a demo, passes a sign-off review, and gets ignored the moment it goes live.

Some organisations, though, get it right. They protect what matters: psychological safety, space to think, a clear link to the real business problem. They treat those things as non-negotiable, not nice-to-haves.

So ask yourself honestly: does your current design process protect those things? Or does it quietly design them out under the pressure of delivery?

What the evidence actually says

The evidence on AI in L&D is not complicated. It does, however, contradict the way most organisations commission learning.

Behaviour change requires three things: relevance to the person's real role, space to practise safely, and feedback that is timely and specific. Although these are well understood in learning science, most corporate programmes struggle to deliver even one of them consistently.

The problem is not awareness. Every Head of L&D I speak to knows this already. The challenge is that most design processes were never built to deliver it; they were built to deliver content at scale, on time, within budget.

As a result, teams measure what is easy: completion rates, satisfaction scores, time on platform. These feel safe. They are, however, poor indicators of whether anything has actually changed. Rewriting your success criteria is therefore the most important design decision you can make.

The three things that actually work

When AI in L&D is working well, you do not need the dashboard to tell you. You can see it in how people work and feel.

Managers hold different conversations. Teams approach new challenges with more confidence. Problems surface earlier, because people feel safe enough to raise them before they escalate. Furthermore, learning stops feeling like a separate activity and starts feeling like part of the job itself.

That is the outcome worth designing for. Not completion rates or satisfaction scores, but actual behaviour change, visible in the work itself.

In my experience, three things consistently make the difference. First, genuine relevance, not to a job description, but to the challenges someone faces today. Second, structured space to practise, not just knowledge transfer. Third, a clear connection between the learning and real work, so that what is learned does not stay in a training bubble. When you ask how to change Monday morning behaviour, the brief changes entirely.

How I approach this differently

A straightforward reframe can change how your team approaches AI in L&D entirely.

Instead of starting with content, start with the genuine problem facing the business and its people. Before writing a single learning objective, ask the business leader one question: what does this person need to do differently? Not know. Do.

That question surfaces two things quickly. First, the performance gap is usually smaller and more specific than the original brief suggested. Second, it often turns out not to be a training problem at all. It is a process problem, a management problem, or a culture problem. Although that can be uncomfortable to hear, no amount of content fixes those things.

Because of this, the most effective L&D teams operate like consultants rather than order-takers. They push back on vague briefs, ask harder questions, and decline to build content they know will not change anything. That approach takes confidence. It also builds credibility, and that credibility comes directly from delivering work that demonstrably works.

What the industry is saying

The conversation in L&D is shifting. Here's what we're tracking right now:

→ AI in 2026: The Turning Point for Learning & Development: AI isn't replacing L&D; it's reshaping it, from adaptive learning platforms to workflow-embedded AI agents.

→ AI in L&D: AI is definitely transforming L&D, but is it transforming people?

→ How AI Will Reshape L&D and HR in 2026: As AI continues to transform L&D and HR, the organisations that will thrive in 2026 will be those that embrace AI.

The fundamentals haven't changed. But the pace has. Organisations that haven't started asking better questions are already falling behind.

Three things to do this quarter

If any of this resonates, here are three practical moves worth making:

→ Rewrite your success metrics. Replace completion rates with a behaviour you can observe in the workplace. That shift alone changes how you design everything.

→ Rewrite your next brief. Before commissioning content, ask: what do we need people to do differently? Not know. Do. Then design backwards from that.

→ Have a harder conversation with your stakeholders. Push back on generic content requests. Ask about the business problem underneath them. That's where the real work starts.

I help L&D teams build AI-augmented learning that changes how people work, not just what content they can access. If you want to explore what that looks like for your organisation, get in touch: https://calebfoster.ai