You’ve seen the stats on how few AI and ML experiments make it into production – hovering around 30%. Is that a bad thing when a significant percentage of innovation experiments fail to deliver business value?

There’s a good chance your CFO and Board will see it that way. However, production deployment is a narrow measure of success and of the benefits of experimenting. Beyond deployment are key success factors around end-user adoption and delivering business value.
But the first objective of any experiment is learning. Digital Trailblazers who practice smart experimentation techniques use their learnings to challenge assumptions, formulate new approaches, and iterate on implementations.
But what happens when experiments flounder? One sprint leads to another, and three implementation strategies lead to three more. When is it time to unplug an experiment and shift everyone’s focus to other opportunities?

We tackled this question at a recent Coffee With Digital Trailblazers, where we discussed Unplugging Experiments, Products, and Businesses That Aren’t Working. Some best practices we captured include eradicating “failure is not an option” from the culture and having CEOs reinforce that it’s okay for experiments to fail. The group recognized that “killing your darlings” is hard and that sunk-cost mindsets make it challenging to unplug one experiment and begin another.
Five steps to define experiments and drive an AI learning culture
The best way to address floundering experiments is to set up innovations, proofs of concept, and pilots for success. Here’s my process.
1. Declare your experiment’s vision with success and unplug criteria

Every initiative – especially experiments – should start by drafting a one-page vision statement. My one-page vision statement template makes it easy to articulate customer personas, value propositions, business rationale, targeted business value, and expected timeframe.
One key field in the template sets targeted KPIs/OKRs, but I recommend using this field differently for R&D and experiments: define the success criteria for the experiment and clearly establish the criteria for killing it. Setting these expectations upfront requires experiment teams to revisit their vision at the end of the timeframe, especially if the unplug criteria are met.
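To make these criteria concrete, here’s a minimal sketch of how success and unplug criteria could be captured as structured data alongside the vision statement. The fields, KPIs, and thresholds below are hypothetical illustrations, not fields from my template.

```python
# A minimal sketch: an experiment charter with explicit success and unplug
# criteria. All names and thresholds are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class ExperimentCharter:
    name: str
    value_proposition: str
    timeframe_weeks: int
    success_criteria: dict[str, float]  # KPI -> target that justifies continuing
    unplug_criteria: dict[str, float]   # KPI -> floor that triggers a kill review

charter = ExperimentCharter(
    name="Churn-prediction POC",
    value_proposition="Flag at-risk customers for proactive outreach",
    timeframe_weeks=8,
    success_criteria={"model_precision": 0.70, "pilot_user_adoption": 0.25},
    unplug_criteria={"model_precision": 0.50, "pilot_user_adoption": 0.05},
)
```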
2. Prioritize experiments based on targeted value
Having too many ideas and experiments is a good thing. Choice brings debate – and debates on lightweight vision statements focus on business value.
Why? Because we know very little about costs, real timelines, or risks during the early stages of an experiment. We should have some ideas on business value, and part of every experiment should be validating those assumptions.
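As an illustration, a lightweight scoring model can force-rank candidate experiments on targeted value while those debates are happening. This is a minimal sketch; the dimensions and weights are assumptions to adapt, not a standard formula.

```python
# A minimal sketch of force-ranking experiments by targeted business value.
# Scoring dimensions and weights are illustrative assumptions.
def value_score(candidate: dict, weights: dict) -> float:
    """Weighted sum of 1-5 ratings captured during vision-statement debates."""
    return sum(weights[k] * candidate.get(k, 0) for k in weights)

weights = {"revenue_impact": 0.4, "customer_value": 0.3, "strategic_fit": 0.3}
candidates = [
    {"name": "Churn-prediction POC", "revenue_impact": 4, "customer_value": 3, "strategic_fit": 5},
    {"name": "GenAI document search", "revenue_impact": 2, "customer_value": 5, "strategic_fit": 4},
]
for c in sorted(candidates, key=lambda c: value_score(c, weights), reverse=True):
    print(f"{c['name']}: {value_score(c, weights):.1f}")
```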
I have other suggestions, including these ten questions to ask before starting an ML POC. A good vision needs to be battle-tested against a smart plan.
3. Establish review checkpoints
I treat experiments just like any other initiative; they are run as agile POCs following agile planning, delivery, and transformation standards. There are features, sprints, agile estimates, and sprint reviews. Teams must commit to deliverables on a defined release cadence.
Why? Because every initiative is an experiment, and every experiment is an initiative! I manage and track them the same way, even though experiments and initiatives likely have different types of business objectives.
Sprint reviews and release cadences are checkpoints for evaluating whether experiments are on track, require pivots, or have early indicators for killing them.
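Here’s a minimal sketch of what that checkpoint evaluation might look like in code, reusing the hypothetical success and unplug criteria from step 1. The thresholds and three-way outcome are illustrative assumptions.

```python
# A minimal sketch of a sprint-review checkpoint that compares measured KPIs
# against the charter's success and unplug criteria. All values are hypothetical.
def checkpoint(measured: dict, success: dict, unplug: dict) -> str:
    if any(measured.get(k, 0) < floor for k, floor in unplug.items()):
        return "kill-review"       # an unplug criterion was triggered
    if all(measured.get(k, 0) >= target for k, target in success.items()):
        return "double-down"       # on track against all success criteria
    return "pivot-or-iterate"      # in between: rework the approach

print(checkpoint(
    measured={"model_precision": 0.62, "pilot_user_adoption": 0.12},
    success={"model_precision": 0.70, "pilot_user_adoption": 0.25},
    unplug={"model_precision": 0.50, "pilot_user_adoption": 0.05},
))  # -> pivot-or-iterate
```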
4. Centralize documentation on learning
Learning is important. Making decisions is very important. But how experiment teams document their key learnings and decisions is even more critical.
Many of us have this picture of the scientific logbook where experiments are tracked. But the AI, ML, and data equivalent can’t be overly scientific. The learnings must be accessible, transparent, and easy to understand by non-scientist stakeholders and by AI.
Why AI? Because as much as we need human-in-the-middle AI agents, we also need AI-in-the-middle experiments. We want the people running experiments to ask the AI, “Analyze the last four iterations of our experiment and suggest what approaches we should try next.”
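One way to make learnings consumable by stakeholders and AI alike is an append-only log of structured, plain-language records. This is a minimal sketch assuming a simple JSONL format; the schema and field names are hypothetical.

```python
# A minimal sketch of an AI-readable experiment log: one JSON record per
# iteration appended to a shared file. The schema is an illustrative assumption.
import datetime
import json

def log_iteration(path: str, experiment: str, hypothesis: str,
                  result: str, decision: str, learnings: list[str]) -> None:
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "experiment": experiment,
        "hypothesis": hypothesis,
        "result": result,        # what happened, in plain language
        "decision": decision,    # continue, pivot, or kill
        "learnings": learnings,  # readable by non-scientist stakeholders
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_iteration(
    "experiment_log.jsonl",
    experiment="Churn-prediction POC",
    hypothesis="Adding support-ticket features improves precision",
    result="Precision rose from 0.58 to 0.62; recall unchanged",
    decision="continue",
    learnings=["Ticket sentiment matters more than ticket volume"],
)
```

With a log like this, the last few records can be pasted directly into a prompt for exactly that “what should we try next” question.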
5. Champion team leaders who share their learnings
Organizations are good at scheduling launch parties and celebrating big wins. We market teams’ successes and award high potentials.
However, business leaders rarely reward lifelong learners who experiment diligently and contribute to the organization’s knowledge.
This approach to rewarding performance has to evolve, especially in the genAI era. Tribal knowledge and having a small number of go-to people are anti-patterns in AI cultures.
Don’t just fail fast – amplify learning. If executives aren’t championing an AI learning culture, then stagnation will precede disruption.
Experiment, pivot, double down, or kill? An agile learning process drives the answers.