How to identify & run CRM experiments that drive business impact
Highlights from My Live Session on Experimentation for Lifecycle Marketers
This week, I ran a one-hour session on how lifecycle marketing and CRM teams can run smarter, better experiments. If you want to make sure you know about the next session, you can sign up here. Here’s a brief summary of what we discussed.

It turned into one of my favorite sessions yet. Below are some takeaways and themes that stood out, especially if you’re a lifecycle marketer or CRM professional who often wonders: are we actually getting better, or are we just staying busy with all this experimentation?
Key themes from the session
“We’re testing a lot, but have no business impact to show for it.”
This quote sums up what I hear again and again, and what this session was all about: how do you experiment smarter, for better results?
We discussed how, as CRM teams, we often prioritise velocity of testing. But chasing velocity pushes us towards shallower tests, and we never get to the big bets (if there are any). This isn’t limited to startups or small teams; it’s a core issue for teams of all shapes and sizes.
A lot of CRM teams are:
Confusing optimisation with experimentation
Forgetting past results and re-testing the same ideas
Running shallow “safe” tests just to show activity
Borrowing experimentation “best practices” from the Product world that don’t translate to CRM teams: ICE scoring, 1% holdout groups, etc.
The 3 Principles That Make CRM Experimentation Actually Work
I shared client examples, stories, and practical frameworks that can help you improve your odds of creating business impact by following three principles:
Identify High-Leverage Opportunities for Experimentation
High-volume user segments (e.g. dormant or new users)
Steep drop-off moments (e.g. onboarding exits, expired trials)
High-intent actions (e.g. setup completion, feature use); a quick scoring sketch follows below
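One way to make “high leverage” concrete (my framing, not a formula from the session) is to score each candidate moment by how many users are actually at stake there. The funnel numbers below are made up for illustration.

```python
# Hypothetical funnel snapshot: (moment, users entering, users lost there)
funnel = [
    ("onboarding exit",  80_000, 32_000),
    ("expired trial",    25_000, 18_000),
    ("setup incomplete", 60_000, 21_000),
]

# Leverage = users you could plausibly win back at each moment.
for moment, entered, lost in sorted(funnel, key=lambda row: -row[2]):
    print(f"{moment:16}  drop-off {lost / entered:4.0%}  users at stake {lost:,}")
```

High-volume segments with steep drop-offs float to the top, and that’s where a single winning test moves real numbers.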
Prioritise the Right Variables
Not all test levers are created equal. Choose the right one based on user intent and the pressure of the moment.
Design for Learning, Not Just Lift
This is where most tests fail. The setup is off: control groups that are too small or not cleanly split, success metrics that are too soft, or results called too early. We walked through an example of how holdout groups and campaign-level controls can be combined for an accurate read.
Some takeaways from this part of the talk were:
Don’t use a 1% master control group forever; variance will kill your read (the first sketch after this list shows just how little a 1% holdout can detect).
You don’t need 99% statistical significance to learn something useful.
Test with clean entry logic: only include users who can actually be impacted (the second sketch below shows one way to enforce this).
“Split your control group from the moment CRM starts to matter. Not before.”
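To make the 1% holdout point concrete, here’s a minimal back-of-the-envelope sketch. The audience size, 5% baseline conversion, 90% confidence, and 80% power are illustrative assumptions, not numbers from the session; it uses the standard two-proportion z-test approximation to estimate the smallest lift each setup can reliably detect.

```python
from statistics import NormalDist

def min_detectable_lift(baseline: float, n_treat: int, n_control: int,
                        alpha: float = 0.10, power: float = 0.80) -> float:
    """Smallest absolute lift in conversion rate a two-sided z-test can
    reliably detect, given the sizes of the two groups."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # critical value at the chosen confidence
    z_power = z(power)           # extra margin needed to hit the power target
    se = (baseline * (1 - baseline) * (1 / n_treat + 1 / n_control)) ** 0.5
    return (z_alpha + z_power) * se

audience, baseline = 500_000, 0.05  # hypothetical campaign numbers

# A 1% master holdout: 5,000 control users vs 495,000 treated.
tiny = min_detectable_lift(baseline, int(audience * 0.99), int(audience * 0.01))
# A 10% holdout scoped to this one campaign: 50,000 vs 450,000.
scoped = min_detectable_lift(baseline, int(audience * 0.90), int(audience * 0.10))

print(f"1% holdout only resolves relative lifts above  {tiny / baseline:.0%}")   # ~15%
print(f"10% holdout resolves relative lifts above      {scoped / baseline:.0%}") # ~5%
```

Under these assumptions, the 1% master holdout can’t see anything smaller than roughly a 15% relative lift, so most real CRM wins are invisible to it, while a campaign-scoped 10% holdout resolves lifts around 5%. Note the 90% confidence level: relaxing from 99% buys a lot of sensitivity, which is exactly the point of the second takeaway above.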
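For the clean-entry and split-timing takeaways, here’s a minimal sketch of trigger-time assignment. The experiment name, user fields, and eligibility checks are hypothetical placeholders; the point is that a user is bucketed deterministically at the exact moment the campaign could first reach them, never earlier.

```python
import hashlib
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    id: str
    push_opt_in: bool          # hypothetical field: can we even message them?
    trial_expired_today: bool  # hypothetical field: did the trigger just fire?

def assign(user_id: str, experiment: str, holdout: float = 0.10) -> str:
    """Deterministic bucketing: the same user + experiment always lands in
    the same bucket, but buckets are independent across experiments."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "control" if int(digest[:8], 16) / 0xFFFFFFFF < holdout else "treatment"

def enroll(user: User) -> Optional[str]:
    """Clean entry logic: only users the campaign can actually impact,
    split at the moment CRM starts to matter, not before."""
    if not user.push_opt_in:          # a push test can't move unreachable users
        return None
    if not user.trial_expired_today:  # enter exactly at the trigger moment
        return None
    return assign(user.id, "trial-expiry-winback")
```

Because assignment happens inside enroll, the control group only ever contains reachable, just-triggered users, which keeps the treatment and control reads comparable.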
Favorite Tactical Wins Shared
These tests sparked good discussion and showed how simple ideas — done right — can drive serious impact:
Framing discounts in $ value vs % off → +24% conversions
Sending 3 push notifications instead of 1 on day zero → 30% lift, no extra uninstalls
Testing countdown timers in post-onboarding in-app messages → +27% conversion on high-volume segments
Free trial offer to 365-day dormant users → 1100 incremental trials/month
👂 Reflections from the Room
What made this session special was the openness from attendees:
“We run 100+ tests but don’t know what’s actually working.”
“We track 30% lifts in small samples, but it’s 10 users — does it even matter?”
“I feel like I’m just throwing things at the wall to see what sticks.”
You’re not alone. And this session reminded us that experimentation isn’t about being perfect — it’s about being intentional, curious, and willing to learn.
✍️
Thank you to everyone who joined. If you’re interested, I’ll be doing more of these sessions; you can check out the calendar here. Subscribe if you haven’t already. Let’s get better at this, together.