Few things I'm part of on a week-to-week basis are more turbulent and humbling than running growth experiments.
First, the work usually sits very close to revenue, which elevates the implications and expectations across the organization. As a result, securing buy-in, nailing the hypothesis, and quickly building a prototype to test against it amounts to running a pitch meeting, and a subsequent product release, every other week or so.
To top it off, roughly one out of ten tests you run will actually show positive lift, meaning you're left analyzing the data of failed hypotheses much more often than you are celebrating the results of a winning one.
It's hard work, and it takes time; the latter is in short supply in most startups and high-growth organizations.
That's why I'm always surprised when I connect with other marketers working through experiment roadmaps that span more than six months, when in most organizations six weeks is a reasonable expectation for lessons learned in a specific focus area.
Protracted roadmaps usually stem from an insistence on testing changes in isolation. In many cases, particularly for companies with large sample sizes, and therefore a higher risk profile, that rigor is important.
Airbnb, for example, can afford to test even the smallest of changes in isolation and reach statistical significance extremely quickly. Further, given its number of daily active users, sweeping changes could be disastrous for bookings and overall revenue. Small changes, big potential.
For many startups, this isn't a viable approach.
Rather than obsessing over clarity of attribution from your tests, focus on moving the needle, and on building your team's credibility across the organization, as quickly as possible. That won't happen through months of small, incremental website or app changes.
So what should you do?
Take big swings
Combine multiple changes and iterations into a single treatment, so you can test radically different experiences against one another. This accelerates learning and progress toward your goals.
You're trading away some clarity about which specific change drove the result, but you're greatly increasing your odds of accelerating growth sooner. If you're just starting out, the latter is more important.
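To make that trade-off concrete, here's a back-of-envelope sketch of why bigger swings read out faster. It uses the standard two-proportion z-test approximation at 95% confidence and 80% power; the baseline conversion rate, the lift sizes, and the `sample_size_per_variant` helper are all hypothetical, purely for illustration:

```python
# Rough visitors-per-variant needed to detect a given lift, via the
# standard two-proportion z-test approximation. All rates and lifts
# below are made up for illustration.
from scipy.stats import norm

def sample_size_per_variant(p_base, p_test, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided critical value
    z_beta = norm.ppf(power)           # z-score for the desired power
    variance = p_base * (1 - p_base) + p_test * (1 - p_test)
    return (z_alpha + z_beta) ** 2 * variance / (p_test - p_base) ** 2

baseline = 0.05  # hypothetical 5% conversion rate
for relative_lift in (0.02, 0.10, 0.30):
    n = sample_size_per_variant(baseline, baseline * (1 + relative_lift))
    print(f"+{relative_lift:.0%} relative lift -> ~{n:,.0f} visitors per variant")
```

Under these assumptions, detecting a 2% relative lift takes roughly 200x the traffic of detecting a 30% one (hundreds of thousands of visitors per variant versus a few thousand), which is exactly why Airbnb can afford isolation and most startups can't.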
Serve to 100%
Second, if you're in the early stages of building audience and/or product adoption, consider serving your tests to 100% of visitors/users; running a split test on only a small slice of your traffic is going to greatly increase your time to statistical significance.
**Note:** Exposure is a separate consideration. You'll still want to segment cohorts by geography, new vs. returning users, etc.
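Here's the same back-of-envelope math applied to enrollment, reusing the hypothetical `sample_size_per_variant` helper from the sketch above and an equally hypothetical 1,000 visitors per day:

```python
# Rough days until a test can be read, reusing the hypothetical
# sample_size_per_variant() helper above. Traffic numbers are made up.
def days_to_significance(daily_visitors, enrollment, n_per_variant, variants=2):
    per_variant_per_day = daily_visitors * enrollment / variants
    return n_per_variant / per_variant_per_day

n = sample_size_per_variant(0.05, 0.065)  # a big swing: +30% relative lift
for enrollment in (0.25, 0.50, 1.00):
    days = days_to_significance(1_000, enrollment, n)
    print(f"{enrollment:.0%} of traffic enrolled -> ~{days:.0f} days to read")
```

Doubling the share of traffic you enroll halves the wait; at these made-up numbers, going from 25% to 100% enrollment turns a month-long test into about a week.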
Every experiment you run costs real sweat and burns real calories. Make sure you're setting yourself and your team up for success, not nickel-and-diming your efforts in the name of attribution.
Attribution matters, but results matter more.
You can always back into attribution after the fact through user testing, or through a deeper analysis of in-experiment behavior with tools like FullStory, Hotjar, or whichever session-replay tool you prefer.