Forum Discussion

Parin
Active Member II
6 months ago

Do you use control groups in your Canvas setup?

Hi all,

Just wanted to get a sense of how many people use control groups in their marketing comms campaigns. 

  • Do you use control groups? Why? Why not?
  • Are there exceptions where you don't use control groups?
  • How big do you make your control groups?
  • Does having a control group sometimes lead to puzzling incremental uplift numbers?
  • Is it worth it to use incremental uplift as a metric when you have to spend a few minutes just explaining to stakeholders what incremental uplift is? (As opposed to absolute metrics, which are easily understood)
  • I usually don't use control groups, and the main reason is that I usually forget. Having said that, our email sends aren't huge, so I do feel a control group would be a bit on the small side and therefore wouldn't give statistically significant results - not that it wouldn't provide any insight. I've typically worked with companies whose conversion objectives are longer term, so I think fast-paced industries and time-bound activity (e.g. promotions, sales) will probably benefit from control groups a little more.

    I am thinking about setting up a global control group moving forward, though, as I think this gives a better overall view of effectiveness at a longer-term lifecycle level.

  • DavidO
    Strategist II

    I tend to run controls on most canvases, except those where it is critical for a user to receive the information, such as a transactional email.

    On the topic of a true control group, I tend to use 10%, as it can be a great measure of how effective the campaign is compared to not sending it at all, taking into account all the great points mentioned above (there's a rough sketch of the uplift maths at the end of this reply). We also run a global control group to see the impact of Braze vs no Braze at all.

    Not to throw a spanner in the works, but if you have been running a campaign for a while and then create a new variant, in theory the old variant now becomes your control group. We often call this an A/B test, but you are really using the old version as a control vs the new version. So, instead of testing whether it is better than not sending the message at all, you are testing whether it is an improvement over the previous version, which is often a better control for campaigns that are ongoing.

    On measurement, I come from a science background and I do believe statistical significance is really important, but I have learnt that even indicators or outliers that are not significant should not be ignored. Not that marketing is like a medication trial, but imagine you run a medication trial that is statistically significant in helping most people vs the control, and you also see that 10 people were hospitalised as a result of the new medication; even if that figure is not statistically significant, you can't just ignore it. Sure, no one is probably going to hospital over your 20% off voucher, but I still believe we need to think beyond statistical significance sometimes. 😊
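    For anyone who wants to sanity-check their own numbers, here's a rough sketch in Python of how you might work out the incremental uplift from a 90/10 send/hold-out split and check whether it is statistically significant. All the figures below are made up purely for illustration:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical numbers: 90/10 send/control split on a promo campaign
sent, sent_conversions = 90_000, 4_050     # 4.5% conversion in the send group
held, held_conversions = 10_000, 400       # 4.0% conversion in the hold-out

p_send = sent_conversions / sent
p_ctrl = held_conversions / held

# Relative incremental uplift of the send group over the hold-out
uplift = (p_send - p_ctrl) / p_ctrl

# Absolute framing for stakeholders: conversions the campaign likely drove
incremental_conversions = (p_send - p_ctrl) * sent

# Two-proportion z-test: is the difference statistically significant?
p_pool = (sent_conversions + held_conversions) / (sent + held)
se = sqrt(p_pool * (1 - p_pool) * (1 / sent + 1 / held))
z = (p_send - p_ctrl) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

print(f"uplift: {uplift:.1%}, incremental conversions: {incremental_conversions:.0f}, "
      f"z = {z:.2f}, p = {p_value:.3f}")
```

    With these made-up figures you'd see roughly a 12.5% uplift and a p-value around 0.02, i.e. significant at the usual 5% level - but shrink the hold-out or the conversion rates and that significance disappears quickly, which is exactly the small-sample point raised above.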

  • inespais
    Practitioner II

    Short answer - yes. However, I do agree with alextoh1 that if you're dealing with really marginal volumes, you won't get much insight from a control group. It comes down to your goals and strategy as well - i.e. whether you're focusing on test-learn-iterate and whether you need to measure/report on the lift/impact of your CRM efforts, for example for decision-making, prioritization, reporting to stakeholders, fine-tuning your strategy, etc.

    Control groups are good to understand:

    • the effectiveness of a specific campaign (campaign/canvas level) at driving your KPI --> campaign/canvas-level control group (I typically go for 10%, but it depends on how large your base is, as you'd want to ensure you reach statistical significance - see the rough sketch after this list)
    • an experiment's variant performance --> A/B or multivariate test (depends on the test, target size, MDE, etc. - I recommend using a calculator like abtestguide or optimizely, but there are a number; it's also great to work with your analytics team on this!)
    • the overall performance of your CRM strategy --> Universal Control Group (UCG) set up at the account level (really depends on your user base size - Braze has some good guidelines here)
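
    On the sample-size point, here's a rough sketch of how the required group size is typically estimated for a two-proportion test - not a replacement for the calculators mentioned above, and the baseline rate and MDE below are made-up assumptions:

```python
from statistics import NormalDist

def required_sample_size(baseline_rate, relative_mde, alpha=0.05, power=0.8):
    """Approximate per-group sample size for a two-proportion test.

    baseline_rate: expected conversion rate of the control group (e.g. 0.04)
    relative_mde:  minimum detectable effect as a relative lift (e.g. 0.10 = +10%)
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_mde)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance level
    z_beta = NormalDist().inv_cdf(power)            # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ((z_alpha + z_beta) ** 2 * variance) / (p2 - p1) ** 2

# Made-up example: 4% baseline conversion, want to detect at least a +10% relative lift
print(round(required_sample_size(0.04, 0.10)))  # on the order of 40,000 users per group
```

    The takeaway is that a 10% control only reaches significance on modest lifts if the overall audience is fairly large - which is why it's worth running the numbers before committing to a split.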

    The exception is typically transactional/service communications. But there could be other reasons why you choose to exclude the control/UCG, dependent on your own strategy/needs. For example, I once had a client testing different onboarding flows through Braze (using a sophisticated IAM flow setup) due to a lack of dev resources at the time to test more natively in the product or in a better product-testing tool - they chose to exclude the UCG since they deemed it necessary to ensure 100% of the base got exposure to some variant of the onboarding.

    On your question about the puzzling incremental lifts - it could be, but you need to understand whether your audience sizes and test duration are enough to make your results stat sig, and whether you're confident the variable you're testing can directly impact your KPI or whether there are other campaigns and external factors that could be playing a role... Again, it's always good to work closely with an analyst in planning and measuring your control group strategy and specific experiment results!

    Lastly, I do think it's worth using incremental uplift when talking about performance - if you have a clean, significant test, with high confidence that the difference observed is driven by the variable you changed, you will have a very clear case for applying it and optimizing your strategy. You will be making data-driven decisions, and optimizing your strategy and resource allocation based on concrete evidence. In my experience, stakeholders can easily understand this and the concept of incremental lift (it might require some education at first, but it will ultimately open up more interest and awareness around the importance of your strategy and, consequently, make it easier to secure resources).