
Forum Discussion

  • Hello, thanks for having me here today.
    At BlaBlaCar we want to make sure, as much as possible, that everything we send to our members has a positive impact on activity and is the most engaging version we could come up with. For this reason we both test different content to maximize members' engagement with the comms, and we test the incremental impact of our programs against a control group that does not receive anything.

  • Hi Maggie & Guendalina, 

    Guendalina, what are the most important elements to consider in Braze to begin testing effectively?

    • dylancarpe
      Braze Employee

      Hi ashley_rosen, great question - you should consider the following:

      • 1) Determine what to test
        • Examples: Subject lines, Personalization, Copy, Timing, Channels
      • 2) Set a hypothesis
        • Make a clear hypothesis about what you’re testing. 
        • Example hypothesis: Subject lines with emojis will drive a higher open rate than subject lines without them
      • 3) Be mindful of statistical significance
        • Use the analytics - be sure to evaluate the significance of the uplift
      • 4) Be thoughtful about the timing of your test
        • Consider testing over longer periods of time to increase reliability
      • 5) Use control groups
        • Control groups are essential in testing a hypothesis and setting a benchmark for engagement to compare marketing efforts
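As a concrete illustration of steps 2, 3, and 5, here is a minimal sketch of a two-proportion z-test in Python for comparing open rates between a variant and a control. All counts and the function name are hypothetical for illustration; this is not Braze's built-in analytics:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(opens_a, sends_a, opens_b, sends_b):
    """Two-sided z-test for a difference between two open rates."""
    p_a, p_b = opens_a / sends_a, opens_b / sends_b
    # Pooled rate under the null hypothesis of "no difference"
    p_pool = (opens_a + opens_b) / (sends_a + sends_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / sends_a + 1 / sends_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical counts: emoji subject line vs. plain control
z, p = two_proportion_z_test(opens_a=460, sends_a=2000, opens_b=400, sends_b=2000)
print(f"z = {z:.2f}, p-value = {p:.4f}")  # conventionally significant if p < 0.05
```

A p-value above your chosen threshold means the uplift could plausibly be noise, which is exactly the "drawing conclusions too early" trap discussed below.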
  • BLT
    Active Member

    dylancarpe & Guendalina, two questions! What would you say is the most common misconception about testing, and what do you feel is often the most overlooked factor?

    • Guendalina
      Practitioner

      From my experience, the most commonly overlooked factor is the statistical significance of a test: ignoring it means drawing conclusions (that may be wrong) too early.
      As for the misconception, assuming test results hold forever is for sure quite dangerous: users' behaviour can change quite quickly, and what worked one year ago may not work anymore, so it's important to reconfirm/re-check results regularly.

      • dylancarpe
        Braze Employee

        Definitely agree with Guendalina on this. Testing over a longer period of time will help produce more reliable results and ensure that your analysis is as accurate as possible. And to her second point, your customers are always changing how they engage - stay in front of it by having a continuous testing mindset!

  • dylancarpe
    Braze Employee

    Hi Arso, thanks for asking! I can jump in from the Braze perspective. It depends on the use case, but in general:

    • Ad-hoc testing
      • Ad-hoc testing is performed without any predefined schedule and is often used for quick checks or one-off tests (e.g. testing copy before sending a message to a bigger audience). It can help identify potential issues or opportunities for improvement quickly and efficiently, without requiring significant time/resources.
    • Scheduled testing
      • Scheduled testing happens on a regular cadence (e.g. weekly, monthly), often to support ongoing optimization. It allows brands to track progress, identify trends, and make data-driven decisions over the long haul.
    • Ongoing testing
      • Ongoing testing is performed continuously, often to identify opportunities for improvement as they arise. It requires a culture of experimentation, but can help organizations stay agile and responsive to changing customer needs and market conditions.
  • vbaliga
    Active Member

    Hi dylancarpe ! In your experience as a CSM, what are some of the Braze features that tend to lead to successful testing?

    • dylancarpe
      Braze Employee

      Great question, vbaliga! Some of the features that lead to the most effective testing:

      • Experiment Path Step in Canvas: Winning Path
        • This is similar to a winning variant, but for path-level testing. This tool automatically places customers into the winning customer journey path after a test is complete. This way, not a moment is lost in terms of connecting to customers and driving revenue.
      • Winning Variant
        • The classic optimization choice for a standard A/B/n test. Once the initial test is complete, remaining customers automatically receive the variant that won the test.
      • Personalized Variant
        • Instead of sending remaining customers the overall winning variant, this tool sends the message variant most likely to engage each individual customer. Think 1:1 at the scale of 1:many.
      • Intelligent Selection
        • Automatically and continuously adjust the percentage of customers receiving the best-performing journey variant over time to maximize journey performance.
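Braze doesn't spell out the algorithm behind Intelligent Selection here, but the behaviour described (continuously shifting traffic toward the best-performing variant) resembles a multi-armed bandit. A minimal Thompson-sampling sketch of that idea, with hypothetical variant names and engagement counts:

```python
import random

def thompson_pick(stats):
    """Pick a variant by sampling from each arm's Beta posterior.

    stats: {variant_name: (conversions, sends)} -- hypothetical counters.
    """
    best, best_sample = None, -1.0
    for name, (conv, sends) in stats.items():
        # Beta(1 + successes, 1 + failures) posterior with a uniform prior
        sample = random.betavariate(1 + conv, 1 + sends - conv)
        if sample > best_sample:
            best, best_sample = name, sample
    return best

random.seed(0)
stats = {"variant_a": (120, 1000), "variant_b": (90, 1000)}
picks = [thompson_pick(stats) for _ in range(1000)]
print(picks.count("variant_a"))  # the stronger arm wins most draws
```

The appeal of this family of approaches is that exploration never fully stops: the weaker variant still gets occasional traffic, so the system can adapt if engagement patterns shift, echoing the continuous-testing mindset above.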
  • tpowe
    Active Member

    @Guendalina You mentioned the importance of re-evaluating your efforts after testing. How do you manage the tracking of all your testing & results?

    • Guendalina
      Practitioner

      At BlaBlaCar we keep track of all tests, results, learnings, and iterations in the CRM team's dedicated workspace in Confluence, the shared tool used to track projects at company level.

  • MaggieBrennan
    Community Manager

    Thanks for all the great questions so far! Feel free to start a new post in the Forum by clicking "Start a Conversation" so that we keep track of all of the different questions as individual posts 😄