Webinar

How to Run a Cost-Efficient Optimization Program With a Limited Budget

Duration - 40 minutes

Key Takeaways

  • Use professional tools: Professional tooling for project management and optimization keeps you from wasting time and resources.
  • Run concurrent experiments: Use tools like VWO to run multiple experiments at the same time; this helps you identify winning strategies more quickly and effectively.
  • Implement winning strategies promptly: Don't let IT bottlenecks delay the rollout of successful strategies. Use tools that let you direct 100% of traffic to winning variants right away.
  • Leverage tools for implementation: Tools like VWO Deploy let you implement winning campaigns directly, skipping the need for dev support and speeding up the process.
  • Gain experience and test frequently: Good ideas come from experience and frequent testing; the more you test, the better your results will be.

Summary of the session

The webinar, hosted by Deepak from VWO, features Jan Marks and David Otero from Multiplica, who discuss the challenges of running effective optimization programs, particularly with limited IT resources and budgets, and how to solve them. They share experiences from projects where the lack of professional tools led to time-consuming revisions, and where IT bottlenecks made it impossible to implement winning campaigns.

They highlight VWO’s solution, VWO Deploy, which lets teams push successful variations directly to their audience, bypassing the need for IT approval. The speakers draw on their extensive industry experience to offer insights into creating high-converting user experiences, and they encourage audience interaction throughout.

Webinar Video

Webinar Deck

Top questions asked by the audience

  • What percentage of experiments are inherent wins versus failures? How many times are we proven wrong when we make hypotheses? Are these failures a loss?

    I would have to take the mask off. I don't know. Well, first of all, we all love winners, right? To give you a better idea, in the projects we've been involved in, around 70 to 90% have been winners. Is that correct, more or less? That's the latest number I've seen. And no, a failure is not a loss. First of all, you're safe from implementing the wrong stuff: if you hadn't tested it, you would have implemented it, wasted effort, added code to your page, and it wouldn't have delivered the result. That's one thing. The other thing is that you learn from every experiment, whatever the result, and that learning feeds the next ones. The example I gave earlier shows, for instance, that markets can behave completely differently. So I would say a loser is not a loser; it's not a loss.
  • What are your thoughts on how to deal with the VP? We have 2 VPs on the call who get in the weeds on approving test creative for every effort, slowing down program velocity.

    Well, yeah, good point. We've seen that; you've seen that. I think it can be, to a certain extent, complicated at the C-level, that's for sure. But it depends very much on the case. If you have the support of somebody on the board and the buy-in of the related stakeholders from the very beginning, you're much better prepared. The worst thing you can do is hear something about conversion rate optimization, run a little trial, keep it in a niche, and so on, until at some point the chief technology officer finds out you're playing around with the site. So it's better to make sure you have buy-in. Once you or your agency explain the huge potential of testing, you'll easily get the buy-in and the general consent from managers and senior vice presidents to run these experiments, and you'll have less trouble afterwards. I don't know if this answers the question.
  • How do we test the hypothesis?

    - by John
    Okay. So, the way we do it at Multiplica is with what we call a prioritization framework, in which we use different variables to define the complexity and the return on investment of an experiment. We take into account whether we need custom graphics or pictures, whether we need to write content, and whether we need custom HTML or JavaScript: all the parameters that can make an experiment more complex. With that, we get a complexity score, and we run experiments based on it. That's what we use to decide what makes sense, that is, which experiments are quick wins versus the rest. (A minimal sketch of this kind of scoring follows below.)
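
To make the idea concrete, here is a minimal sketch of such a scoring scheme in Python. The weights, requirement names, and the 1-to-5 impact scale are illustrative assumptions, not Multiplica's actual framework: each experiment's priority is its estimated impact divided by a complexity score built from the work it requires, so quick wins rank first.

```python
from dataclasses import dataclass

# Hypothetical effort weights for the complexity drivers mentioned in the
# answer (custom graphics, written content, custom HTML/JavaScript).
COMPLEXITY_WEIGHTS = {
    "custom_graphics": 3,
    "new_content": 2,
    "custom_html_js": 4,
}

@dataclass
class Experiment:
    name: str
    expected_impact: int       # assumed 1 (low) to 5 (high) estimated uplift
    requirements: tuple = ()   # subset of COMPLEXITY_WEIGHTS keys

    def complexity(self) -> int:
        # Baseline effort of 1, plus the weight of each extra requirement.
        return 1 + sum(COMPLEXITY_WEIGHTS[r] for r in self.requirements)

    def priority(self) -> float:
        # Quick wins score high: large expected return, little build effort.
        return self.expected_impact / self.complexity()

experiments = [
    Experiment("new hero copy", expected_impact=3, requirements=("new_content",)),
    Experiment("interactive size guide", expected_impact=4,
               requirements=("custom_graphics", "custom_html_js")),
    Experiment("reorder checkout fields", expected_impact=2),
]

# Run the highest-priority experiments first.
for exp in sorted(experiments, key=lambda e: e.priority(), reverse=True):
    print(f"{exp.name}: priority = {exp.priority():.2f}")
```

Ranked this way, a small copy change with no build requirements surfaces ahead of a heavier interactive feature, which is the quick-win-first ordering the answer describes.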

Transcription

Disclaimer: Please be aware that the content below is computer-generated, so kindly excuse any errors or shortcomings.

Deepak from VWO: Hi, good morning, everybody. This is Deepak from VWO. Welcome to yet another webinar from VWO. Today, we’re gonna talk about how to run effective optimization programs. Along with me, I have two special guests from Multiplica – Jan ...