“Learn from yesterday, live for today, hope for tomorrow. The important thing is not to stop questioning.” - Albert Einstein

Hey, don't take my word for it... listen to my boy Al, he knows what he's talking about. Test your ideas, question the success of your control, and always strive for more.
Before I discuss more testing tips, let's talk validity.
Your tests must be statistically valid. If the test group is too small or the results are too few, your test will not be accurate.
All tests have some element of random statistical noise. Two randomly split panels with the same creative & offer may show that one panel, purely by chance, has more orders, and thus a higher response rate. Does this mean you now have a "control?"
There is no winner here (yes, you can have no winner). It's just happenstance that A beat B in this case. It's just "white noise."
The good marketer is one who can distinguish a marketing signal from marketplace noise. To go to the next level you must understand basic statistical calculations.
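If you want to put a number on "signal vs. noise," one standard tool is the two-proportion z-test. Here's a minimal Python sketch; the panel counts and the 1.96 cutoff are illustrative assumptions for this example, not CVoD doctrine:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Pooled two-proportion z-score. |z| below roughly 1.96 means the
    gap between the panels is within ordinary 95%-level noise."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Two identical panels: B "beat" A 60 orders to 50 on 1,000 names each.
z = two_proportion_z(50, 1_000, 60, 1_000)
print(round(z, 2))  # about 0.98 -- well under 1.96: no winner, just white noise
```

Even though B looks 20% "better," the z-score says the gap is the kind you'd expect from random splitting alone.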
So... how do I do that? How do I know if my test data is valid?
1. Make the sample panels (A/B) as large as possible. For online testing, I think groups of 25,000 are a good start. If this is not possible, and sometimes it is not, just do the best you can. Get the most impressions or names or opens or clicks that you can... knowing that the validity of ALL data is based on both the results of the test (the difference between A & B) AND the sample size.
Thus, a small sample size results in a larger-than-acceptable margin of error: a test that is not statistically valid.
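To see how sample size drives the margin of error, here's a quick Python sketch using the standard normal-approximation formula (the 2% response rate and panel sizes are made-up numbers for illustration):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a response rate p measured
    on a sample of n (normal approximation -- an illustrative sketch)."""
    return z * math.sqrt(p * (1 - p) / n)

# A 2% response rate measured on 1,000 names vs. 25,000 names:
print(round(margin_of_error(0.02, 1_000), 4))   # ~0.0087, i.e. +/- 0.87 points
print(round(margin_of_error(0.02, 25_000), 4))  # ~0.0017, i.e. +/- 0.17 points
```

On 1,000 names, your "2%" could really be anywhere from about 1.1% to 2.9% — on 25,000 names the window shrinks dramatically. That's why bigger panels matter.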
2. The results MUST double those of the control to have a clear winner. Some say a winner can be as low as a 20% lift, some say 40%. I want a 100% difference! That being said, to get a 100% difference you really must test big.
To me... that means if you're changing only one word on a page, you won't get an increase that pays for the time spent. See the CVoD issue Karma & Doubling Your Results for more on BIG tests...
Let's look at some results of a landing page conversion test:
Copy             Visits   Leads   Conversion
Landing Page A    1,205      32        2.66%
Landing Page B    1,145      94        8.21%
In this example, the difference in the number of leads is significant. It's well beyond the 100% difference you aim for. Yea, we have a winner!
Look at the visits... not enough data has been collected for this to be trustworthy. In other words, since the data for both landing pages is still relatively small, the possible margin of error is too large for you to rely on these results now. I advise waiting until the visits increase, thereby reducing the possibility of error.
The test is not statistically valid, although with time it may be.
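How long should you wait? A rough rule of thumb is to work backward from the margin of error you're willing to live with. Here's a Python sketch (the target rate and precision are illustrative assumptions, not an official CVoD formula):

```python
import math

def visits_needed(p, e, z=1.96):
    """Rough visits per panel needed to pin a conversion rate near p
    down to within +/- e at ~95% confidence (normal approximation;
    a rule-of-thumb sketch)."""
    return math.ceil(z * z * p * (1 - p) / (e * e))

# To trust a ~3% conversion rate to within half a point either way:
print(visits_needed(0.03, 0.005))  # 4472 visits per landing page
```

Tighten the precision you demand and the required visits climb fast — which is exactly why a thousand-odd visits per page isn't enough to call the race.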
Does that make sense?
If you are ever unsure... just ask. I'm certain there is someone in your firm who understands these concepts. If not, dear reader, email your friendly CVoD editor.
Oh, and for no real reason... here's one more picture from Courtomer. Did I mention the red wine was wonderful?
Shh... Karma is still listening.