I have plenty of evidence that I’m not that bright, and so I’ve worshipped at the altar of A/B and multivariate testing to validate my wild guesses since my days at Amazon. (Interestingly, when you Google for “weblab Amazon”, all you get is this old job description. I may have just revealed a huge secret.)
So I have a project where we tossed up some best-guess UI, all focused on driving conversions to a third-party form. Over the last 16 months, we’ve run 24 different tests, each with 1–5 treatments of one of the five conversion paths.
Not a single test produced a meaningful improvement. (I noted this a few months ago on Scott Porad’s blog post about redesign testing.) Our best-guess UI has either outperformed or shown no statistical difference. That’s kind of like a review at work where your manager has only positive things to say: it sounds nice, but you wanted something meatier than that.
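(The post doesn’t say how “no statistical difference” was measured, and the actual numbers above aren’t public; as a minimal sketch with made-up counts, here’s the standard two-proportion z-test that’s often used to call a winner between a control and a treatment:)

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    conv_a/n_a: conversions and visitors for the control,
    conv_b/n_b: conversions and visitors for the treatment.
    Returns (z, p_value).
    """
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    # Pooled rate under the null hypothesis of no difference.
    p = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Hypothetical example: 2.0% control vs. 2.6% treatment conversion.
z, p = two_proportion_z_test(200, 10_000, 260, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With a p-value below the usual 0.05 cutoff you’d declare a winner; most of the 24 tests described here would instead have landed well above it.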
Then I got a simple idea based on conversion data from a smaller site, tossed up a new test, and in under a day, voilà: a new winner with a massive improvement – one test surpassed what I’d believed was the upper limit of what was possible.
(BTW, that test on the bottom? That’s the one I thought was going to win. Lesson learned: Always bet on the big button.)
So even if your testing regimen isn’t showing anything – you don’t feel like you have enough traffic, nothing is changing, it doesn’t seem worth the pain – keep it up: the 25th time might be the charm for you, too.