I’m a big believer in A/B testing. You can significantly improve marketing results by using the right headline, the right color, the right image, the right guarantee language, and so on. I’ve done a lot of such tests in my life and I’m often amazed at what wins.
I’m less sanguine about testing other things.
For example, Jaguar recently released a new logo. Almost all the reaction I’ve seen has been negative, which might lead people to wonder if the marketing geniuses tested it before rolling it out.
Let’s think about that. How would they have tested a logo?
A typical A/B test manipulates variables to optimize for one action, like clicking the buy button. A/B tests rely on having a discrete, clear, and measurable goal. It’s very hard to test for multiple things at the same time.
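To make that concrete, here is a minimal sketch of how a split test with a single discrete goal gets evaluated. The conversion counts are made-up illustration numbers, and the two-proportion z-test shown is one common approach, not necessarily what any particular testing tool uses.

```python
# Minimal sketch: evaluating an A/B test with one discrete goal
# (e.g. clicks on the buy button). Counts are hypothetical.
from math import sqrt, erf

def ab_test_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test. Returns (lift, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, p_value

# Hypothetical results: version B converts 6.5% vs. A's 5.0%
lift, p = ab_test_z(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"lift: {lift:.3%}, p-value: {p:.4f}")
```

Note what the test can and can’t tell you: it answers exactly one question (did B produce more of the measured action than A?) and nothing about any other quality of the page.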
A split test is very reductionist. “The purpose of this page is to get people to buy the product.” That single focus can override other things, like beauty.
When I was first in publishing, I found a box of marketing materials for one of our publications. The brochure was ugly. It looked stuffy and dated, like something done by a talentless and unimaginative artist 25 years before.
I asked the marketing manager why they used that ugly thing, and she said it gets more orders. Not only does that ugly marketing piece work, she told me, but they’d tried to test out of it many times with better-looking packages, and the ugly package always won.
That’s when a split test works — when you’re willing to forgo other considerations in pursuit of a clear and measurable goal like orders. (What you can’t measure is what people think of your company when they get that ugly package!)
I don’t think that same method should be used for a logo, because a logo doesn’t correspond to a single, measurable action. The logo serves several different purposes that can’t be measured that simply. A logo is supposed to express and nurture a view of the company as modern vs. traditional, playful vs. serious, luxury vs. accessible, etc. It’s not just a question of whether people like it, or whether they’d pick it out of a lineup, because the logo is a long-term play. You don’t change logos willy-nilly. They’re expected to last for a while. You can’t reduce all of the things a logo is supposed to do to a simple A/B split test.
Then how do you pick one? That’s the frustrating thing. We want marketing to be measurable and data-driven and responsive and all that fun and wonderful stuff. In this case, we have to rely on something else.
Art.
Somebody has to have the right sense of it. The style. The eye. And, to some degree, the ability to predict where the culture is moving.
It’s a very scary thing because you can’t quantify any of that. (Well, maybe some of it, but it’s very hard to do.)
This same logic might apply to other things, like the font for your newsletter, or the style in which you write. An A/B test isn’t going to capture all the variables that you need to consider.
The A/B test excels in situations where a measurable and consequential decision is made quickly. When those factors don’t apply, don’t rely on split tests to get your answer.
The bottom line is that “data-driven” has its place, but we shouldn’t try to impose it where it doesn’t fit.