3 Ways to Improve Your Website With A/B Testing

Is your website working? Yes? Excellent! But is it working well?

 

Determining whether your website accomplishes your goals can be harder than it sounds. Sure, you’re getting conversions from your “Contact Us” page, but could you get even more if you changed something? Even something small?

 

One of the ways to answer the question of whether your website can be doing better is to conduct A/B testing. The term “A/B testing,” also called “split testing,” means comparing two versions of something (A and B) by releasing them at the same time and seeing which does better.

 

You can A/B test almost anything you put your mind to, but I’ll mostly be talking about A/B testing landing page features since that’s what’s most relevant to small business websites.

Small but carefully thought-out changes to landing pages can help convince a website’s visitors to convert into customers, and A/B testing is how you find those changes. The problem comes when trying to figure out what works well and what doesn’t. Science, specifically the scientific method, is an excellent way to see which small changes produce the best results.

The trouble is that science is too time-consuming for most businesses to implement well. Mega-corporations like Google and Apple have scads of data from which to build decent sample sizes for analysis, plus the capability to hire statisticians and psychologists to do the analyzing. They might even have the resources to train bonobos to do all the science for them.

But the rest of us do not, and thus true science stays firmly out of our grasp. (For a great talk about this very subject, read this post by Will Kurt at KISSmetrics.) But that doesn’t mean that you can’t incorporate the structure of science into your business’s website.

 

Scientific thought can turn A/B testing into a truly powerful tool that will help you make your website work as well as possible.

 

What Most A/B Testing Looks Like

 

A/B testing in web marketing, when simplified to its essence, involves these steps:

 

  1. Choose a webpage to test.
  2. Choose a variable to alter.
  3. Make a distinct change to that variable.
  4. Randomly show the control (unaltered) and altered versions of that page to visitors.

The objective is to see which of the two versions achieves a goal more consistently, often the goal of obtaining information from visitors.
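If you’re wondering what “randomly show” looks like in practice, here’s a minimal sketch in TypeScript. It assumes each visitor carries a stable ID (say, from a first-party cookie), which is my addition; this post doesn’t prescribe any particular tool, and every name below is hypothetical.

```typescript
// A stable 50/50 split: hash the visitor's ID so the same person always
// sees the same version, instead of flipping between A and B per visit.
// The cookie-based visitor ID is an assumption, not something this post names.
function hashVisitorId(visitorId: string): number {
  let hash = 0;
  for (const char of visitorId) {
    hash = (hash * 31 + char.charCodeAt(0)) >>> 0; // keep it unsigned 32-bit
  }
  return hash;
}

// "A" is the control (unaltered) page; "B" is the altered one.
function assignVersion(visitorId: string): "A" | "B" {
  return hashVisitorId(visitorId) % 2 === 0 ? "A" : "B";
}

// The same ID always lands in the same bucket:
console.log(assignVersion("visitor-1234")); // stable across visits
```

Hashing the ID instead of rolling fresh dice keeps each visitor in one bucket, so returning visitors don’t see the page flip-flop between versions mid-test.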

A hyper-simple example of A/B testing (one I used to describe the process to my scientist of a sister, who was dubious about the whole thing): a landscaping business’s website has a contact page encouraging people to call and enlist its services.

 

Whoever’s in charge of marketing has decided that the color of the “Call now!” CTA (call-to-action) button on the contact form will be their variable. The original button was yellow, but now they test it by creating a duplicate page whose only difference is that the button is green. Both versions of the page are served at random to the page’s visitors.

Now they sit and watch and wait to see which button gets more clicks.

 

It’s thrilling stuff. (You only hear as much sarcasm as you want.)
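For the curious, the sitting and watching can be as simple as a scorecard: count how many times each button is seen and clicked, then compare click-through rates. A toy sketch with entirely hypothetical names, since no particular analytics tool is implied here:

```typescript
// Toy scorecard for the yellow-vs-green button test.
type ButtonColor = "yellow" | "green";

const stats: Record<ButtonColor, { views: number; clicks: number }> = {
  yellow: { views: 0, clicks: 0 },
  green: { views: 0, clicks: 0 },
};

function recordView(color: ButtonColor): void {
  stats[color].views += 1;
}

function recordClick(color: ButtonColor): void {
  stats[color].clicks += 1;
}

// Click-through rate: the number both versions are judged on.
function clickThroughRate(color: ButtonColor): number {
  const { views, clicks } = stats[color];
  return views === 0 ? 0 : clicks / views;
}

// Example: after some simulated traffic...
recordView("yellow"); recordClick("yellow");
recordView("green");
console.log(clickThroughRate("yellow"), clickThroughRate("green")); // 1 0
```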

 

If all that sounds painfully abstract, it’s because it’s actually quite similar to science, which more or less amounts to: “We’re going to design an experiment to gather and analyze data to see what stuff does stuff.” Of course, science is a little more complicated than that.

 

A Brief Overview of the Scientific Method

 

 


How does one shoehorn any part of the scientific method into the process of A/B testing? Well, one can’t claim to use or even be inspired by the scientific method if one doesn’t know what it is. So let me infuriate my many science teachers over the years by oversimplifying one of our species’ most cherished schools of thought. (Sorry, Mr. Gustke.)

 

The scientific method aims to use observation and the collection of data to study phenomena and attain knowledge. In its most simplified model, the one usually taught to elementary school students, the scientific method can be broken down into the following steps (with accompanying examples):

 

  1. Ask a question: How many licks does it take to get to the center of a Tootsie Pop?
  2. State your hypothesis: It will take more than 50 but less than 100 licks to get to the center of a Tootsie Pop.
  3. Make predictions: If someone licks a Tootsie Pop 0-49 times, he or she will still not reach the center of the Tootsie Pop.
  4. Design an experiment: Get some person to lick a Tootsie Pop until they reach the center and count each lick. Have them repeat this process several times to increase the sample size.
  5. Analyze the data: It took between 55 and 75 licks to get to the center of a Tootsie Pop.
  6. Interpret the data: It takes an average of 65 licks to get to the center of a Tootsie Pop.
  7. Retest the experiment: Do the whole thing over the exact same way to see if these results can be recreated.*

In marketing, that ends up looking more like this:

 

  1. Ask a question: How can we get people to click on our “buy now” button?
  2. State your hypothesis: I bet we could get them to click on it if it were red.
  3. Make predictions: Yeah, I bet we’ll see, like, a 20% increase in conversions with a red button.
  4. Design an experiment: Start that A/B test, stat.
  5. Analyze the data: We saw a 2% decrease in conversions with the red button.
  6. Interpret the data: Ok, so that didn’t work.
  7. Retest the experiment: We shall never revisit this issue again. Tell everyone that red buttons are banned from this day forth.
You may have noticed that there’s one gigantic difference between science and A/B testing (aside from the fact that one is how humans advance as a species and the other is how people make money more efficiently on the internet): Science REPEATS experiments over and over again to verify that a hypothesis can be confirmed consistently. A/B testing tends to be a mite less exhaustive.

This reflects the fact that science has a general goal of attaining truth and establishing what we may call “fact” by studying measurable results. By contrast, A/B testing tends to focus solely on the measurable results because they’re useful in an immediate sense, which is a comparatively superficial goal. It comes down to this: Scientists seek truths. Marketers seek conversions.

But that doesn’t mean that marketing can’t borrow some of science’s truth-finding methods.
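One method worth borrowing right away is a significance check. Before treating a 2% dip (or a 20% lift) as real, ask whether the difference could be plain noise. A two-proportion z-test is one standard, if simplified, way to ask that; the sketch below uses made-up counts and is no substitute for a proper stats library.

```typescript
// Two-proportion z-test: is the gap between version A's and version B's
// conversion rates bigger than chance alone would explain?
function twoProportionZ(
  conversionsA: number, visitorsA: number,
  conversionsB: number, visitorsB: number,
): number {
  const rateA = conversionsA / visitorsA;
  const rateB = conversionsB / visitorsB;
  const pooled = (conversionsA + conversionsB) / (visitorsA + visitorsB);
  const standardError = Math.sqrt(
    pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB),
  );
  return (rateB - rateA) / standardError;
}

// |z| above roughly 1.96 suggests a real difference at the usual 95% level.
const z = twoProportionZ(120, 2400, 117, 2380); // hypothetical numbers
console.log(Math.abs(z) > 1.96 ? "likely a real effect" : "could be noise");
```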

Using the Scientific Method in A/B Testing

It’s important to think of A/B testing as an exercise in general web design, not just as a way to attain conversions. As our web designer, Curtis, told me (quite patiently), “You should use A/B testing to develop your web philosophy.”

 

Effectively, good A/B testing needs to take the larger picture into account: You want visitors to your website to feel comfortable enough to convert. Changing fonts and colors is really about helping them reach that comfort and trust.

 

But how do you know whether those font and color choices affect conversions at all? At times, A/B testing can seem like fumbling in the dark. The solution is to take some cues from science.

 

Here are three such cues, lifted from the scientific method, that can be incorporated into A/B testing to make it less of a crapshoot:

 

  • Choose one variable to test.

 

A variable is NOT something like a CTA button. A variable is a specific thing ABOUT the CTA button, like its font or color or copy. If you’re fiddling with more than one of these elements at a time, you won’t know which changes lead to what results, and that’s just chaos.

 

Try to test elements that stand to help you achieve your most important goal. Most small businesses rely heavily on their websites to convince people to get in contact with them, so they often prioritize testing variables on their contact pages. But remember that you’re a snowflake: Your needs will be different from anyone else’s, so don’t follow someone else’s plan for variables just because it sounds good.
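One way to hold yourself to a single variable is to write the experiment down in a shape that only has room for one. A hypothetical sketch (this structure is mine, not something the post prescribes):

```typescript
// An experiment definition with room for exactly one variable and one
// altered value.
interface Experiment {
  page: string;      // the page under test
  variable: string;  // the single thing being changed
  control: string;   // the current value (version A)
  variant: string;   // the one altered value (version B)
  startDate: string; // ISO dates; more on the date range below
  endDate: string;
}

const ctaColorTest: Experiment = {
  page: "/contact",
  variable: "cta-button-color",
  control: "yellow",
  variant: "green",
  startDate: "2016-03-01",
  endDate: "2016-03-28",
};
```

The startDate and endDate fields foreshadow the next tip.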

 

  • Choose a date range.

Set a specific date range as your testing period and then stick to it: don’t stop gathering data until it’s over. Why? Because you need standardization. Gathering information over a four-week period will yield drastically different results than gathering it over eight years, so comparing data from tests that ran for different lengths of time can get really wonky really quickly. Consider setting up a testing calendar so that you can stay consistent and productive.

Especially if your website has low traffic (and therefore small amounts of data to analyze), you should pay attention to when and for how long you run your tests. To compensate for low traffic, you can simply run your tests for longer so that you have the opportunity to gather more data points. But also remember that, through the magic of statistics, small amounts of data can still tell you quite a bit.
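If code enforces the window, the commitment is easier to keep. A small, hypothetical guard along the lines of the experiment shape sketched above:

```typescript
// Only collect data while the committed window is open: no peeking early,
// no letting the test quietly run long. Names are hypothetical.
function withinTestWindow(
  range: { startDate: string; endDate: string },
  now: Date = new Date(),
): boolean {
  return now >= new Date(range.startDate) && now <= new Date(range.endDate);
}

if (withinTestWindow({ startDate: "2016-03-01", endDate: "2016-03-28" })) {
  // record the visit; otherwise, ignore it
}
```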

  • Gather data consistently.

 

The way you gather and analyze data also needs to be consistent. Don’t redefine “conversion” partway through a test and don’t switch analytics programs at random times. Be intentional with how you handle your data. It’s the most precious thing you have!

 

The process of data collection takes forethought. If you skip the “Predictions” step of the scientific method, your A/B test might dump piles of data into your lap that are mystifying and hard to correlate to anything you set out to determine. Think about your goals to discern what kind of data you want to see before you think about how to collect it. Analyzing data is where the real value of A/B testing comes from, but you’ll never reach that stage if you aren’t giving yourself the right information to analyze. No amount of trained bonobos can change that.
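To make that intentionality concrete, you can pin down what counts as a conversion in exactly one place before the test begins, so nobody can quietly move the goalposts mid-test. Another hypothetical sketch; the post doesn’t name an analytics tool:

```typescript
// Log every event the same way, and define "conversion" once, up front.
type EventName = "page_view" | "cta_click" | "form_submit";

// The single, fixed definition of a conversion for this test.
const CONVERSION_EVENT: EventName = "form_submit";

interface TrackedEvent {
  name: EventName;
  version: "A" | "B";
  timestamp: string;
}

const eventLog: TrackedEvent[] = [];

function track(name: EventName, version: "A" | "B"): void {
  eventLog.push({ name, version, timestamp: new Date().toISOString() });
}

function conversions(version: "A" | "B"): number {
  return eventLog.filter(
    (event) => event.name === CONVERSION_EVENT && event.version === version,
  ).length;
}
```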

 

________________________________________

 

Small business A/B testers often look less like scientists and more like witch doctors divining fortunes from runes (not to knock rune-reading). And that’s okay. As long as you’re approaching things with a data-driven mindset, you’re headed in the right direction. And with just a little bit of focus, you can put a little more science into your day-to-day business.

 

For A/B testing to look and function as much like science as possible, remember to keep these things in mind:

 

  • Choose one variable to fiddle with.
  • Choose a date range and stick to it.
  • Gather data in a consistent and manageable manner.

 

To put these three tips in perspective, here’s a list of mistakes that some really smart folks have made when A/B testing. Many of these mistakes could have been avoided if they had stuck a little more closely to the tenets of scientific testing.

 

To make the process of A/B testing worthwhile, you should at very least heed the wisdom that the scientific method offers. Without its guidance, you’re more or less just throwing spaghetti at a wall to see what sticks. And that’s a waste of time and money.

 

So buckle down, grab a clipboard, and toss a little science into your web design. You’ll be glad you did.

 

 

 

Special note of thanks: In the process of writing this, I pestered a great many people for their know-how, including our WTC experts: Curtis, our web designer (the guy who designs websites); Tom, our web developer (the guy who builds websites); and Trevor, our Project Coordinator (the guy who wears many hats in our office).

 

 

*I have no idea how many licks it really takes to get to the center of a Tootsie Pop. Tootsie Roll Industries has this page on their website addressing the subject, which leads me to believe that I was way, way off.
