What Is Eye Tracking?
Eye tracking measures how a subject views a book page, a store display, an advertisement or other visual stimuli, using sophisticated tools that follow eye movement along the viewer’s scan path. These tools reveal which design elements capture visitors’ attention and which don’t.
Eye tracking is used in virtually every kind of marketing – TV ads, billboards, product packaging and websites – to determine what works and what doesn’t with consumers.
What Does a Visitor See on Your Site?
Each visitor scans the layout of a site page differently based on individual perception, interest, need, age, education level, computer monitor, browser settings and other variables that can be tracked in empirical eye-tracking studies.
The results of numerous eye-tracking studies have been quantified, enabling website designers and owners to optimize site pages for maximum impact and “stickiness.”
Single-Variable and Multivariate Testing
Single-variable testing involves changing one site element and measuring the impact on, for instance, conversion rate. Multivariate testing employs a series of simple A/B comparisons conducted simultaneously or sequentially, depending on what’s being tested.
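To make the single-variable case concrete, here’s a minimal Python sketch of one A/B comparison evaluated with a two-proportion z-test. All of the visit and conversion counts are hypothetical.

```python
# A minimal sketch: compare conversion rates of variants A and B with a
# two-proportion z-test. The counts below are hypothetical.
from math import sqrt, erf

def ab_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z-score and two-sided p-value for the difference
    between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)        # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf; two-sided p-value
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = ab_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")  # p below 0.05 suggests a real difference
```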
Applying statistical analysis to eye-tracking data gathered across broad-spectrum demographics yields hard numbers: how many times visitors look at each element on a page, and for how long. That’s something you want to know. What captures the attention of site visitors? What is ignored?
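As a rough illustration, the sketch below reduces hypothetical fixation records to those two numbers per page element: how often it was viewed and for how long. The element names and durations are invented.

```python
# A minimal sketch: aggregate eye-tracking fixations into per-element
# view counts and total gaze durations. All records are hypothetical.
from collections import defaultdict

fixations = [  # (page element, fixation duration in ms)
    ("hero_image", 420), ("headline", 310), ("buy_button", 180),
    ("headline", 250), ("hero_image", 500), ("buy_button", 90),
]

counts = defaultdict(int)
durations = defaultdict(int)
for element, ms in fixations:
    counts[element] += 1
    durations[element] += ms

# Rank elements by total gaze time; elements that never appear in the
# log (a sidebar ad, say) were simply ignored by visitors.
for element in sorted(durations, key=durations.get, reverse=True):
    print(f"{element:12s} views={counts[element]}  total={durations[element]} ms")
```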
Single-variable testing is the simplest to initiate and track. However, it’s time-consuming and may lead to unsubstantiated conclusions. Multivariate testing is a more efficient means of determining which site appearances and features deliver optimum results, i.e. the highest conversion rate.
However, multivariate testing is more complex than changing a single variable and waiting to gather the A/B test results; it could take months to optimize a site for conversion. Further, single-variable testing often requires the tester to make certain assumptions that may or may not be true.
For example, suppose a change in typeface coincides with a boost in conversion rate. Is it logical to assume the change in font style is responsible for the improvement? No. In fact, this fallacy is called “post hoc ergo propter hoc” in the world of statistical analysis. Roughly translated, it means “after this, therefore because of this.”
Simply because something occurs (an improvement in conversion rate, for example) after a single-variable change has been made (the change in font) does not mean that the improvement in conversion rate is due to the font change. The improvement could be based on another factor entirely.
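A quick numerical sketch shows why a concurrent control group guards against this trap: an external factor (a seasonal surge, say) lifts conversions in both arms of an A/B test and cancels out of the comparison, while a naive before-and-after reading falsely credits the font. All of the rates below are invented.

```python
# A minimal sketch, with made-up rates, of the post-hoc trap.
baseline = 0.050       # conversion rate before the font change
font_effect = 0.000    # assume the new font actually does nothing
seasonal_lift = 0.010  # external factor arriving at the same time

# Naive before/after: the seasonal lift is falsely credited to the font.
after = baseline + font_effect + seasonal_lift
print(f"before/after lift: {after - baseline:+.3f}")        # +0.010

# Concurrent A/B: both arms see the lift, so it cancels out.
control = baseline + seasonal_lift                  # old font, same period
treatment = baseline + font_effect + seasonal_lift  # new font, same period
print(f"A/B lift:          {treatment - control:+.3f}")     # +0.000
```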
Planning Your Test Model
“If you don’t know where you’re going, any road will take you there.”
If you blindly (or wildly) change design elements without a thought to site improvements, all you’ve done is collect a lot of data. To determine which changes to a site improve conversion rates, it’s important to first define what you’re looking for – your test metric. What site element or elements will be compared?
Next, in order to develop useful data, you must determine how you’ll measure and compare functionality. What methodology or “conventions” will you employ to determine a reliable outcome?
And finally, you must be able to develop a strategy that optimizes site success, however you define that success. Here’s an example.
Let’s say you want to determine which checkout software is better for your bottom line.
Before you can conduct your test, you must first create a test metric – a measurement that defines the term “better” in your query: which checkout software is better?
You might decide the test metric is simply the number of visitors who convert. That’s easy to measure, but it may not provide the complete picture. Perhaps a more useful measurement of which checkout software is better is the dollar amount each visitor spends, or the number of repeat buyers you see. An increase in the number of page views, the number of unique visitors or a jump in bandwidth (indicating an increase in downloads from your site) – all of these are reasonable test metrics, depending on your mission.
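As a sketch, here’s how a few of those candidate metrics might be computed from a transaction log. The record layout and figures are hypothetical.

```python
# A minimal sketch: derive several candidate test metrics from one
# hypothetical transaction log.
orders = [  # (visitor id, order total in dollars)
    ("v1", 25.00), ("v2", 40.00), ("v1", 15.00), ("v3", 60.00),
]
visitors = 200  # total visitors exposed to this variant

buyers = {vid for vid, _ in orders}
revenue = sum(amount for _, amount in orders)
order_counts = {vid: sum(1 for v, _ in orders if v == vid) for vid in buyers}
repeat_buyers = [vid for vid, n in order_counts.items() if n > 1]

print(f"conversion rate:     {len(buyers) / visitors:.1%}")  # buyers / visitors
print(f"revenue per visitor: ${revenue / visitors:.2f}")
print(f"repeat buyers:       {len(repeat_buyers)}")
```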
This leads to the next step in developing accurate statistical analyses: how will comparisons between the A/B elements be measured or quantified? What test “conventions” or methods will be employed? Will you count all site visitors in the study – even those who bounce – or will you limit the test pool to those who actually put something in their cart? Or those who reach the checkout but abandon the shopping cart? Or those who complete a transaction? Determining the methodology of your single-variable or multivariate testing prevents jumping to unsubstantiated conclusions.
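The sketch below shows how much the chosen convention matters: the same three hypothetical sessions yield very different conversion rates depending on who is counted in the test pool.

```python
# A minimal sketch: the same sessions under two counting conventions.
sessions = [  # all fields hypothetical
    {"pages": 1, "added_to_cart": False, "purchased": False},  # bounce
    {"pages": 4, "added_to_cart": True,  "purchased": False},  # abandoned cart
    {"pages": 6, "added_to_cart": True,  "purchased": True},
]

pools = {
    "all visitors": sessions,
    "cart adders":  [s for s in sessions if s["added_to_cart"]],
}

for name, pool in pools.items():
    rate = sum(s["purchased"] for s in pool) / len(pool)
    print(f"{name}: conversion = {rate:.0%}")  # 33% vs 50% from the same data
```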
And finally, what steps can be taken based on the test results you develop? If you can’t answer this last question, why are you going to all the trouble to conduct the test and collate the data? If you get result Y, what can you do with that information versus result Z? This is where statistical analysis is turned into a practical, organized strategy for improving conversion ratios.
Once the test metric(s) and conventions are established, you run an A/B comparison test using the two different checkout models.
Checkout A requires two clicks to complete a transaction. Checkout B requires six clicks to complete the same transaction. Your test results reveal that the more complicated checkout model leads to a higher percentage of shopping cart abandonments. So can you assume that checkout Software A is better than Software B?
If your test metric was simple usability – the fewest clicks to complete a purchase – Software A is the clear winner. But what if your test metric was to determine which checkout software led to the highest “per visitor” purchase amounts, and the results reveal that checkout Software B delivers fewer purchases, but purchases of higher value? In that case, Software B would be the better choice. That’s why it’s essential to determine each test’s metrics and conventions.
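To put hypothetical numbers on it, here’s a worked comparison in which each metric crowns a different winner.

```python
# A minimal sketch: two metrics, two different "better" checkouts.
# All figures are hypothetical.
checkouts = {
    "A (2 clicks)": {"visitors": 1000, "orders": 100, "revenue": 2000.0},
    "B (6 clicks)": {"visitors": 1000, "orders": 60,  "revenue": 2700.0},
}

for name, c in checkouts.items():
    completion = c["orders"] / c["visitors"]    # usability metric
    per_visitor = c["revenue"] / c["visitors"]  # revenue metric
    print(f"{name}: completion {completion:.0%}, revenue/visitor ${per_visitor:.2f}")

# A wins on completion rate (10% vs 6%); B wins on revenue per visitor
# ($2.70 vs $2.00). Which software is "better" depends on the metric.
```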
Measurement Tools
There are many software packages to help in gathering test data. One, called Crazy Egg, provides different views of site activity – an overlay view, a list summary and even a heat map showing what’s hot and what’s not on your site. Easy and effective analysis.
Another popular conversion rate analysis package is ClickDensity, which provides real-time visitor data to help improve everything from content architecture to link placements.
ClickTale tracks every movement visitors make as they move through your site. This data is then translated into animated graphics to help you understand visitor behavior from the time they arrive until they leave.
Finally, consider using Google Analytics – the simplest statistical analysis tool available. And it’s free. Google Analytics provides snapshot views of your site’s activity, allowing you to perform tests and analyze data in seconds instead of spending hours poring through report after report.
The point is this: improving site conversion rates requires an understanding of eye tracking and statistical analysis to produce a useful optimization strategy. The hit-or-miss approach is simply too time-consuming. So, if statistical analysis makes you light-headed, hire a professional who can design and validate test metrics and translate those findings into actionable strategies.
That’s how you improve site performance systematically and efficiently.