
First-Click Testing: Freedom From New-Feature Fear

Laura Elsner • January 31, 2017

Striking a balance between intuitive functionality and differentiated design is a considerable challenge in the world of eCommerce. Ideas for innovative site features often wither on the vine, struck down by the fears and risks associated with unconventional website behavior. After all, why invest time and money into ideas that are not guaranteed to improve the site? These justified objections meet their match when we use first-click testing.

First-click testing is aptly named: it examines what test participants click on first. Requiring significantly less investment than pushing a risky change live and gathering results, first-click testing merely requires low-fidelity wireframes and pre-determined tasks that participants are asked to complete.

Requiring significantly less investment than pushing a risky change live and gathering results, first-click testing merely requires low-fidelity wireframes and pre-determined tasks that participants are asked to complete.

When testing the efficacy of new features, first-click testing reigns supreme. Did you know that users who click down the right path on their first click will complete their desired task 87% of the time? Compared to a 46% success rate for those who click incorrectly, the first click can be critical when it comes to conversion – after all, conversion is the reason why we continue to innovate the eCommerce experience.
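To make the arithmetic behind those statistics concrete, here is a minimal sketch of how you might compute the two success rates from your own logged sessions. The session data below is hypothetical, purely for illustration:

```python
# Hypothetical logged sessions: (first_click_correct, task_completed) pairs.
sessions = [
    (True, True), (True, True), (True, False),      # correct first click
    (False, True), (False, False), (False, False),  # incorrect first click
]

def completion_rate(sessions, first_click_correct):
    """Share of sessions that completed the task, split by first-click outcome."""
    outcomes = [completed for correct, completed in sessions
                if correct == first_click_correct]
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

print(f"Completed after a correct first click:    {completion_rate(sessions, True):.0%}")
print(f"Completed after an incorrect first click: {completion_rate(sessions, False):.0%}")
```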

These tests can be facilitated in person or online through a third party. Most online tools assist in recruitment and data validation, and helpful visualizers will even graph and heat-map the data for you.
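For a rough idea of what those visualizers do under the hood, here is a small sketch that bins raw first-click coordinates into a grid for a heat map. The viewport size, grid resolution, and click data are all assumptions for illustration:

```python
from collections import Counter

WIDTH, HEIGHT, GRID = 375, 667, 3            # a typical mobile viewport, 3x3 cells
clicks = [(50, 120), (60, 130), (300, 600)]  # hypothetical (x, y) first clicks

def to_cell(x, y):
    """Map a pixel coordinate to a (row, col) heat map cell."""
    return (min(y * GRID // HEIGHT, GRID - 1), min(x * GRID // WIDTH, GRID - 1))

heat = Counter(to_cell(x, y) for x, y in clicks)
for row in range(GRID):
    print([heat.get((row, col), 0) for col in range(GRID)])
```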

So, here are five simple steps to guide you through a sample first-click test procedure and get you on your way to creatively driving conversion.


Step 1: What’s the Point? Set Your Goals!

What exactly are we trying to validate? Ensure that your focus throughout the test never wavers from this objective, and your data and insights will stay aligned with validating it.

In this example, our goal is to understand whether including icons in mobile navigation will positively influence the consumer experience and simplify site navigation.

[Image: New Mobile Site Design – Visual Icon Menu]

Step 2: Build the Test

Once the subject of the first-click test has been identified, we need to build out a prototype. When creating the test, the goal is to gather high-level user feedback that allows us to identify efficiencies and red flags.

As to cost concerns, the visual fidelity of the experiment is truly up to you. Testing areas such as content hierarchy or navigation requires only a quick sketch or wireframe.

More granular variables such as font legibility or a new primary CTA color will require a higher fidelity design. In any case, this process costs considerably less time and money than rolling out an untested change live.

Back to our sample test: we’ve prepared two versions, a “Traditional List Menu” and a “Visual Icon Menu”. Our goal is to compare first-click behavior by running the same test script while showing different users one menu or the other.

[Image: Left: Traditional List Menu | Right: New Visual Icon Menu]

Step 3: Draft a Script and Test the Test

Traditionally, first-click test scripts have three distinct sections:

Pre-Test Questions

These warm-up questions are where we try to put users in a “helpful” frame of mind. Questions around broad topics such as shopping habits, favorite stores, and personal devices work well to condition test-takers for a first-click test.

Task-Based Questions

These are the bread and butter of first-click testing. Present the user with a scenario, ask them to complete a task, and record where they click first.

Simple, right? Think again.

Make sure that the tasks your users are asked to complete align with the goals of the test. For example, it does no good to ask users to search for new running shoes if the point of the test is to get visitors to interact with the store’s blog. Always stay focused!

Post-Test Questions

Almost as critical as the task-based questions, post-test questions allow you to ask the user about the test itself. This is a phenomenal opportunity to collect qualitative feedback that test-takers have not yet had the opportunity to provide.

After drafting a number of scenarios and questions, grab a colleague for a dry run to work out any kinks and deliver objective feedback. Depending on the testing variable, tests can run for 5 minutes or 45 minutes: just remember that your users are humans with limited attention spans.
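If it helps to keep the script organized, here is one possible way to capture the three sections in a simple Python structure. The exact shape and the placeholder questions are assumptions, not a requirement of any particular tool:

```python
# One way to organize a first-click test script into the three sections above.
script = {
    "pre_test": [
        "How often do you shop for clothing online?",
        "Which device do you usually shop on?",
    ],
    "tasks": [
        # Each task pairs a scenario with the screen variant shown to the user.
        {"scenario": "You're looking for a nice wool cardigan. Where would you tap first?",
         "screen": "visual_icon_menu"},
    ],
    "post_test": [
        "Was anything about the menu confusing?",
        "Did you notice the icons? Did they help?",
    ],
}

for section in ("pre_test", "tasks", "post_test"):
    print(f"{section}: {len(script[section])} item(s)")
```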

In order to test the new “Visual Icon Menu”, we need to draw up some questions that show us how, where, and why visitors interact with the menu. Let’s see if they can find a specific item within our categories.

 

[Image: First-Click Task #1 – Notice the clear direction to find a “nice wool cardigan”. Specificity helps!]

Step 4: Test Day!

With a clear goal in sight, prototypes developed, and questions ironed out, all that’s left is to launch the test. Again, this can be done either in person or online through a third party.

We highly recommend recording the sessions so that you can refer to the process and results in the future.

For our example test, we used the third-party tool Chalkmark. Using a constant script for both versions (Traditional and Visual Icons), we can see how the icons changed visitors’ first-click behavior. You can check them out via the links below:

Version 1 – List Menu

Version 2 – Icon Menu


Step 5: Collect Results, Listen to Feedback, and Draw Conclusions 

Once the test is complete, analyze the results and feedback. What was consistent between the two tests? What differed? What surprised you? The insights and questions you can draw post-test are almost endless.

Here are some of our favorites for generating useful, high-level conclusions (a sketch of how the first two might be quantified follows the list):

Were users able to successfully complete the tasks?

Did the users click right away, or did they have to stop and think before completing a task?

Did users indicate that they liked the feature?
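The first two questions above lend themselves to simple metrics. Here is a minimal sketch, assuming each recorded response notes whether the first click was on-target and how long the participant hesitated before clicking; all values are hypothetical:

```python
from statistics import median

responses = [
    {"on_target": True, "seconds_to_click": 2.1},
    {"on_target": True, "seconds_to_click": 9.8},
    {"on_target": False, "seconds_to_click": 4.5},
]

# Question 1: were users able to complete the tasks?
success_rate = sum(r["on_target"] for r in responses) / len(responses)
# Question 2: did they click right away, or stop and think?
hesitation = median(r["seconds_to_click"] for r in responses)

print(f"First-click success rate:   {success_rate:.0%}")
print(f"Median time to first click: {hesitation:.1f}s")
```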

Always remember to identify outliers and try to understand what caused them before removing those data points from the pack. Putting these conclusions down in a report can further help define exactly how and why users interacted with a new feature, and how to implement it successfully moving forward.
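One simple, well-known way to flag outliers is the 1.5 × IQR rule applied to time-to-first-click. A sketch with made-up data follows; the threshold is a judgment call, and flagged points deserve investigation rather than silent deletion:

```python
from statistics import quantiles

times = [2.1, 2.4, 3.0, 2.8, 2.6, 14.9]  # hypothetical seconds to first click

q1, _, q3 = quantiles(times, n=4)        # quartile cut points
iqr = q3 - q1
outliers = [t for t in times if t < q1 - 1.5 * iqr or t > q3 + 1.5 * iqr]

print(f"Flag for review (do not silently delete): {outliers}")
```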


Implementing new features and tools can scare the dickens out of eCommerce site managers. Instead of wasting time and money blindly rolling out unproven functions and features, first-click testing can provide fantastic insight into visitor behavior – and just how successful (or not) these new features might be.

