Qualitative analysis is very important, especially in a world where we Web Analysts lean perhaps a bit too much on clickstream data as the source of almost all reporting and analysis.
In my post on the importance of Qualitative data I discussed why understanding the Why is critical (that post contains foundational material; if you have not read it yet, please click here before reading this one).
When you think Usability, perhaps Lab Usability Testing comes to mind. Or maybe Follow Me Homes or Eye Tracking or other fancier methodologies. I am sure Surveys don’t. But surveys, done right, can be a powerful method of gleaning key learnings and a great complement to the more traditional Usability methodologies.
Yet while surveys are great at collecting qualitative data, their impact on decision making is usually sub-optimal. My hope in this post is to share lessons learned over the course of a few years, lessons that might help you in your quest to move from a survey monkey to a survey homo sapiens (as most of you know, Survey Monkey is a product and this reference is just tongue in cheek).
My top five recommendations for surveying success:
- Partner with an expert: Surveys are extremely easy to do (just buy a $19.95 per month license) and even easier to get wrong. Surveying is not an art but a science; well, maybe 20% art and 80% science. If possible, partner with an outside company that can bring a full complement of expertise to the table. This has two great benefits:
- You have to do less work; can’t beat that. Seriously though: you don’t have to create questions, you don’t have to master complex computations (often advanced statistics), and you can benefit from best practices. All you have to bring to the table is your knowledge of your business.
- You can stay focused on value-added analysis rather than on distracting and time-consuming tactical work.
- Benchmark against industry: Perhaps the single biggest reason that survey recommendations are not actioned by senior decision makers is that they lack context. We run our survey, we measure questions on a scale of 10 or 100 and we provide a rating. Question: Do you like our website? Answer: 7 out of 10. Now is that great? Does it suck?
External benchmarks are great because they give context; they shame us (when we are below) or give us joy and pride (when we beat the benchmark), but most of all they drive action.
The ACSI, which ForeSee Results (see Disclaimers & Disclosures) has licensed from the University of Michigan, is one such benchmark. The American Customer Satisfaction Index measures half of the US economy, and its various indices form an awesome benchmark (available for free at www.theacsi.org; for example, check out this one to see how your industry is doing). You can compare yourself on aggregate Customer Satisfaction but also on future predictive behavior (for example: likelihood to recommend the website).
Another company that is developing its own methodology and benchmarks by industry is iperceptions. They have developed benchmarks for Hospitality, Auto, Media, Financial Services, Retail and B2B. iperceptions also has a great built-in slice-and-dice framework for “playing” with the data to find insights. I find this latter feature quite innovative.
- The real golden insights are in raw VOC: Any good survey consists mostly of questions that respondents rate on a scale, plus a question or two that is open ended. This leads to a proportional amount of attention being paid during analysis to computing averages, medians and totals. But the greatest nuggets of insight are in the open-ended questions, because there it is the Voice of the Customer speaking directly to you (not cookies and shopper_ids, but customers).
Questions such as: What task were you not able to complete today on our website? If you came to purchase but did not, why not?
Use the quantitative analysis to find pockets of “customer discontent”, but read the open-ended responses to add color to the numbers. Remember, your Directors and VPs can argue with numbers and brush them aside, but few can ignore the actual words of your customers. Deploy this weapon.
- Target survey participants carefully: Randomness is perhaps the most commonly used tactic for inviting customers to take a survey. I, controversially, believe that this is sub-optimal in many cases. When trying to understand what people think, I have come to believe that surveying people who have had a chance to engage with the site is optimal. Engagement shows a commitment on their part, and you will get better quality answers.
Look at your clickstream data, find your Average Page Views Per Visitor, and then set a trigger, a Loyalty Factor if you will, just under that number. The aim is to capture most survey respondents before an “average” visitor exits. You’ll get some who stay longer; that’s a bonus.
You also don’t have to spam everyone with a survey. Usually around 1,300 survey responses are enough for statistical significance. So for most high-traffic websites, if the survey is shown to around 3 to 5% of visitors who meet your criteria and you get a 5% response rate, then you are cooking. You can tweak the percentages to get to 1,300, but the point is that you will only “bother” a very small percentage of your site visitors. Trust me, if you give them a chance to talk, they will talk. :~}
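The arithmetic above is simple enough to sketch. Here is a minimal illustration, with hypothetical traffic numbers, of how the Loyalty Factor trigger and the invite rate needed to reach roughly 1,300 responses might be computed:

```python
import math

def invite_rate_needed(qualifying_visitors, response_rate, target_responses=1300):
    """Fraction of qualifying visitors to invite so that, at the given
    response rate, roughly target_responses completed surveys come back."""
    rate = target_responses / (qualifying_visitors * response_rate)
    return min(rate, 1.0)  # never invite more than everyone

# Hypothetical numbers: 4.2 average page views per visitor (from your
# clickstream tool), 1,000,000 qualifying visitors, 5% of invitees respond.
avg_page_views = 4.2
loyalty_trigger = max(1, math.floor(avg_page_views) - 1)  # fire just under the average
rate = invite_rate_needed(1_000_000, 0.05)

print(f"Show the survey on page view {loyalty_trigger}")
print(f"Invite {rate:.1%} of qualifying visitors")  # 2.6% here
```

The exact trigger and percentages are assumptions for the sake of the example; plug in your own traffic and observed response rate.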
- Integrate with clickstream data: You can turbocharge your survey analysis if you can figure out how to tie your survey responses to your clickstream data. If your survey technology vendor is a good one, they will accept external variables that you can pass into the survey. Simply pass the 100% anonymous shopper_id and session_id (or the equivalent values from your website platform). These two values are also captured in your clickstream data.
When you do find “interesting” groups of responses in your survey data, you can take those anonymous shopper_ids and session_ids back into your clickstream tool and see what the referring URLs of these unhappy people were, where they clicked, what pages they saw, where they exited, and so on. (I can see that you, dear blog reader, are hyperventilating at this point just thinking of the awesome power of such analysis. Just imagine the insights that will fall magically into your lap. Huge salary bonus, here I come!)
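To make the join concrete, here is a minimal sketch with entirely hypothetical records and column names, showing low-satisfaction survey responses being matched to their clickstream sessions via the shared (shopper_id, session_id) pair:

```python
# Hypothetical survey export and clickstream sessions, keyed by the same
# anonymous identifiers that were passed into the survey.
surveys = [
    {"shopper_id": "a1", "session_id": "s9", "satisfaction": 3,
     "comment": "Could not find the return policy"},
    {"shopper_id": "b2", "session_id": "s4", "satisfaction": 9,
     "comment": "Easy checkout"},
]
sessions = {
    ("a1", "s9"): {"referrer": "search engine", "exit": "/search",
                   "pages": ["/home", "/help", "/search"]},
    ("b2", "s4"): {"referrer": "newsletter", "exit": "/thanks",
                   "pages": ["/home", "/product", "/cart", "/thanks"]},
}

def clickstream_for_unhappy(surveys, sessions, max_score=5):
    """Join low-satisfaction responses to their clickstream sessions."""
    joined = []
    for r in surveys:
        if r["satisfaction"] <= max_score:
            key = (r["shopper_id"], r["session_id"])
            if key in sessions:
                joined.append({**r, **sessions[key]})
    return joined

for row in clickstream_for_unhappy(surveys, sessions):
    print(row["comment"], "| came from:", row["referrer"], "| exited at:", row["exit"])
```

In practice this join would happen inside your clickstream tool or a data warehouse rather than in a script, but the principle, two datasets sharing the same anonymous keys, is the same.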
This was supposed to be a top five, but here is one bonus recommendation…
- Continuous, not discrete: Most surveys are done as a pulse read: we have a thought in mind, a question we want answered, a project we are doing, and we run a survey. That is great. But surveys can be hugely powerful as a continuous and ongoing measurement system. Having surveys continuously deployed on your website means that you will always have your finger on the pulse of your visitors. More importantly, it means that you can account for seasonality, external influences (like press releases or company events), marketing campaigns, God’s power, etc. All things that can mean a discrete measure at a single point in time might not be the cleanest read.
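One simple way to get that continuous read is a trailing average over daily satisfaction scores, so day-to-day noise is smoothed out and seasonal swings or event-driven dips stand out. A minimal sketch, with made-up daily numbers:

```python
from collections import deque

def rolling_mean(scores, window=7):
    """Trailing mean of daily satisfaction scores: a continuous read that
    smooths out daily noise so sustained shifts become visible."""
    buf, out = deque(maxlen=window), []
    for s in scores:
        buf.append(s)                    # deque drops the oldest score itself
        out.append(sum(buf) / len(buf))  # mean over the current window
    return out

# Hypothetical daily averages: steady 7s with a dip during, say, a site change.
daily = [7, 7, 7, 7, 4, 4, 7, 7, 7]
print(rolling_mean(daily, window=3))
# [7.0, 7.0, 7.0, 7.0, 6.0, 5.0, 5.0, 6.0, 7.0]
```

The dip shows up, and recovers, in the trend line; a one-off survey run on the wrong week would have reported only the 4s or only the 7s.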
Advanced surveying technologies are now quite affordable, especially when you consider how much revenue your website generates, or how many customers are being upset by a sub-optimal experience. Other than clickstream data, this is perhaps the cheapest continuous method; none of the other Usability methodologies can come close (either in cost or in the number of actual customers you can hear from).
Surveys are not the be-all and end-all of qualitative data; like all methods, they have their pros and cons. But they are one of the best ways to drag the customer’s voice to the table to help us make better decisions.
If you don’t do them at all, then all you are measuring on your website is the what and not the why, and actionable insights come from understanding the why.
Agree? Disagree? Am I going overboard with this? Please share your feedback via comments.
A couple of other related posts you might find interesting: