Setting the Stage to Use ForeSee Model Results
The initial results of your ForeSee research are in and you have some great new information! You finally know how your website (or mobile site, mobile app, store, contact center, or other customer touch point) is doing overall, and you know which elements should be a top priority for improvement. You long to use this information to increase your audience’s satisfaction and, ultimately, improve the bottom line for your company.
If you are a ForeSee client, perhaps you’ve had an experience like this recently, or perhaps it was a while ago. Soon after model results are revealed, the next logical questions usually focus on how to work toward improvement: How do I use this new knowledge to dig deeper into the data, plan and make improvements to the experience, and determine whether the improvements make a difference to the audience? Today’s post sets the stage for the “dig deeper” question by looking at the key pieces of a ForeSee model. In two forthcoming posts, I’ll address the “dig deeper” and “making a difference” questions.
What Your Model Results Contain
ForeSee models contain Elements (drivers of satisfaction), a Satisfaction Index (the American Customer Satisfaction Index, or ACSI), and Future Behavior questions (key customer outcomes). The underlying theoretical framework supports the cause-and-effect relationships between these moving parts: experiences predict satisfaction, which, in turn, predicts future behavior. To make improvements, we focus on the left side of the model, the elements’ scores and impacts.
Element scores are performance metrics: they tell you how you’re doing on various aspects of the experience. For a website, elements might measure concepts like Navigation, Look and Feel, and Site Performance. Each Element score is an optimally weighted average of responses to two to four individual 1-10 scale survey questions, combined into a single 0-100 score.
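To make the mechanics concrete, here is a minimal sketch of how a weighted element score could be computed. The rescaling formula, the question weights, and the sample responses are all illustrative assumptions, not ForeSee’s actual methodology.

```python
# Hypothetical sketch: an element score as a weighted average of 1-10
# survey responses rescaled onto a 0-100 scale. Weights and responses
# below are made up for illustration.

def rescale(response):
    """Map a 1-10 rating onto the 0-100 scale (1 -> 0, 10 -> 100)."""
    return (response - 1) / 9 * 100

def element_score(responses, weights):
    """Weighted average of rescaled responses; weights sum to 1."""
    return sum(w * rescale(r) for r, w in zip(responses, weights))

# Three Navigation questions answered 8, 7, and 9, with made-up weights:
score = element_score([8, 7, 9], [0.4, 0.35, 0.25])
print(round(score, 1))  # → 76.7
```

Because each element pools several correlated questions, a single noisy answer moves the combined 0-100 index far less than it would move any one question’s average.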
Element impacts are numeric representations of the cause-and-effect relationship between an element and satisfaction. An impact is the increase in customer satisfaction we would expect to see from a 5-point increase in that element’s score. For example, if the impact for Navigation is 1.5, then a 5-point increase in the Navigation score should lead to a 1.5-point increase in customer satisfaction, all else being equal.
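The arithmetic behind that example is simple enough to write down. The impact value and the planned score gain below are hypothetical numbers, not real model output:

```python
# Illustrative impact arithmetic only; 1.5 and the 5-point gain are
# hypothetical, and the linear scaling is an assumption for small gains.
navigation_impact = 1.5   # expected satisfaction lift per 5-point Navigation gain
score_gain = 5.0          # planned improvement in the Navigation score

expected_satisfaction_lift = (score_gain / 5) * navigation_impact
print(expected_satisfaction_lift)  # → 1.5
```

Read this way, impacts let you compare elements on a common footing: an element with impact 1.5 pays back three times as much satisfaction per 5-point gain as one with impact 0.5.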
Why do we measure the elements the way we do?
We use a scientific approach to constructing your elements so that you can trust your element scores and impacts to be valid, sensitive, and reliable metrics. By asking multiple 1-10 scaled questions, we can create element scores (indexes) that are highly reliable and sensitive. Reliability is the degree to which a measurement is free from random error and can therefore yield consistent results. Sensitivity is a metric’s power to detect differences, whether change over time or differences between segments.
ForeSee’s element question wording is constructed with several simultaneous goals in mind to ensure the quality of our clients’ model results.
- Measurement validity (are we actually measuring the concept we intend to measure, free from systematic measurement error?) is critical. Initial qualitative research, periodic element wording tests, and triangulation of information sources are used to ensure the validity of our measures.
- Asking multiple questions, as mentioned above, is only part of the story behind reliability and sensitivity. Question wording matters too: the questions within an element need to be highly correlated with one another, yet as distinct as possible from the questions in other elements, for model scores and impacts to be reliable and sensitive.
- There are also assumptions about the data that must be met for our engine to produce scores and impacts that can be trusted. (Remember the old saying, “Garbage in, garbage out”?) Question wording plays a critical role in meeting these assumptions. One example: we make sure that no element is so closely related to satisfaction that it amounts to measuring the same thing twice, once as the predictor and once as the thing being predicted.
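That last assumption can be illustrated with a simple redundancy check. The respondent scores, the Pearson-correlation screen, and the 0.95 threshold below are all hypothetical, a sketch of the idea rather than ForeSee’s actual diagnostics:

```python
# Hypothetical redundancy check: flag an element whose ratings track
# satisfaction so closely that it nearly duplicates the outcome.
# Data and the 0.95 cutoff are made up for illustration.
from statistics import mean, stdev

def pearson(xs, ys):
    """Sample Pearson correlation between two equal-length lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

satisfaction = [62, 75, 81, 58, 90, 70]   # made-up respondent scores
element      = [60, 74, 80, 59, 91, 69]   # nearly mirrors satisfaction

r = pearson(element, satisfaction)
if r > 0.95:
    print("element may be redundant with satisfaction")
```

An element that fails a check like this adds little information as a predictor, which is exactly why question wording is tuned to keep each element distinct from the satisfaction measure itself.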
So what’s next?
As a next step, it might seem logical to take a close look at the questions within your Top Priority element. Although the average ratings of those questions may give you some hints about what to improve, the questions are not designed to help you make improvements.
If your jaw has dropped open, please close it and keep reading.
As you may have begun to figure out above, the main purpose of the element questions is simply to build good elements. The elements provide you with the reliable, sensitive big-picture information you need, including the prioritization information that can help you decide which area to focus on first for improvement.
Never fear; we have many awesome ways to help you dig deeper into your model results to go beyond the big picture, and that’s what my next post will cover.