3 ways to improve your survey responses
Our clients use Luminoso to analyze unstructured text data they receive in many forms, including surveys, reviews, and support tickets. Unsurprisingly, we see plenty of surveys, and we've gathered insights about the questions our clients ask their respondents.
Read on to learn three quick things to keep in mind, and how each can vastly improve your survey responses.
1. Ask open-ended questions
Open-ended questions ask respondents to provide feedback with no given context:
How was your experience?
Open-ended questions don't impose conditions or circumstances. For example, a hotel asking its guests an open-ended question would phrase it as "How was your stay?" rather than "What was positive about your stay?" Removing that context encourages comments about anything that comes to respondents' minds, and can surface feedback on unexpected topics and issues beyond what you anticipated.
You may have been advised against asking open-ended questions in the past, but advances in natural language processing (NLP) have made it practical to learn from unstructured text. Open-ended questions return the lengthiest responses, since they ask for the detail behind answers, and more text improves NLP analysis, making longer-form responses a perfect fit for the technology.
Another great aspect of open-ended questions? You're more likely to get the rawest, "truest" responses, because participants aren't led to conclusions by the question's phrasing.
What to watch
Because open-ended feedback arrives without predefined context, it requires real analysis to understand the true nature of the themes being discussed, and subject matter experts should be involved to facilitate that process. The lack of predefined circumstances also makes it difficult to get exact answers, especially for product- or offering-specific feedback.
2. Avoid leading questions
Leading questions ask respondents to provide feedback in a defined context:
What was positive about your experience?
Leading questions center on identifying specific themes, and usually produce direct answers to the question being asked. In the example above, the implied context is that the respondent will narrow their answer to the positive parts of their experience.
Asking respondents to answer under predefined circumstances or about specific scenarios means you don't need to work out the context in which concepts are being discussed; it's already specified.
What to watch
Leading questions can sometimes confound results, as they don’t necessarily capture holistic answers within the context of each respondent’s true experience.
3. Pair open-ended questions with ratings to uncover deeper insights
Here, present and define a rating scale, then follow with an open-ended question about the respondent’s score:
On a scale from 0 to 10, how would you rate your overall experience? In as much detail as possible, please describe why you gave us that score.
Having both open-ended responses and scores unleashes the potential to correlate valuable unstructured text feedback with quantitative information to show what topics and concerns are most important to respondents.
It's time to stop being afraid of mixed datasets. With the right NLP software, you'll not only have collected quantitative ratings, but will also understand what drives them. This is especially effective for metrics such as Net Promoter Score (NPS), Customer Effort Score (CES), and Customer Satisfaction Score (CSAT). Gaining this context allows you to surface and target problem areas, and to segment answers by concept for a deeper look at your respondents. It also ensures the fixes you're making today are the right ones to boost your numbers and, more importantly, your customers' happiness.
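To make the pairing concrete, here's a minimal sketch of the idea in plain Python (not Luminoso's actual software; the sample responses, stopword list, and word-counting approach are illustrative assumptions). It groups open-ended "why" comments by standard NPS segment so that recurring themes in detractor feedback stand out:

```python
import re
from collections import Counter

# Hypothetical paired survey data: a 0-10 rating plus the open-ended
# "why did you give us that score?" text. Entirely made up for illustration.
responses = [
    (9, "fast check-in and a clean room"),
    (10, "clean room, friendly staff"),
    (3, "slow check-in and noisy hallway"),
    (2, "noisy room, slow service"),
    (8, "friendly staff"),
]

def nps_segment(score):
    """Standard NPS buckets: 9-10 promoter, 7-8 passive, 0-6 detractor."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

def top_terms_by_segment(pairs, stopwords=frozenset({"and", "a", "the"})):
    """Count words in the open-ended text, grouped by NPS segment."""
    counts = {"promoter": Counter(), "passive": Counter(), "detractor": Counter()}
    for score, text in pairs:
        # Crude tokenization stands in for real NLP concept extraction.
        words = re.findall(r"[a-z][a-z-]*", text.lower())
        counts[nps_segment(score)].update(w for w in words if w not in stopwords)
    return counts

counts = top_terms_by_segment(responses)
print(counts["detractor"].most_common(2))
print(counts["promoter"].most_common(2))
```

Even this toy version shows the payoff: "slow" and "noisy" dominate detractor comments while "clean" dominates promoters, pointing directly at what to fix to move the score.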
What to watch
We've all taken surveys where we've left open-ended questions blank, and some customers simply won't provide text responses after giving their rating. If a large share of the text feedback is missing, analysis of a mixed dataset may not be as insightful.