Online surveys have long reigned near the top of the ad testing toolkit by providing the fastest, most efficient way to gather valuable consumer feedback throughout the creative process. But realizing these benefits has generally required that researchers forfeit the open-ended questions that allow them to truly “hear” the consumer’s voice. It has simply been too time-consuming and resource-intensive to manually process all the raw, unorganized natural language data that is generated through the traditional online survey text box.
This is now changing. The use of machine learning (ML), natural language processing (NLP) and crowdsourcing techniques enables online survey platforms to automatically transform qualitative free-text responses into rich data sets that can supplement traditional quantitative analyses and metrics.
Besides adding depth to traditional survey data, these techniques also improve data quality. They are moving the respondent experience from tiresome box-checking toward free-flowing natural conversations, at speed and at scale. This new generation of online surveys is starting to have a significant impact not only on ad testing but on the entire spectrum of product development and marketing research, from consumer behavior analysis to pricing and brand tracking.
The Online Hearing Problem
Online surveys can, in theory, be just as effective at hearing the voice of the customer as focus groups or social listening tools. But traditional survey platforms have, in practice, made this kind of listening through open-ended questions difficult. It can take days to manually clean up data and find consistent themes that express the sentiment of the respondent group with confidence.
As a result, many researchers have gravitated to using closed-ended questions, but this severely limits their respondents’ ability to authentically “talk” to them. Data quality also suffers as respondents quickly tire of checking box after box of multiple-choice questions. They begin to lose focus, and many may even resort to “straight-lining” their answers (i.e. selecting the same option question after question).
Other researchers who understand the pitfalls of closed-ended questions often try to retain the text boxes and rely on text analytics to deliver the desired insights. But this is not an ideal solution either. It ignores the problem of respondents entering meaningless or gibberish answers, and the challenge of ensuring they answer all questions, since repetitive text boxes can be just as tedious as the check boxes of long multiple-choice surveys. Too often, respondents simply skip over one or more of the text boxes altogether, dramatically reducing the depth of the resulting data.
These challenges can only be solved by making online surveys smart enough to automatically turn freely expressed thoughts from respondents into useful data for researchers, in real time. Today’s combination of NLP and ML technologies along with the respondents’ own “crowd intelligence” makes this possible, and creates a smarter, more interactive process of collecting and validating data that leads to more useful results for the researcher.
Making Surveys Better Listeners
Whereas traditional surveys simply present respondents with a text box for their unaided answers, the new approach treats that as only the first step. As the natural language responses are entered, the platform immediately deploys NLP and ML algorithms to perform a first-pass cleanup of the answers in real time. The system quickly processes the free-text answers, removes duplicates and noise, and keeps only those that are unique and meaningful. It then feeds these statements back to respondents in the form of an ad hoc dynamic mini-survey. Each respondent is asked to agree or disagree with the statements, and the process is repeated five to 10 times.
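The first-pass cleanup described above can be sketched in a few lines of Python. This is an illustrative assumption of how such a filter might work, not any vendor's actual pipeline: the gibberish heuristic, similarity cutoff, and function name are all hypothetical.

```python
# Hypothetical sketch of a first-pass cleanup of free-text survey answers:
# normalize, drop noise/gibberish, and remove near-duplicate responses so
# only unique, meaningful statements feed the follow-up mini-survey.
from difflib import SequenceMatcher

def clean_answers(raw_answers, similarity_cutoff=0.85):
    """Return unique, meaningful answers from raw free-text input."""
    kept = []
    for answer in raw_answers:
        text = " ".join(answer.split()).strip()  # normalize whitespace
        # crude noise filter: too short, or no vowels at all (keyboard mash)
        if len(text) < 4 or not any(c in "aeiou" for c in text.lower()):
            continue
        # drop near-duplicates of answers already kept
        if any(SequenceMatcher(None, text.lower(), k.lower()).ratio()
               >= similarity_cutoff for k in kept):
            continue
        kept.append(text)
    return kept

answers = ["Love the ad!", "love the ad!!", "xzqwrt", "Music was too loud", ""]
print(clean_answers(answers))  # → ['Love the ad!', 'Music was too loud']
```

A production system would replace these heuristics with trained language models, but the shape of the loop — normalize, filter, deduplicate, keep — is the same.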
This approach offers three advantages. First, because respondents enjoy the more interactive and “gamified” experience, they provide higher-quality data, even if it is only an assessment of others’ answers. Second, enlisting respondents to interactively validate and quantify data in real time results in more robust “coded” answers from the study’s natural language input than is possible with ex-post free-text analytics tools. Third, themes in answers are automatically identified and grouped by an algorithm that is trained in real time based on the specific answers being provided by survey participants. This is a major time-saver for researchers.
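The theme-grouping step in the third point can be illustrated with a deliberately simple stand-in. A real platform would train an ML model on the incoming answers; this greedy word-overlap (Jaccard) pass, with made-up function names and threshold, only shows the underlying idea of clustering similar answers as they arrive.

```python
# Illustrative sketch of grouping answers into themes by shared vocabulary.
# The threshold and greedy strategy are assumptions for demonstration only.
def jaccard(a, b):
    """Word-overlap similarity between two short texts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def group_themes(answers, threshold=0.3):
    themes = []  # each theme is a list of similar answers
    for ans in answers:
        for theme in themes:
            if jaccard(ans, theme[0]) >= threshold:
                theme.append(ans)  # join the first sufficiently similar theme
                break
        else:
            themes.append([ans])   # otherwise start a new theme
    return themes

answers = ["the music was great", "great music choice",
           "too long for my taste", "way too long"]
print(group_themes(answers))
# → [['the music was great', 'great music choice'],
#    ['too long for my taste', 'way too long']]
```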
There are many ways the resulting data can be used. One example is ad planning and testing, which can range from evaluating consumer behavior and informing creative strategy to assessing brand recall, messaging clarity, emotional response and overall effectiveness. Smarter online surveys enable researchers to better understand not only how well they delivered their message to their consumer, but also what impact it had on them, in their own words.
A large multinational video and social media platform, for instance, had struggled to understand its users’ shared values and social causes through the traditional closed-ended survey approach. The alternative of using open-ended questions would require too much time and cost to review, clean and organize the free-text data.
Moving to a smart survey platform enabled the company’s research team to give respondents an experience they said felt more like “talking to a friend.” This format, together with framing open-ended questions in a simple and straightforward way, encouraged honest and personal answers.
In a few short weeks, with data spanning several countries and multiple languages, the research team learned that “compassionate” and “diverse” were the top two values its users assign to themselves, and that human rights, the fight against homelessness and women’s rights were the top causes they supported. The company also learned from respondents that sharing on social media was a way of saying that you are thinking about your loved ones, which helps foster friendships and maintain well-being — insights that were fundamental to their business and branding initiatives.
The latest survey techniques also enable researchers to use this kind of statistically validated qualitative data in other types of analyses. Examples include net promoter score (NPS) and pricing studies, as well as in-depth product testing, where natural text data becomes categorical variables in quantitative models.
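Once free-text answers have been coded into theme labels, turning them into categorical variables for a quantitative model is a standard one-hot encoding step. The sketch below uses invented labels and a hypothetical helper purely to show the transformation.

```python
# Hedged sketch: converting coded theme labels from open-ended answers into
# one-hot categorical columns that a quantitative model (e.g. an NPS driver
# analysis or pricing regression) can consume. Labels are made up.
def one_hot(theme_labels):
    """Return one-hot rows and the ordered category list."""
    categories = sorted(set(theme_labels))
    rows = [[1 if label == c else 0 for c in categories]
            for label in theme_labels]
    return rows, categories

labels = ["price", "quality", "price", "service"]
rows, cats = one_hot(labels)
print(cats)  # → ['price', 'quality', 'service']
print(rows)  # → [[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 1]]
```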
With the new generation of smart survey platforms, these types of studies can now take days rather than months. Product and brand managers can, for instance, complete a study exploring how changes in a product’s price impact sales. They can quantify and track the value of their brand through a better understanding of the consumer trade-offs between products competing in the marketplace. In each case, consumer behavior insights are quantified and integrated into the researchers’ analyses, and the data can be fed into forward-looking models.
Automating survey research by integrating ML and NLP technologies is making lasting inroads into the traditional ad testing and market research toolkit. Combined with the crowd intelligence of survey respondents, these technologies create an opportunity to listen much more effectively to the customer voice. They also enable researchers to integrate unstructured survey data directly into the statistical models that fuel more valuable insights.
Rasto Ivanic is a Co-founder and CEO of GroupSolver®, a market research tech company. GroupSolver has built an intelligent market research platform that helps businesses answer their burning why, how, and what questions. Before GroupSolver, Ivanic was a strategy consultant with McKinsey & Company, and later he led business development at Mendel Biotechnology. During his career, he helped companies make strategic decisions on developing and managing new businesses, pursuing market opportunities, and building partnerships and collaborations. Ivanic is a trained economist with a PhD in Agricultural Economics from Purdue University, where he also received his MBA.