Whether you rely on a market research firm to conduct your surveys or you complete them in-house, you should be aware of 10 hazards we’ve identified that can render your survey useless. Avoiding these pitfalls will help make life easier for both you and your supplier.
1. Setting Fuzzy Objectives
Successful studies begin with clear and specific objectives. Don’t stop with “to know more about our customers.” That vague objective leaves too much unknown. Research pioneer Sherwood Dodge said, “No research technique is so sharp that it can answer a fuzzy problem.” If, at the beginning of the project, you can look ahead and envision how you’ll use the results, you’re on the right track.
2. Not Telling the Researchers What You Plan to Do with the Results
Simply telling researchers what questions to ask may not meet your study’s objectives. Researchers do a better job of designing a project for you if they understand what you hope to accomplish. Resist the temptation to just give a researcher a list of questions since “that will make it clear what we’re after.” Odds are this technique will lead to useless data. Benefit from the research company’s expertise; they have probably faced situations similar to yours and can plan accordingly.
3. Excluding Stakeholders from the Design Process
If, for example, a research objective is to develop a profile of customers, the people who would most directly benefit from this data should be involved in the survey process. It has happened: a survey is completed, and then the folks who should have been involved see the results and ask, "Why wasn't this asked?" "Why didn't you find out more about that?"
4. Designing the Questionnaire by Committee
By nature, different stakeholder groups have different information needs, and it is wise to gather their input. It is best, though, to leave the committee at the door when designing the survey instrument, or you could end up creating a sort of "Frankenstein's monster" questionnaire. A good research partner and design agent will home in on the questions that should be included in the survey, based on your objectives, and determine which can safely be eliminated. Although design time may increase, the result should be a concise questionnaire that meets your objectives without asking non-essential questions. Caution: keep an eye on the overall length of the survey, as longer surveys add to the cost of the project and can have a detrimental impact on response.
5. Assuming Respondents are as Interested in Your Survey as You Are
Respondents are doing you a huge favor by answering. (Think about having to pay them at their hourly rate to do it!) Remember, they’re not nearly as interested in supplying you with information as you are in receiving it. As a result, keep your survey short, to the point, and easy to complete.
6. Asking “Interesting” Questions
The "interesting" question (as in "I don't know what we'll use that data for, but I'll bet it will be interesting") contributes to lengthy questionnaires and low response rates. Common examples include standard demographic queries (age, gender, marital status, etc.) collected with no plan for how the data will be used. Frivolous questions inflate the price of a survey and depress response rates.
7. Not Following Sample Selection Instructions
This pitfall is one of the most costly, yet one of the most easily avoided. The research company should provide you with explicit instructions on how to generate the sample from your files, assuming you own the sample frame. If you are using an externally supplied sample, the research company should work with the supplier to ensure the sample is created appropriately.
Creative Alternatives to Proper Sampling Procedures Can Lead to Unusable Data
Some of these problems can be caught before fieldwork, but others aren't identified until it's too late. More than once, after a survey's results were collected, the client realized the sample had been pulled from a subset of the sampling frame. A significant, important portion of the original target was omitted, rendering the data useless. The only solution was to resample and go through the expensive exercise of collecting the data a second time.
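To make the point concrete, here is a minimal sketch of what "pulling the sample from the full frame" means in practice. The function name, the customer-ID format, and the frame itself are illustrative assumptions, not anything from a real research firm's instructions; the key idea is simply that every record in the frame stays eligible for selection, rather than drawing from a convenient subset.

```python
import random

def draw_sample(frame, n, seed=None):
    """Draw a simple random sample of n records from the FULL sampling frame.

    Sampling from the whole frame -- not a convenient subset such as
    "customers who ordered last quarter" -- keeps every member of the
    target population eligible for selection.
    """
    if n > len(frame):
        raise ValueError("Sample size exceeds the size of the sampling frame")
    rng = random.Random(seed)          # seeded so the draw can be reproduced/audited
    return rng.sample(frame, n)        # selection without replacement

# Hypothetical frame of 1,000 customer IDs; draw 5 for illustration
frame = [f"CUST-{i:04d}" for i in range(1000)]
sample = draw_sample(frame, 5, seed=42)
print(sample)
```

Passing a fixed seed makes the draw reproducible, which matters if the research company later needs to verify that the sample really came from the whole frame.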
8. Being Penny-Wise and Pound-Foolish
Attempts to reduce out-of-pocket expenditures by eliminating personalization techniques, skipping incentives altogether, or offering an incentive only after a reply is received may yield savings, but they generally cause big drops in response rates. Maximizing response should be a priority throughout the survey process; cutting corners here is usually false economy.
9. Rejecting the Results When they Don’t Match Your Preconceived Notions
When research uncovers facts contrary to what clients think or want to hear, their first reaction may be to reject the research. Before tossing the report out the window, it is prudent to make sure there was no mistake in research design or execution. After confirming that the data is sound, it's important to recognize that the research really does reflect something you ought to weigh alongside what you already know. One project director was told by a client that they were just "going to bury" their survey results, even though the data showed there wasn't as much of a market for, or interest in, a new product as they had originally believed. That decision could prove costly.
10. Not Using the Results
For whatever reason—inconclusive results, “negative” results, personnel turnover—a disheartening proportion of research reports end up sitting on the shelf, collecting dust. Clearly, there is no surer way to squander your research investment than to consign it to oblivion.
To help ensure this doesn’t happen to your project, confirm why you’re conducting the project in the first place and share that knowledge with your research partner; involve those who are most likely to use the results; and keep the survey focused. Finally, don’t overlook the details such as sample selection, because it’s often the attention to minor details that can make or break a survey.