One of the notable results of the modern world’s thirst for data-based decision-making is the repeated request for personal information to assemble those data. After buying a car, visiting a website, or even purchasing fast food, we as customers are commonly given a set of questions to answer about the experience.
There are wonderful effects of these surveys. Organizations supplying goods and services to the public can be alerted to potential improvements they can launch. Customers are given a voice to express their reactions. Organizational decisions can be guided by real reactions of clients instead of hunches and “gut-feelings.”
There are also some harmful effects of this world. We are barraged by requests for measurement; at the volumes now common, merely deciding whether to respond becomes a burden. In addition, everyone appears to think they know how to design a survey and create a structured measurement scheme. (Indeed, since we have all asked questions before, what’s really so complicated about constructing a questionnaire?) The result is that we are sometimes subjected to dreadful survey requests, containing questions that are ambiguous, sometimes even meaningless. Finally, our limited discretionary time is consumed by these requests.
For these reasons, the participation rates in surveys are declining throughout the world. As the rates decline, the threat of bias in statistics from survey data increases. When the nonrespondents to a survey have different characteristics on the survey variables than do the respondents, then descriptive statistics from the survey respondents do not match what would have been obtained from the full population. This problem cannot be solved by having more people in the sample; the problem needs to be solved with higher response rates.
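To make the point that larger samples cannot cure nonresponse bias, here is a minimal simulation sketch. The satisfaction rate and the differential response probabilities are invented purely for illustration; they are not drawn from any actual survey.

```python
import random

random.seed(42)

# Hypothetical population: 1 = satisfied, 0 = dissatisfied.
# Assume (for illustration only) that 60% of the population is satisfied.
POP_SATISFACTION = 0.60

# Assume dissatisfied people are less likely to answer the survey:
# satisfied people respond 50% of the time, dissatisfied only 20%.
RESPONSE_PROB = {1: 0.50, 0: 0.20}


def survey_estimate(sample_size: int) -> float:
    """Draw a sample, apply differential nonresponse, and return the
    estimated satisfaction rate among those who actually respond."""
    responses = []
    for _ in range(sample_size):
        satisfied = 1 if random.random() < POP_SATISFACTION else 0
        if random.random() < RESPONSE_PROB[satisfied]:
            responses.append(satisfied)
    return sum(responses) / len(responses)


for n in (500, 5_000, 50_000):
    est = survey_estimate(n)
    print(f"sample size {n:>6}: estimated satisfaction {est:.3f} "
          f"(true value {POP_SATISFACTION:.2f})")
```

Under these assumed response rates, the respondent-based estimate hovers near 0.79 no matter how large the sample grows, while the true population value is 0.60. The gap comes from who answers, not from how many are asked.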
Coming to Georgetown and observing our own survey culture was an eye-opener to me. It appears that everyone is free to mount a survey of any population in the University if they so choose. (At a past hackathon, undergraduate students mocked the number of requests and pieces of information they receive daily from the University.)
Further, a plethora of surveys appear to attempt a census of the entire community of interest rather than a scientific sample of the population. This means we are not taking advantage of the economy and the statistical inferential properties of carefully designed samples.
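A rough sketch of the familiar margin-of-error calculation shows the economy a well-designed sample can offer. The 95% confidence level, the conservative proportion of 0.5, and the omission of the finite population correction are simplifying assumptions for illustration.

```python
import math


def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion estimated from a
    simple random sample of size n (ignoring the finite population
    correction, which would only shrink the margin further)."""
    return z * math.sqrt(p * (1 - p) / n)


for n in (100, 400, 1_000):
    print(f"n = {n:>5}: margin of error \u2248 \u00b1{margin_of_error(n):.1%}")
```

Roughly 400 well-chosen respondents already pin a proportion down to about plus or minus 5 percentage points, far fewer questionnaires than a census of the whole community would require, provided the response rate holds up.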
Finally, there seems to be no consistency in how much follow-up is devoted to requests for survey participation. As a result, many surveys have very low response rates, sometimes reaching only a small minority of the survey sample.
These are common concerns in large organizations. For that reason, many organizations have created processes to evaluate proposed surveys of their members. (Federal government agencies, for example, have “respondent burden budgets” that limit the number of hours of the US public’s time they can claim for survey participation.)
Some universities have created an oversight group of technical experts in surveys, representatives of the survey populations, and administrators to evaluate survey proposals. The group has authority to grant or deny access to the faculty, student, or staff lists used to draw the samples. It attempts to coordinate and combine surveys whenever possible. It ensures that analyses from each survey are available for wider university work, and it archives the data for later uses consistent with the goals of the survey.
I believe we might better serve our institutional needs for information with such a group. In the coming days, we’ll launch discussions with key stakeholders to mount such an effort at Georgetown.
A question. Do you know how many people respond to your blogs and what their demographics are? Just curious, given the topic of the current blog.
Great discussion. Great points. Lots of truth here. Know it’s just a sample of one but RIGHT ON!