Census Data Create Privacy Questions
I am indebted to my former students who provided ideas for this particular page.
Let's talk about how to write a question. I will also provide a checklist that may help you in the final stages of writing questions. You need a structure to write a questionnaire or an interview guide. For a questionnaire, you need the following:
- Identification questions at the beginning for age, job title, or other demographic data
- Easy-to-answer questions at the beginning to make the respondent feel good
- Questions that pique the reader's curiosity to answer
- Organized format with ballot boxes of the same width and height for ease of answering
- A title that uses the word Survey, not Questionnaire or Opinionnaire
- At least five well-written questions.
When you write I.D. or identification questions, you are telling the respondent that you need some data about that person so you can properly write your results. Age bracket, as defined, may be an identification question. If you are studying cellular telephone etiquette, for example, you may want to ask first whether the respondent owns a cellular telephone. If you are surveying students, what is the person's class status (Freshman-Sophomore-Junior-Senior-Graduate)? If you are surveying employees, what department does the person work in? How long (in years or months) has the person worked in that particular company? You need to identify your respondents to make sure you have the right sample for the research you are undertaking.
Be careful about assuming you don't need demographic data. Let's say you want to survey managers about their attitudes toward laying off people after a merger. You know all the managers have the title, "Department Manager." What you don't know is the service record of each manager. You don't know their experience in laying off people. You don't know whether they served at other branches of the company or at other companies before coming to the corporate offices. All this information, gathered through demographic questions, will enhance the classification of data in your report. You don't want to be reduced to referring to Manager A and Manager B. People have to trust your data because you have made a sincere effort to gather demographic information as backup for your report writing.
At least two of my students have asked for the criteria mentioned in class to be placed on the Web. David Robinson, before his untimely death, developed these criteria in one of his business communication textbooks. The criteria follow:
- Questions should be asked in some kind of logical order.
- Interesting questions (ones that pique the reader's curiosity) should be asked advantageously.
- Difficult questions should be given proper emphasis. (Difficult questions usually contain more than one idea. One idea to a question, please.)
- Questions should be easy to understand.
- Questions should only ask for needed information.
- Questions should request only information the respondent will be able to provide.
- Questions should ask for information the respondent will be willing to provide.
- Questions should not lead respondents to provide certain answers.
- Questions should provide for the various possible responses. For example, you should use No Opinion or Don't Know. Otherwise, you are writing a forced choice question.
- Questions should be tested for both content and format.
When you start to prepare a survey, either a questionnaire or an interview guide, you need to remember three important words: focus, brevity, and simplicity. These words are not my original ones; they are gleaned from the thinking of Pamela L. Alreck and Robert B. Settle, authors of The Survey Research Handbook: Guidelines and Strategies for Conducting a Survey, 2d ed. These authors believe your questions first need a focus. Are your questions written in such a way that the respondent understands exactly what you are asking? The authors offer the following examples of well-focused questions:
- Which of these brands are you most likely to buy?
- What time do you ordinarily leave home for work?
- Which candidate will you vote for on election day? (p. 88)
Second, you need brevity. Your questions should not run on forever. One idea to a question, please. You want to avoid double questions, where too much is asked in the stem of the question. As the authors reiterate, the longer your question, the more difficult it is for the respondent to formulate an answer. The authors offer these examples of well-written, briefer questions:
- What's the age and sex of each individual?
- Please list the year and make of each car you own.
- How many months ago was your last physical exam? (p. 89)
Third, you need simplicity. You need to write simple sentences, not complex or compound ones that tie up the reader's thinking. The authors insist that, for clarity, you make sure any reader interprets the question in the same way (p. 89). Clearness is established by precise wording. Let's look at the clearness of the examples Alreck and Settle offer:
- How much influence do you, yourself, have on which charities your church contributes to?
- Do you usually take aspirin as soon as you feel some discomfort, or only when you feel actual pain? (p. 90)
Should you use open-ended or closed-end questions? That depends on what data you want to gather. I would consider a combination where appropriate. Remember, in closed-end questions, always to include "Don't Know" or "No Opinion" unless you want a forced choice.
Question preparers are prone to ask too many questions in the same question. Think of this admonition: One idea to a question, please. Let's take a glaring example:
- Do you think the solution is to go on strike or work overtime?
Think about: After reading the stem of that question, you have to answer whether you agree or you don't agree. Five choices on a rating scale are given to you.
How much better the phrasing of the question would have been if the preparer had written two questions:
- If you looked at solutions to our current work dilemma, would you advise going on strike?
[ ] Agree with idea. [ ] Disagree with idea. [ ] No opinion.
- If you looked at solutions to our current work dilemma, would you advise more overtime?
[ ] Agree with idea. [ ] Disagree with idea. [ ] No opinion.
It is important to avoid confusing the respondent in a survey. Too many ideas to think about in a survey question may distort your results. Also, it would be helpful to attach a five-point rating scale.
One of the greatest hazards you face as a question writer is the use of leading questions. If you are questioning college students, for example, you would not say: "Do you want a tuition increase in the next six months?" You are immediately leading the respondent to answer. The question implies that the college student respondent thinks a tuition increase is inevitable. A much better version of the question would include:
How would you react to a possible tuition increase on this campus?
_____ Support the increase. _____ Oppose the increase. _____ Don't know.
_____ No opinion. _____ Write letters to the legislature. _____ Other (please state).
Here's an actual question from a previous report in an interview guide:
- Do you feel that the work in the office has been piling up since we lost our last employee?
Look carefully at the construction of the previous question. That phrase, "do you feel," already creates a leading question. Most people reply (especially if they are overburdened) that the work will continue to pile up. You have lost the value of your question. The words, "piling up," also create a negative impression. Wouldn't the question be better worded by saying:
- What situations have you noticed since we lost our last employee?
The word situations invites a more neutral response. You still have your open-ended question. You still have to classify your data after the answers are received. You have not led the interviewee to answer a certain way. You have left the issue wide open; still, you expect answers about the work piling up. You could also have provided a closed-end question for this particular item.
After you have completed your interview guide, please consider these last-minute questions:
- Have you asked at least five well-written questions?
- Have you simply labeled the title as Interview Guide?
- Have you provided many open-ended questions so the interviewee will be allowed to talk?
- Have you provided some demographic questions (job title, years in the company, and so forth) to identify your interviewees?
- Have you vowed to take sufficient notes on the interview answers rather than simple phrases that mean little?
- Have you vowed to keep dates and pertinent data about each interviewee?
- Have you checked each question for any leading qualities?
Good luck in writing your questions. You will do fine if you remember that questions have to be composed carefully. You cannot afford to write any old questions.
You have been given the daunting task of writing a cover letter to accompany the questionnaire. You need this letter because you want the responses returned through the mail or in some other suitable manner. What do you do? First, consider how the letter is composed:
- Paragraph One: Talk about why the respondent is important to the data. Sell the person on responding.
- Paragraph Two: Explain that you are a college student. You need the data for such-and-such a report. Acknowledge that you know it is an imposition to ask the person to respond.
- Paragraph Three: Give an exact date when you want the questionnaire returned. Ask if the respondent needs any other help from you.
Don't forget that the last paragraph alerts you to whether the respondents are returning the instruments on time. You want to be able to write a follow-up letter immediately or make telephone calls if the date is overlooked. People tend to leave questionnaires lying around.
When you review your own questions before submitting them to respondents, you need to consider these criteria as well:
- Always be explicit when you are writing the question stem.
Suppose you ask: What is the reason in the process? Think about: The question writer did not say carefully what "reason" was being referred to. The reader is left with more questions than answers about how to interpret the question. You could say: What is the main reason for the filing process we use? Even then, more terms would have to be defined.
- Always define your terms.
Suppose you ask: What activity do you want to participate in? Think about: The reader is left wondering what activity is being discussed. You need to write: What work activity do you enjoy participating in? The question is becoming clearer.
- Be careful about the way you expect people to answer questions.
Suppose you ask: How many times a week are good enough for people in your office to participate in teamwork activities? Think about: "Good enough" leaves the question reader wondering what the questioner meant. Good enough in what circumstances? Why not ask the question in a straightforward manner? How many times a week should our people participate in teamwork activities? Then, give the choices, including other (please state).
- Be careful about asking for multiple answers. These answers are more difficult to tabulate.
Suppose you ask: Why do you think people resign at our company? Check all that apply. You give five choices. Now, you have to tabulate all five responses. Could you have asked the question differently by saying: Check the one that most applies? You can now tabulate only one response. That may not be your intention, though. Just remember the tabulation difficulties you create by providing more choices.
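The tabulation difference described above can be sketched in a few lines of Python. The question wording, answer choices, and response data below are all hypothetical, invented only to show why check-all-that-apply answers produce more tally marks than respondents:

```python
from collections import Counter

# Hypothetical data: responses to "Why do you think people resign at our
# company? Check all that apply."  Each respondent may check several boxes.
multi_responses = [
    ["Low pay", "Long hours"],
    ["Low pay"],
    ["Low pay", "Long hours", "No advancement"],
]

# Hypothetical responses to the single-choice version
# ("Check the one that most applies").
single_responses = ["Low pay", "Long hours", "Low pay"]

# Check-all-that-apply: every checked box gets a tally mark, so the counts
# total more than the number of respondents.
multi_tally = Counter(choice for resp in multi_responses for choice in resp)

# Single choice: exactly one tally mark per respondent.
single_tally = Counter(single_responses)

print(multi_tally["Low pay"])      # 3 checks from 3 respondents
print(sum(multi_tally.values()))   # 6 tally marks for only 3 respondents
print(sum(single_tally.values()))  # 3 tally marks for 3 respondents
```

Notice that with check-all-that-apply you can no longer say "60 percent chose Low pay" without being careful about whether your percentage base is respondents or tally marks.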
On television, one of the public broadcasting channels featured a program called International Dispatch. In the program, the Governor General of Gallup, Inc., working in Hong Kong before the turnover on July 1, 1997, reported he was asked to construct a question about the direct election of representatives to the Hong Kong parliament. The polling organization was trying to find out if the people of Hong Kong favored direct election of their representatives. When the question was finally constructed, it consisted of 150-250 words--too much for the average Hong Kong resident to understand.
The direct election of representatives was voted down, partly because the wording was too confusing to permit a simple, straightforward response. The people of Hong Kong could not understand the question, and the referendum failed. The people revising the question did not follow the advice of Gallup, Inc. about planning for simple wording. When you write questions, always keep in mind the admonition that the question should be easy to understand.
BONUS QUESTION FOR WEEK 9
Suppose you encounter the following question when you are being polled during an interview: "Did you find the Business Club Association helpful?" What major criterion is being violated with the wording of the question? How would you improve the question?
Whenever you construct a question, you have to think about what data you want to obtain. Generally, it is better to construct a five-point rating scale than what one business communication student produced:
[ ] Yes. [ ] Maybe. [ ] No.
As you look at the last part of the question, notice how difficult it will be for the respondent to distinguish between "maybe" and "no." The question would read much better with "no opinion" or "don't know" substituted for "maybe."
Granted, what we have seen is not a true rating scale. It does begin, however, the analysis of rating scales. You want your data to mean something. Suppose you have decided to research why your restaurant customers are not coming as frequently to your restaurant. You design your survey instrument with "Agree" and "Disagree." Don't you want to know the degrees of agreement or disagreement? How much does a patron disagree with your service? How much does a patron think your waiters and waitresses are friendly? It takes just a little more time to construct a five-point scale instead of relying on a two- or three-point scale that may not give you the data you want.
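The value of those degrees can be sketched in Python. The statement, scale labels, and ratings below are hypothetical, but they show how a five-point scale yields an intensity measure that a two-point Agree/Disagree instrument throws away:

```python
# Hypothetical ratings of the statement "Our waiters are friendly,"
# collected on a five-point scale (labels and data are illustrative only).
SCALE = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neutral": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

ratings = ["Agree", "Strongly agree", "Neutral", "Agree", "Disagree"]

# The five-point scale supports a mean score: a degree of agreement.
scores = [SCALE[r] for r in ratings]
mean_score = sum(scores) / len(scores)
print(mean_score)  # 3.6 -- a degree of feeling, not a bare yes/no

# A two-point instrument collapses the same opinions into coarse buckets,
# forcing even the "Neutral" respondent into one camp.
binary = ["Agree" if SCALE[r] >= 4 else "Disagree" for r in ratings]
print(binary.count("Agree"), binary.count("Disagree"))  # 3 2 -- intensity lost
```

The two-point version tells you only that three patrons leaned one way; the five-point version tells you how strongly they leaned.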
In marketing research, for example, authors refer to cues. What kind of cues do you take from the data you are presented with? You can skew your answers to either side of the rating scale by not paying attention to balance. You have to realize the respondent may have trouble differentiating between words, such as tolerable and satisfactory. Choose words in your rating scales that have enough differentiation to make the point clear to the respondent. For example, suppose you are confronted with this survey question:
How would you rate the food services provided for on-campus students?
[ ] Excellent. [ ] Satisfactory. [ ] Tolerable. [ ] Poor.
Think about: You could say "excellent," "above average," "average or satisfactory," "below average," and "poor." Now you have made a differentiation between the degrees of rating. You are not leaving the respondent to wonder about the difference between satisfactory and tolerable. The distinction was too fine in the first writing of the question.
Look at how effectively the following survey question took advantage of balance. The terms were defined in parentheses to help the respondent complete the instrument. Doubts were removed from the respondent's mind because of the care in constructing the question. Here's the question:
1. How often do you shop at our supermarket Store #561? Please check the one that most applies.
O Shop-aholic (every day)
O Always (5-6 times per week)
O Frequently (3-4 times per week)
O Often (1-2 times per week)
O As needed (once a week)
O Convenience only (I shop regularly elsewhere).
Think about: Notice the scale is not top- or bottom-heavy. All the degrees of "often" are properly delineated. Ferber, in the Handbook of Marketing Research, makes the point about balanced cues: "A scale is balanced when it has an equal number of cues on either side of the indifferent cue." With a little paraphrasing, I have adapted a Ferber example where the scale is not balanced and is heavily weighted toward the favorable end:
1. What is your reaction to bank ATMs? Please check the one that most applies.
O Enthusiastic
O Extremely favorable
O Very favorable
O Favorable
O Fair
O Poor.
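Ferber's balance test is mechanical enough to express in code. The sketch below is a hypothetical check, not anything from Ferber's book: it simply counts cues on each side of the cue you designate as the indifferent one.

```python
def is_balanced(cues, neutral):
    """Ferber's rule of thumb: a scale is balanced when it has an equal
    number of cues on either side of the indifferent (neutral) cue."""
    i = cues.index(neutral)
    return i == len(cues) - 1 - i  # cues to the left == cues to the right

# A balanced five-point scale built around "Average" as the neutral cue.
balanced = ["Excellent", "Above average", "Average", "Below average", "Poor"]

# The adapted Ferber example: four favorable cues, then only "Fair" and
# "Poor."  Treating "Fair" as the nearest thing to a neutral cue exposes
# how lopsided the scale is.
lopsided = ["Enthusiastic", "Extremely favorable", "Very favorable",
            "Favorable", "Fair", "Poor"]

print(is_balanced(balanced, "Average"))  # True
print(is_balanced(lopsided, "Fair"))     # False
```

A check like this is no substitute for choosing cue words with enough differentiation, but it catches the grossest kind of skew before the instrument goes out.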
The following paragraphs are adapted from ideas contained in Pamela Alreck and Robert Settle's book, The Survey Research Handbook, 2d ed. The question examples are adapted from their examples for "Inapplicable," "Loaded," and "Likert Scale."
At times you may be called on to write a question that might be considered inapplicable. That means the person's experience in answering the question is not taken into account. Such a poor example occurs with the following wording:
1. How long does it take you to find a parking place after you arrive at the campus?
Note: You have to take account of the person's experience in trying to answer the previous question. Do you come to the campus by bicycle, bus, or motorcycle? Certain of these modes of transportation may present no problem in responding to the previous question. The person's varied experience in transportation needs may not have been considered. You have an inapplicable question.
Improved Version:
1. If you drive to campus, how long does it take you to find a parking place after you arrive at the campus?
Now, we have narrowed the choices. Now, we are interested in the drivers. The transportation experience is more applicable. The data obtained will become less biased.
With loaded questions we influence the respondent even more than with leading questions. A leading question directs the respondent toward a certain response; a loaded question pushes the respondent harder still.
Let's take a poor choice of wording to look at a loaded question:
1. Do you advocate removing brush from the hillside to save homes and human lives during a wildfire?
Note: You have already planted in the respondent's mind a desirable goal. You need to be objective in your wording.
Let's try an improved version of that same question:
1. Does hillside brush clearing require more attention from the homeowner?
We are getting at a similar idea by improving the wording. We are dealing more directly with the issue. We are trying to remove the bias and strengthen the reliability of the question.
Students of question writing often confuse the Likert scale with any set of five responses. The scale was named for the management authority, Rensis Likert. It deals with agreement. You usually deal with a five-point scale, such as the following:
- Strongly agree
- Agree
- Neutral
- Disagree
- Strongly disagree
Taking the previous scale, we can react to the following items by marking our agreement or disagreement with them. Suppose we are designing a survey to reflect the costs of going to college. We might list the following items for agreement or disagreement:
11. College tuition should never be raised in the state.
12. Students should pay extra fees for the student union on their campus.
13. Students should be charged for a laptop computer once they start their college education.
14. Tuition costs should not be balanced on the backs of the students.
15. Students should be willing to pay additional fees for parking on a campus.
16. Scholarships should be provided to all needy college students.
17. Financial aid on a campus should only be provided to those who qualify.
18. College students should be expected to pay double their current tuition for public universities in the next five years.
19. College tuition should be raised by the Board of Trustees at a public university whenever needed.
Do you begin to see how a Likert scale is constructed? We are measuring agreement and disagreement with statements, not how often an action is taken. The Likert scale is best employed across several items rather than one or two. Statements should be composed that are typical of the global issue. We must guard against respondents simply defaulting to the neutral value. Half the items should be inclined toward the pro side of the issue and half toward the negative side.
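Scoring such a battery can be sketched in Python. Everything below is a hypothetical illustration built from two of the tuition items above; the key mechanical point is that items leaning the opposite way must be reverse-scored so a high total consistently means one attitude:

```python
LIKERT = {
    "Strongly agree": 5,
    "Agree": 4,
    "Neutral": 3,
    "Disagree": 2,
    "Strongly disagree": 1,
}

# Two illustrative items from the tuition battery.  The second leans the
# opposite way, so it is reverse-scored; after reversal, a high total
# consistently means "opposes higher student costs."
items = [
    {"text": "College tuition should never be raised in the state.",
     "reverse": False},
    {"text": "Students should be willing to pay additional fees for "
             "parking on a campus.",
     "reverse": True},
]

def item_score(answer, reverse):
    raw = LIKERT[answer]
    return 6 - raw if reverse else raw  # 6 - raw flips a 1-5 scale

# One hypothetical respondent's answers, in item order.
answers = ["Agree", "Strongly agree"]
total = sum(item_score(a, item["reverse"])
            for a, item in zip(answers, items))
print(total)  # 4 + (6 - 5) = 5 out of a possible 10
```

This respondent mildly agrees that tuition should never be raised, yet strongly agrees students should pay parking fees; reverse-scoring keeps those two answers from canceling incorrectly in the total.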
Dr. June Reinisch, Director of External Affairs, Kinsey Institute for Research in Sex, Gender and Reproduction, Indiana University, submitted an eight-year-old study to the Journal of the American Medical Association. In the article the doctor concluded that "out of 599 students at an unnamed Midwest university, 59 percent said oral sex did not constitute having sex."
On the Good Morning America talk show, Dr. Reinisch made some fascinating comments about the construction of a question. The questions, she believed, should have been clear, descriptive questions with explicit language. Then, the respondents could have answered even more pointedly.
Dr. George Lundberg, the journal's editor for 17 years, was fired partly for publishing an old study in the middle of an impeachment hearing. The AMA does not want to be accused of having a political agenda instead of reporting medical breakthroughs. Other published articles also contributed to the firing.
The Internet continues to take over our lives, including the way we design surveys. You may have heard of the Harris Poll. It is as famous as the Gallup Poll for surveying public opinion, especially during election years. Harris Black International, Ltd., in Rochester, New York, has proposed surveying Internet users for the next political election and presidential race. Do you see any problems with that approach?
First, can we trust Web polls? The Wall Street Journal reports that "most researchers condemn Web polls as flawed" ("Is Web Political Poll Reliable? Yes? No? Maybe?", April 13, 1999, p. B1). You have to take account of how many people own personal computers, modems, and Internet Service Provider accounts. Most polls to date rely on telephone surveys of people chosen more randomly. A Web sample is not randomly chosen; it is a self-selected group of people. It reminds me of a recent story from one of my students, who surveyed one of her core business classes about students upset at not securing a particular class during a particular semester. I advised the student that her class survey did not provide good randomness. You must simply say in your report that the students were "self-selected." When individuals volunteer for Internet polling, they are considered a "self-selected group."
Second, let's consider some other issues. How well educated are the individuals who would do Internet polling? Are these people more educated about technology? How will they respond to questions about technology as we enter the Millennium?
Give Harris Black credit for amassing a database of three million users for its Internet polling. Participants are given passwords for the Internet polling; that ensures only one set of answers per person. Does Harris Black have a track record? In previous Internet polls for Senate and gubernatorial elections, the Internet poll predicted 21 out of 22 races.
The Harris Black effort is an ambitious one. Do Democrats, for example, own most of the personal computers reached by the Internet poll? Now, let's get tough about polls. America OnLine, Inc. (AOL) asked the following question in a recent poll: Should President Clinton resign? A similar question had already been asked by Gallup, and a majority of polled Americans did not want the President to resign. Yet, when AOL polled its Web users without accounting for technology ownership, 52 percent of those polled wanted the President to resign. These disparities in numbers suggest the reliability of polls needs to be examined carefully.
If Harris Black's estimate of Internet users at 45 percent is correct, the reliance on Internet polls may continue. Internet polls have their problems, as witnessed by a close Georgia governor's race where the black vote was not completely accounted for in the Internet polling. Harris Black defended its position on the difference in polling by stating: "Unexpected surges in turnout by particular groups can confound pollsters" (The Wall Street Journal, April 13, 1999, p. B4).
Third, another question confronts us with Internet polling. Are the people who cruise the Net more educated and wealthier than others without the surfing ability? We will have to await the outcome of future elections to determine whether Internet polling will remain.
The Census 2000 questionnaires created major headaches for the people who had to provide the data. People became angry with what they termed "invasion-of-privacy questions." Such questions included:
- Do you have COMPLETE plumbing facilities in this house, apartment, or mobile home; that is, 1) hot and cold piped water, 2) a flush toilet, and 3) a bathtub or shower?
- Do you have COMPLETE kitchen facilities in this house, apartment, or mobile home; that is, 1) a sink with piped water, 2) a range or stove, and 3) a refrigerator?
Don't you suspect the word, COMPLETE, angered respondents? It should be noted that only 1 in 6 households received this long questionnaire. We may call the long form intrusive, but certain subjects have appeared on the form for many years:
- Race question. First used in 1790.
- Type of work question. First used in 1820.
- Disability question. First used in 1830.
- Occupation question. First used in 1850.
- Home ownership question. First used in 1890.
- Home value question. First used in 1930.
- Indoor plumbing question. First used in 1940.
- Kitchen in home question. First used in 1960.
- Farm residence question. First used in 1970.
- Telephone question. First used in 1980. (source: "200 Years and Counting: Census Nosiness Isn't New," USA Today, 6 April 2000, p. 16A)
From that previous analysis, it would seem many of the questions keep appearing every 10 years. We are, however, dealing with a different population. People are quite concerned about their privacy rights, including on the Internet. They don't trust the government the way they once did. Too many media reports have suggested the Government in all its phases may be using information about us in ways we never intended. One writer has suggested we need only one question on the Census: How many people live in your household? With that question, an official count of the population could be taken without all the extraneous questions.

Both Majority Leader Trent Lott and Presidential Candidate George W. Bush have suggested people leave blank those questions that offend them. Still, they urge that the Census forms be sent in. Approximately $180 billion is allocated based on these Census questionnaires. We all need to be counted. The Government had targeted a 61 percent return, but the Census questionnaires are not being returned at those numbers. In 1970, 78 percent of the households returned the forms. The numbers continued to drop until the last returns, for 1990, showed 65 percent. A questionnaire with 53 questions on the long form does not entirely follow the principle of being easy to fill out. No citizen should be saddled with questions that are merely "nice to know."
William Safire, the maven of language, has asked some pointed questions about the wording. He is concerned about the problem with parallelism in the question about occupation:
"patient care, directing hiring policies, supervising order clerks, repairing automobiles, reconciling financial records."
As Safire sharply notes, "patient care" is not in tune with the parallelism of the other examples. Safire also takes issue with the question about relationships. He notes the list includes:
Husband/wife, Natural-born son/ward. (source: William Safire, "On Language: Census 2000" The New York Times Magazine, 2 April 2000, p. 24)
With advances in genetic engineering, one should ask what "natural" means. Test-tube babies in some people's eyes may qualify as "natural." The Census Bureau has not considered the march of technology that affects the way we phrase all questions on demographics. A writer from the Los Angeles Times phrased the issue correctly when he noted the need for pertinent information: ". . . we need to make decisions based on sound information" (Nick Anderson, "Q. What Causes Anger? A. Long Form of Census," Los Angeles Times, 1 April 2000, A (Main), p. A10).
Don't forget to check the home page for any additional help.
copyright(c)G. Jay Christensen, All Rights Reserved
Last updated Wednesday, October 22, 2003