Abstract
This chapter discusses the challenges of studying sensitive attitudes and topics in fragility, conflict, and violence settings and summarizes the most common approaches to overcoming them. The first section reviews the challenges involved in studying sensitive attitudes and the factors that could introduce bias and affect the validity of such research. The second section discusses four techniques (endorsement experiment, list experiment, randomized response, and behavioral approaches) that have been developed by researchers to overcome these challenges. The chapter presents an overview of studies that have utilized these techniques and discusses their advantages and limitations.
1 Motivation
Fragility, conflict, and violence (FCV) drastically undermine the effectiveness and efficiency of providing public goods and services to the poor. FCV is, moreover, a difficult field to study because of the sensitive and complex nature of the events involved. Understanding how conflict and violence affect development programs and people’s livelihoods in fragile states requires assessing people’s perceptions of the state, insurgent groups, international actors, and the actions these actors take. Expressing views about these actors and their activities, however, is risky for those living in fragile states. People may fear that expressing their views could cost them potential benefits and expose them to threats from state and non-state actors, stigmatization, and social ostracism. As a result, questions on issues that are perceived to be sensitive can introduce sensitivity bias, that is, respondents may either avoid answering sensitive questions altogether or provide untruthful responses.
Sensitivity biases generally originate from one of four sources: self-image, taboo (intrusive topics), risk of disclosure, and social desirability.Footnote 1 Self-image bias refers to untruthful replies rooted in misperceptions that individuals hold about themselves. According to self-affirmation theory in psychology, individuals tend to maintain a perception of global integrity and moral adequacy and will reinterpret their own experience until their self-image is restored.Footnote 2 Individuals may therefore give untruthful answers to questions that touch on their integrity and morality because of this distorted self-image rather than out of an intent to deceive others. The second source of sensitivity bias is taboo: intrusive topics that respondents do not feel comfortable discussing with others. In such cases, non-response is more likely than untruthful answers, as individuals try to avoid discussing the topic.Footnote 3 Risk of disclosure is the third source of sensitivity bias. Here, respondents are reluctant to reply at all, or to reply truthfully, fearing that their response could be disclosed to the government, rebel groups, criminal groups, or local power holders.Footnote 4 Risk of disclosure, in the form of security threats by state and non-state actors or social sanctions by the community, is particularly relevant for research in an FCV context, where expressing views on sensitive topics can be very costly for individuals.Footnote 5
Finally, social scientists have long identified social desirability, the fourth source of bias, as a common threat to the validity of research findings.Footnote 6 Social desirability refers to ‘the tendency on behalf of the subjects to deny socially undesirable traits and to claim socially desirable ones, and the tendency to say things which place the speaker in a favorable light.’Footnote 7 Social desirability usually reflects a respondent’s concern with maintaining a favorable image before a reference group. The reference group could be peers, bystanders, family members or relatives present at the interview, or even broader groups such as one’s community or other communities, institutions, or individuals that consume the research findings.Footnote 8 One important reference group whose presence can introduce social desirability bias is researchers and surveyors themselves; in this case, social desirability is sometimes referred to as the ‘experimenter demand effect.’ A study of anti-American sentiment in Pakistan found that social desirability bias (social image) can lead to either underestimation or overestimation of sensitive attitudes, depending on whether those with extreme views conform to, and express views consistent with, moderate respondents, or vice versa.Footnote 9
Experimenter demand effects highlight that even if a survey or experiment is conducted in a private setting where peer pressure is ruled out, the presence of a researcher alone can introduce bias and prevent respondents from expressing honest views and attitudes.Footnote 10 One randomized experiment demonstrated that participants who had not voted in an election were 20 percentage points less likely to answer the door to participate in a survey when a flyer had informed them about the survey in advance, relative to those who had not received a flyer.Footnote 11 The experiment shows the strength of the stigma and shame that respondents may feel upon revealing to a surveyor, a stranger with whom they may never interact again, that they did not vote.Footnote 12
Social desirability bias may be even stronger in fragile contexts where social stigma could be costlier for individuals and where the association of surveys with aid and development projects could disincentivize truthful responses.
Regardless of the type, sensitivity bias can introduce two problems in surveys: item non-response and untruthful responses conditional on a response. In the case of item non-response, respondents take part in the survey but eschew answering sensitive questions, which is recorded as ‘Don’t Know’ or ‘Refused to Answer.’ Item non-response can lead to an underestimation of sensitive attitudes and behaviors and bias estimates of treatment effects when sensitivity is correlated with treatment status.Footnote 13 Untruthful responses conditional on a response arise where respondents do not avoid answering questions but provide deceitful replies. Both of these outcomes undermine research findings. Considering the importance of studying sensitive attitudes, researchers have invested in developing approaches to eliminate or reduce sensitivity biases. Below, we discuss these approaches and highlight whether they address item non-response, untruthful responses conditional on a response, or both.
2 Approaches
Researchers in psychology, economics, and political science have developed a range of approaches to studying sensitive attitudes, which can be very useful for conducting research and data collection in fragile contexts. Endorsement experiments, list experiments, and randomized response are the most commonly used techniques developed to mitigate sensitivity bias. Table 1 summarizes the three techniques, as well as direct questioning, with respect to their ability to mitigate different types of sensitivity biases.Footnote 14 The three techniques can clearly improve on direct questioning by reducing non-response and bias due to risk of disclosure and social desirability. However, they are costly in terms of sample size (because they rest on statistical inference about the difference between two groups rather than the mean of a single group), require extensive pre-testing, and cannot address bias due to the intrusiveness of the topic (taboo) or self-image. In this section, we review the three approaches, their advantages, and their limitations.Footnote 15 At the end of the section, we provide a brief overview of behavioral approaches to addressing sensitivity biases.
2.1 Endorsement Experiments
Endorsement experiments aim to mitigate non-response and biases due to social desirability and risk of disclosure by obfuscating the object of study. They were first used to study race relations in the US but have since been applied to studying support for states, international actors, and militant groups.Footnote 16
Since questions about support for the state or insurgent groups in fragile states can pose safety risks for enumerators as well as respondents, direct questions about the state or insurgents may not elicit honest answers and typically face high non-response rates. Endorsement experiments overcome both issues by obfuscating the object of evaluation. When applied to measuring support for particular political actors, endorsement experiments seek respondents’ views about particular policies instead of asking respondents to express views about particular groups or individuals. Researchers solicit views of actors by dividing respondents at random into treatment and control groups. In the control group, respondents are simply asked whether or not they support a particular policy. In the treatment group, respondents are asked the same questions but are reminded that the policy is endorsed by the groups or individuals who are the subject of the study. This approach builds on extensive research in social psychology, which shows that individuals are more likely to favor policies endorsed by individuals from groups whom they like.Footnote 17
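The underlying comparison is simple: support for the policy among respondents who heard the endorsement versus those who did not. The following minimal sketch in R illustrates it with simulated data (the sample size, Likert distribution, and effect size are illustrative assumptions, not values from any study); published applications instead fit latent-variable item response models, such as those implemented in the R package for endorsement experiments cited later in this subsection, to pool information across several policy questions.

```r
# Minimal illustration of an endorsement experiment (simulated data).
# 'support' is a 1-5 Likert rating of a policy; 'endorsed' indicates
# whether the respondent heard the (hypothetical) group endorsement.
set.seed(42)
n <- 1000
endorsed <- rbinom(n, 1, 0.5)                # random assignment to treatment
baseline <- sample(1:5, n, replace = TRUE,   # control-group support levels
                   prob = c(.05, .10, .20, .35, .30))
# Assume the endorsement lowers support by one point for 30% of respondents:
support <- pmax(1, baseline - rbinom(n, 1, 0.3) * endorsed)

# Raw endorsement effect: difference in mean support across conditions.
effect <- mean(support[endorsed == 1]) - mean(support[endorsed == 0])
se <- sqrt(var(support[endorsed == 1]) / sum(endorsed == 1) +
           var(support[endorsed == 0]) / sum(endorsed == 0))
cat("endorsement effect:", round(effect, 2), "SE:", round(se, 2), "\n")
```

A negative effect indicates that associating the policy with the group lowers stated support on average, suggesting the group is disliked.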
As endorsement experiments avoid direct questioning about sensitive topics, respondents feel more comfortable answering questions, reducing non-response rates. Because this method provides a reasonable degree of plausible deniability, respondents are more likely to provide truthful replies, reducing bias due to risk of disclosure and social desirability. This method can potentially mitigate bias due to taboo (intrusive topics) if researchers can phrase questions in such a way that respondents do not feel that intrusive words are being associated with them. It cannot, however, mitigate biases due to self-image because it does not deal with misperceptions that individuals have about themselves.
In a study on support for Islamist militant groups in Pakistan, researchers included questions about support for polio vaccination, among other policies.Footnote 18 The respondents in the control group received the following message: ‘The World Health Organization recently announced a plan to introduce universal Polio vaccination across Pakistan. How much do you support such a policy?’
The respondents in the treatment group were administered this slightly different statement and question, one which associated the policy with one of four militant groups active in the country at the time: ‘The World Health Organization recently announced a plan to introduce universal Polio vaccination across Pakistan. Pakistani militant groups fighting in Kashmir have voiced support for this program. How much do you support such a policy?’Footnote 19
Compared to the direct questions about the militant groups in this study, the endorsement experiment questions received much lower non-response rates. For instance, while the non-response rate for direct questions ranged from 22% (questions about Al-Qaeda) to 6% (questions about the Kashmir Tanzeem), the non-response rate for endorsement experiments was much lower, ranging from 7.6 to 0.6%.
In addition to measuring sensitive attitudes, endorsement experiments can be used to study sensitive political behaviors. One study used an endorsement experiment to study voting ‘no’ on a personhood referendum in Mississippi.Footnote 20 The researchers administered two slightly different primes to the treatment and control groups, as in the following box.
Endorsement experiment assessing behavior

| Control group | Treatment group |
| --- | --- |
| We’d like to get your overall opinion of some people in the news. As I read each name, please say if you have a very favorable, somewhat favorable, somewhat unfavorable, or very unfavorable opinion of each person. Phil Bryant, Governor of Mississippi? Very favorable / Somewhat favorable / Don’t know/no opinion / Somewhat unfavorable / Very unfavorable / Refused | We’d like to get your overall opinion of some people in the news. As I read each name, please say if you have a very favorable, somewhat favorable, somewhat unfavorable, or very unfavorable opinion of each person. Phil Bryant, Governor of Mississippi, who campaigned in favor of the ‘Personhood’ Initiative on the 2011 Mississippi General Election ballot? |
By obfuscating the researcher’s intention and object of evaluation, endorsement experiments are useful in reducing non-response bias and recovering estimates of sensitive attitudes. Official results from an anti-abortion referendum in Mississippi in 2011 showed that while direct questioning significantly underestimated the votes against the referendum (by close to 20% in most counties) and had significant non-response rates, the endorsement experiment and list experiment—discussed below—reduced item non-response and removed approximately half the underestimate of ‘no’ votes. In contrast, randomized response methods—also discussed below—almost completely recovered the known vote shares.Footnote 21
A number of studies have utilized endorsement experiments to study a range of sensitive topics, particularly support for the state and insurgents in fragile states.Footnote 22 A useful resource on this topic is a comprehensive guide for, and illustration of, questioning strategy, regression methods, and analysis tools (including a software package in R) for endorsement experiments.Footnote 23
The advantage of an endorsement experiment is that it obscures the object of the evaluation above and beyond concealing the respondent’s answer to the sensitive question. The main disadvantage is that a latent variable model is needed to estimate sensitive behavior and attitudes. In addition, the endorsement effect has no obvious scale: it is unclear a priori how a given change in support for a policy, when the policy is associated with a group versus not, maps onto a standard Likert scale running from strongly supporting the group to strongly opposing it. Its estimates are also statistically inefficient (in the sense of requiring a larger sample to achieve a given confidence interval) compared to the other indirect methods discussed below.Footnote 24
2.2 List Experiments
List experiments try to mitigate sensitivity biases by introducing uncertainty through aggregation. This method, also referred to as the ‘item count technique,’ has been used extensively to study racial attitudes and prejudice as well as voter turnout and vote buying.Footnote 25
As in an endorsement experiment, the sample is randomly divided into treatment and control groups. Both groups are asked to report the total number of items on a list that they view as favorable or unfavorable (or the number of actions they have taken), without identifying which specific items those are. The two groups receive similar lists, except that the response options for the treatment group include one additional item: the sensitive item that is the subject of the study.
As with endorsement experiments, list experiments can be used to study both sensitive attitudes and behavior.Footnote 26 A list experiment to study vote buying in Nicaragua found that almost one quarter of voters were offered gifts or services in exchange for votes while only 3% reported such activities when asked directly.Footnote 27 The following box shows the control and treatment statements used for assessing vote buying.
Regression techniques can be used to analyze list experiment data; recent work illustrates their application to the study of racial hatred in the US based on the 1991 National Race and Politics Survey.Footnote 28 A wide range of studies has relied on list experiments for studying sensitive topics.Footnote 29
List experiment assessing behavior

| Control group | Treatment group |
| --- | --- |
| I’m going to hand you a card that mentions various activities, and I would like for you to tell me if they were carried out by candidates or activists during the last electoral campaign. Please, do not tell me which ones, only HOW MANY: • they put up campaign posters or signs in your neighborhood/city • they visited your home • they placed campaign advertisements on television or radio • they threatened you to vote for them | I’m going to hand you a card that mentions various activities, and I would like for you to tell me if they were carried out by candidates or activists during the last electoral campaign. Please, do not tell me which ones, only HOW MANY: • they put up campaign posters or signs in your neighborhood/city • they visited your home • they placed campaign advertisements on television or radio • they threatened you to vote for them • they gave you a gift or did you a favor |
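Analytically, the design requires nothing more than a comparison of mean counts across the two groups: the difference estimates the prevalence of the sensitive item. A minimal sketch in R with simulated data (the 25% prevalence echoes the Nicaragua estimate above; the sample size and the control items’ endorsement probability are illustrative assumptions):

```r
# Minimal list-experiment estimator (simulated data).
# Control respondents count 4 non-sensitive items; treatment respondents
# count the same 4 items plus the sensitive one, as in the box above.
set.seed(7)
n <- 2000
treat <- rbinom(n, 1, 0.5)
nonsensitive <- rbinom(n, 4, 0.4)          # count of the 4 control items
sensitive    <- rbinom(n, 1, 0.25)         # true prevalence: 25%
count <- nonsensitive + treat * sensitive  # respondents report only totals

# Prevalence of the sensitive item is the difference in mean counts.
prev <- mean(count[treat == 1]) - mean(count[treat == 0])
se   <- sqrt(var(count[treat == 1]) / sum(treat) +
             var(count[treat == 0]) / sum(1 - treat))
cat("estimated prevalence:", round(prev, 3), "SE:", round(se, 3), "\n")
```

The variance contributed by the non-sensitive items inflates the standard error well beyond that of a direct question, which is the sample-size cost discussed in Sect. 3.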
The advantage of list experiments is that respondents do not disclose whether the sensitive item applies to them. By concealing which items a respondent has favorable or unfavorable views about, the list experiment can reduce non-response rates and mitigate biases due to risk of disclosure and social desirability. Since respondents never reveal which items they agree or disagree with, the method can allay respondents’ fear of disclosing their views and their concerns about reference groups: by reporting only the number of favorable or unfavorable items, they can deny any reference to the sensitive item. This method, however, cannot mitigate biases due to taboo, since the intrusive words must be mentioned either in the question or in the options; nor can it reduce biases due to self-image. The main drawback of this approach is the problem of floor and ceiling effects. In the example above, if a respondent has experienced all the control items, an honest response is no longer obscured: it reveals that the respondent received a gift or favor in exchange for a vote. This is an example of the ceiling effect.Footnote 30
In a comprehensive meta-analysis of list experiments applied to political attitudes and behaviors, the list experiment performs well, both in terms of recovering estimates consistent with direct questions about non-sensitive behaviors and in terms of reducing bias.Footnote 31
2.3 Randomized Response
The randomized response approach estimates population-level quantities while obscuring respondents’ individual answers by introducing noise into the responses.Footnote 32 In this approach, respondents rely on a random outcome (such as flipping a coin) to add noise to the response, noise whose distribution the researcher knows and can therefore remove from population-level summaries of the responses.
Randomized response questions come in two variants. In the disguised response version, the respondent is given two questions (an innocuous question and a sensitive question) and asked to flip a coin or use another randomizing device out of sight of the surveyor. The coin flip determines which of the two questions the respondent answers. In the forced response version, the respondent is asked to answer the sensitive question, but the randomizing device sometimes dictates the answer, so that no individual reply reveals the respondent’s true status. The following box illustrates these techniques.
Randomized response

| Disguised response | Forced response |
| --- | --- |
| Please flip a coin, but do not tell me what you got. If you receive heads, answer question A; otherwise answer question B. Do not tell me what you got, just answer the question based on your coin flip. Question A: Did your coin land on heads? Yes/No. Question B: Have you ever shoplifted? Yes/No | For this question, I want you to answer yes or no. But I want you to consider the number of your dice throw. If 1 shows on the dice, tell me no. If 6 shows, tell me yes. But if another number, like 2 or 3 or 4 or 5 shows, tell me your own opinion about the question that I will ask you after you throw the dice. [TURN AWAY FROM THE RESPONDENT] Now you throw the dice so that I cannot see what comes out. Please do not forget the number that comes out. Now, during the height of the conflict in 2007 and 2008, did you know any militants, like a family member, a friend, or someone you talked to on a regular basis? Please, before you answer, take note of the number you rolled on the dice |
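Because the researcher knows the distribution of the forced answers, population prevalence can be recovered by inverting the response mechanism. In the dice design above, a ‘no’ is forced with probability 1/6, a ‘yes’ with probability 1/6, and a truthful answer is given with probability 4/6, so the observed share of ‘yes’ answers equals 1/6 + (4/6)·π, where π is the true prevalence. A minimal sketch in R (simulated data; the 20% true prevalence and the sample size are illustrative assumptions):

```r
# Minimal forced-response estimator for the dice design shown above:
# a roll of 1 forces "no" (prob 1/6), a roll of 6 forces "yes" (prob 1/6),
# and rolls 2-5 (prob 4/6) elicit the truthful answer.
set.seed(3)
n <- 1500
truth <- rbinom(n, 1, 0.20)              # true (unobserved) sensitive trait
roll  <- sample(1:6, n, replace = TRUE)  # each respondent's private roll
answer <- ifelse(roll == 1, 0,           # forced "no"
          ifelse(roll == 6, 1, truth))   # forced "yes", otherwise truthful

# Observed P(yes) = 1/6 + (4/6) * pi, so invert to recover prevalence:
p_yes  <- mean(answer)
pi_hat <- (p_yes - 1/6) / (4/6)
se <- sqrt(p_yes * (1 - p_yes) / n) / (4/6)  # delta-method standard error
cat("estimated prevalence:", round(pi_hat, 3), "SE:", round(se, 3), "\n")
```

The division by 4/6 in the standard error shows the price of the added noise: the estimate is less precise than a direct question on the same sample. The R package for randomized response cited in the next paragraph implements regression-based versions of this estimator.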
Although the randomized response approach has not been used as widely as endorsement and list experiments, because it is somewhat harder to explain to respondents, it is an effective method for studying sensitive attitudes and behaviors in contexts where the population is familiar with a randomization device such as dice.Footnote 33 The randomized response technique has been used to study social connections and contacts with members of armed groups in Nigeria, a topic that was not only sensitive but could even pose security threats to respondents and surveyors if asked about directly. The method has also been used to estimate a range of sensitive behaviors, from applicant faking to cheating and drug use.Footnote 34 The Nigeria study used a multivariate regression technique, and its authors provide guidance on power analysis and robust design for randomized response, an illustration of applying the technique to contacts with armed groups, and a software package in R for data analysis.Footnote 35,Footnote 36
Validation studies of the randomized response approach have produced mixed results. A number of them find that randomized response yields less biased estimates than direct questioning and reduces item non-response, although it is not always better than list and endorsement experiments. In a validation against the Mississippi referendum on the ‘Personhood Initiative,’ the authors found that randomized response outperformed the other methods in reducing bias.Footnote 37 Compared to the actual referendum results, the bias in the weighted estimate of support for the referendum was only 0.04 for randomized response, versus 0.236 for the direct question, 0.149 for the list experiment, and 0.069 for the endorsement experiment. The method was not the best at reducing non-response, however. Although the non-response rate for randomized response (13%) was lower than for the direct question (20%), it was much higher than for the list experiment (2%) and the endorsement experiment (0.003%).
The main disadvantage of the randomized response approach is that it requires respondents to administer the randomization, which can lead to high rates of item non-response and even survey attrition. Furthermore, using randomizing devices or flipping coins may be culturally inappropriate in some contexts. A number of validation studies report higher non-response rates and less valid estimates for randomized response than for list experiments, although other studies have found more favorable results and smaller non-response rates.Footnote 38
2.4 Behavioral Approaches
Behavioral approaches mitigate sensitivity bias through direct observation of behaviors that reveal preferences, without direct inquiry about those preferences. Two common approaches are dictator games, in which participants decide whether to share money with another participant, and ‘offer’ experiments, in which respondents decide whether or not to accept an amount of money. The strength of these approaches lies in their indirect measurement of sensitive attitudes and the high degree to which they obfuscate the objective of the research.
Behavioral approaches have been used to study a range of attitudes and behaviors, such as discrimination and xenophobia, altruism and prosocial behavior, religious beliefs, and anti-American attitudes.Footnote 39 For instance, one study used financial costs to indirectly study anti-American identity in Pakistan.Footnote 40 Study participants were offered Pakistani Rupees (Rs.) 100 or 500 (at a time when the daily wage of a manual laborer was between Rs. 400 and 500) merely for checking a box to thank the donor. As shown in the box below, in one version of the instrument the donor was local (the Lahore University of Management Sciences, LUMS), while in the second version it was foreign (the US government).
Behavioral approach: Revealed preference

| Local donor | Foreign donor |
| --- | --- |
| You are one of 50% who are taking this survey receiving this offer to receive an additional Rs. 100. Funding for this bonus payment comes from LUMS. We can pay you Rs. 100 for completing the survey, but in order to receive the bonus payment you are required to acknowledge receipt of the funds provided by LUMS and thank the funder. Option 1: I gratefully thank LUMS for its generosity and accept the payment from them. Option 2: I do not accept the payment | You are one of 50% who are taking this survey receiving this offer to receive an additional Rs. 100. Funding for this bonus payment comes from the US government. We can pay you Rs. 100 for completing the survey, but in order to receive the bonus payment you are required to acknowledge receipt of the funds provided by the US government. Option 1: I gratefully thank the US government for its generosity and accept the payment from them. Option 2: I do not accept the payment |
The study in Pakistan found that when participants make the decision privately and the source of the funds is the US government, almost one quarter of them forgo the Rs. 100 payment.Footnote 41 However, when they expect their decision to be public, a significantly smaller proportion (around 10%) rejects the payment. The authors conclude that since participants expect the majority to accept the payment from the US government, a substantial share of them (15%) conform to the majority and accept a payment they would reject in private. When the payment is increased to Rs. 500, the rejection rate falls from the 25% observed at Rs. 100, but a significant proportion of participants (10%) still forgo the payment.
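The public-versus-private comparison at the heart of this finding reduces to testing whether rejection rates differ across two conditions. A minimal sketch in R (the cell counts are hypothetical stand-ins chosen to match the reported 25% and 10% rejection rates, not the study’s actual data):

```r
# Hypothetical counts: number rejecting the US-government payment
# out of 200 respondents in the private and public conditions.
rejected <- c(50, 20)        # 25% in private, 10% in public
n_cond   <- c(200, 200)
prop.test(rejected, n_cond)  # two-sample test of equal rejection rates
```

With these counts the test rejects equality of the two rates, consistent with the interpretation that social image leads some respondents to accept in public a payment they would refuse in private.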
3 Practical Issues
In addition to being useful tools for recovering truthful responses, the indirect methods reviewed in this chapter have a number of practical advantages over direct questioning. First, they help reduce survey staff vulnerability, which can be particularly important in conflict settings. By masking the nature of the question itself, they offer survey staff some protection when local authorities do not allow sensitive questions to be asked, despite legal protections. There is also the added benefit that plausible deniability may protect individuals by not revealing their true responses at the individual level in case the survey instruments are compromised. These issues typically do not arise in non-conflict settings but matter greatly where protecting individual responses is critical.
Although the indirect methods for studying sensitive topics outperform direct questioning in many settings, they also have limitations. First, the indirect methods add noise to the estimates, which means that for any given level of statistical power, much larger samples are required to measure group-level differences, as the sketch below illustrates.Footnote 42 Although scholars have proposed ways to reduce noise and remedy the problem of large samples in some cases (such as using double lists or negatively correlated items in a list experiment), the requirement of a large sample remains an important drawback of these indirect methods.Footnote 43 Second, these methods require much more extensive pre-testing and preparation than direct questions, which increases the costs (both financial and human) of studying the same topics and can stretch the research timeline. Third, although these methods reduce sensitivity bias, they cannot overcome incentive compatibility issues: they may not give respondents an incentive to reveal their true views even when respondents are assured that their individual views will not be disclosed. In essence, these methods lower the cost of expressing views for respondents who are willing to express them; for respondents who see an advantage in concealing their views and attitudes, they provide no incentive to do otherwise. Some behavioral approaches overcome this problem by imposing costs on subjects who do not reveal their preferences, but the three indirect methods impose no such costs.Footnote 44
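The noise cost can be made concrete by comparing standard errors at the same sample size. The sketch below (in R; the 25% prevalence, four-item list, and 40% item endorsement probability are illustrative assumptions, and the control items are assumed independent, which the negatively correlated designs mentioned above would improve on) contrasts a direct question with a list experiment:

```r
# Standard error of a direct question vs. a list experiment,
# same sample size and prevalence (all numbers hypothetical).
n <- 1000
prev_true <- 0.25                                  # prevalence of sensitive trait

se_direct <- sqrt(prev_true * (1 - prev_true) / n) # one-sample proportion

# List experiment: difference of mean counts across two half-samples,
# with J = 4 independent control items each endorsed with probability 0.4.
var_count_c <- 4 * 0.4 * 0.6                       # control-count variance
var_count_t <- var_count_c + prev_true * (1 - prev_true)
se_list <- sqrt(var_count_t / (n / 2) + var_count_c / (n / 2))

cat("SE direct:", round(se_direct, 3),
    "SE list:", round(se_list, 3),
    "ratio:", round(se_list / se_direct, 1), "\n")
```

Under these assumptions the list estimator’s standard error is nearly five times larger, so matching the direct question’s precision would require a sample over twenty times as large.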
The most important lesson learned from studies that have utilized indirect methods, however, is the significance of pre-testing. Endorsement experiments require finding political issues on which the groups in question would plausibly take a stand and that all relate to the same latent policy dimension. Properly implementing list experiments requires choosing control items so that floor and ceiling effects are avoided for almost all respondents. And randomized response requires finding a culturally appropriate randomization device and choosing the appropriate type of question. In short, all indirect methods require much more pre-testing of questions and instruments than traditional direct questions do in order to ensure that they can recover the truthful replies in which researchers are interested.
Given the cultural and contextual diversity of FCV settings, some of these methods may work in some contexts but not in others. It is very important to select the appropriate method in light of the concerns and the context in which the research is conducted. Finally, where feasible, researchers should consider validating the findings of indirect methods by comparing them against census or social media data whenever such data are available.
Notes
- 1.
Our formulation here and in Sect. 2 draws heavily on Graeme Blair, Alexander Coppock, and Margaret Moor (2018), “When to Worry About Sensitivity Bias: Evidence from 500 List Experiments.” Draft. The authors conduct a thorough meta-analysis of more than 500 list experiments (technique explained below).
- 2.
Steele, Claude M., Steven J. Spencer, and Michael Lynch (1993), “Self-Image Resilience and Dissonance: The Role of Affirmational Resources,” Journal of Personality and Social Psychology 64 (6): 885–896; Liu, T. J., and C. M. Steele (1986), “Attributional Analysis as Self-Affirmation,” Journal of Personality and Social Psychology 51: 531–540.
- 3.
Tourangeau, Roger, Lance J. Rips, and Kenneth Rasinski (2000), The Psychology of Survey Response. Cambridge: Cambridge University Press.
- 4.
Blair et al. (2018).
- 5.
Reminders of local insecurity reduce response rates on sensitive topics more than on other topics in a recent survey experiment in Somalia. Denny, Elaine, and Jesse Driscoll (2018), “Calling Mogadishu: How Reminders of Anarchy Bias Survey Participation,” The Journal of Experimental Political Science. For an early paper on these challenges of measurement, see Bullock, Will, Kosuke Imai, and Jacob N. Shapiro (2011), “Statistical Analysis of Endorsement Experiments: Measuring Support for Militant Groups in Pakistan,” Political Analysis 19: 363–384.
- 6.
Nederhof, Anton J. (1985), “Methods of Coping with Social Desirability Bias: A Review,” European Journal of Social Psychology 15: 263–280; Rosenthal, Robert (1963), “On the Social Psychology of the Psychological Experiment: The Experiment’s Hypothesis as Unintended Determinant of Experimental Results,” American Scientist 51: 268–283; and Rosenthal, Robert (1966), Experimenter Effects in Behavioral Research. New York: Appleton Century-Crofts.
- 7.
Nederhof (1985: 264).
- 8.
Blair et al. (2018) and Tajfel, Henri, and John C. Turner (1979), “An Integrative Theory of Intergroup Conflict,” The Social Psychology of Intergroup Relations 33 (47): 74.
- 9.
Bursztyn et al. (2017).
- 10.
Rosenthal (1963, 1966).
- 11.
DellaVigna et al. (2016).
- 12.
DellaVigna, Stefano, John A. List, Ulrike Malmendier, and Gautam Rao (2016), “Voting to Tell Others,” The Review of Economic Studies 84 (1): 143–181.
- 13.
For example, when estimating the correlation between receiving aid and support for militant groups, one might worry that respondents in pro-militant communities are more reluctant to express support if they have received aid because they fear future aid would be withheld. They therefore avoid the question at higher rates than those in other communities, leading one to erroneously conclude that receiving aid is negatively correlated with support for militants.
- 14.
We thank Graeme Blair for excellent advice on how to frame these issues.
- 15.
For statistical software and several papers employing these methods, see Graeme Blair and Kosuke Imai’s excellent website: http://sensitivequestions.org.
- 16.
Sniderman, Paul M., and Thomas Piazza (1993), The Scar of Race. Boston: Harvard University Press; Blair, Graeme, C. Christine Fair, Neil Malhotra, and Jacob N. Shapiro (2012), “Poverty and Support for Militant Politics: Evidence from Pakistan,” American Journal of Political Science.
- 17.
Chaiken, S. (1980), “Heuristic Versus Systematic Information Processing and the Use of Source Versus Message Cues in Persuasion,” Journal of Personality and Social Psychology 39 (5): 752–766; Petty, Richard E., John T. Cacioppo, and David Schumann (1983), “Central and Peripheral Routes to Advertising Effectiveness: The Moderating Role of Involvement,” Journal of Consumer Research 10 (2): 135–146; and Wood, Wendy, and Carl A. Kallgren (1988), “Communicator Attributes and Persuasion: Recipients’ Access to Attitude-Relevant Information in Memory,” Personality and Social Psychology Bulletin 14 (1): 172–182.
- 18.
Blair et al. (2012).
- 19.
Blair et al. (2012).
- 20.
Rosenfeld, Bryn, Kosuke Imai, and Jacob N. Shapiro (2015), “An Empirical Validation Study of Popular Survey Methodologies for Sensitive Questions,” American Journal of Political Science, 1–20.
- 21.
Rosenfeld et al. (2015).
- 22.
See, for example: Lyall, Jason, Graeme Blair, and Kosuke Imai (2013), “Explaining Support for Combatants During Wartime: A Survey Experiment in Afghanistan.” American Political Science Review 107 (4): 679–705; and Blair, Graeme, Jason Lyall, and Kosuke Imai, (2014), “Comparing and Combining List and Endorsement Experiments: Evidence from Afghanistan,” American Journal of Political Science 58 (4): 1043–1063.
- 23.
Bullock et al. (2011). For the relevant software package in R and analysis tools, refer to http://endorse.sensitivequestions.org/.
- 24.
Rosenfeld et al. (2015).
- 25.
Raghavarao, Damaraju, and Walter T. Federer (1979), “Block Total Response as an Alternative to the Randomized Response Method in Surveys,” Journal of the Royal Statistical Society, Series B (Statistical Methodology) 41 (1): 40–45; Gonzalez-Ocantos, Ezequiel, Chad Kiewiet de Jonge, Carlos Meléndez, Javier Osorio, and David W. Nickerson (2012), “Vote Buying and Social Desirability Bias: Experimental Evidence from Nicaragua,” American Journal of Political Science 56: 202–217; Kuklinski, J., M. Cobb, and M. Gilens (1997), “Racial Attitudes and the ‘New South,’” Journal of Politics 59 (2): 323–349; and Holbrook, A. L., and J. A. Krosnick (2010), “Social Desirability Bias in Voter Turnout Reports: Tests Using the Item Count Technique,” Public Opinion Quarterly 74 (1): 37–67.
- 26.
For examples of research using list experiment to study racial attitudes see Kuklinski et al. (1997) and Kuklinski, J., P. Sniderman, K. Knight, T. Piazza, P. Tetlock, G. Lawrence, and B. Mellers (1997), “Racial Prejudice and Attitudes Toward Affirmative Action,” American Journal of Political Science 41 (2): 402–419.
- 27.
Gonzalez-Ocantos et al. (2012).
- 28.
Imai, Kosuke (2011), “Multivariate Regression Analysis for the Item Count Technique,” Journal of the American Statistical Association 106 (494): 407–417. The software package in R for analysis of list experiments can be obtained at http://list.sensitivequestions.org/.
- 29.
Blair et al. (2018).
- 30.
Rosenfeld et al. (2015) and Glynn, Adam N. (2013), “What Can We Learn with Statistical Truth Serum? Design and Analysis of the List Experiment,” Public Opinion Quarterly 77: 159–172.
- 31.
Blair et al. (2018).
- 32.
Warner, Stanley L. (1965), “Randomized Response: A Survey Technique for Eliminating Evasive Answer Bias,” Journal of the American Statistical Association 60 (309): 63–69.
- 33.
Blair, Graeme, Kosuke Imai, and Yang-Yang Zhou (2015), “Design and Analysis of the Randomized Response Technique,” Journal of the American Statistical Association 110 (511): 1304–1319.
- 34.
Donovan, John J., Stephen A. Dwight, and Gregory M. Hurtz (2003), “An Assessment of the Prevalence, Severity, and Verifiability of Entry-Level Applicant Faking Using the Randomized Response Technique,” Human Performance 16 (1): 81–106; Scheers, N. J., and C. Mitchell Dayton (1987), “Improved Estimation of Academic Cheating Behavior Using the Randomized Response Technique,” Research in Higher Education 26 (1): 61–69; Goodstadt, Michael S., and Valerie Gruson (1975), “The Randomized Response Technique: A Test on Drug Use,” Journal of the American Statistical Association 70 (352): 814–818; and Clark, Stephen J., and Robert A. Desharnais (1998), “Honest Answers to Embarrassing Questions: Detecting Cheating in the Randomized Response Model,” Psychological Methods 3 (2): 160–168.
- 35.
Blair, Graeme, Kosuke Imai, and Yang-Yang Zhou (2015), “Design and Analysis of the Randomized Response Technique,” Journal of the American Statistical Association 110 (511): 1304–1319.
- 36.
The software package in R can be obtained at http://rr.sensitivequestions.org/.
- 37.
Rosenfeld et al. (2015).
- 38.
For the discussion of advantages and disadvantages of randomized response, see Rosenfeld et al. (2015).
- 39.
Studies of discrimination and xenophobia include Becker, Gary S. (1957), The Economics of Discrimination. Chicago: University of Chicago Press; Bursztyn, Leonardo, Georgy Egorov, and Stefano Fiorin (2017), “From Extreme to Mainstream: How Social Norms Unravel,” NBER Working Paper No. 23415, May 2017; Rao, Gautam (2013), “Familiarity Does Not Breed Contempt: Diversity, Discrimination and Generosity in Delhi Schools,” Working Paper, https://scholar.harvard.edu/rao/publications/familiarity-does-not-breed-contempt-diversity-discrimination-and-generosity-delhi. For altruism and prosocial behavior, see Andreoni, James (1990), “Impure Altruism and Donations to Public Goods: A Theory of Warm-Glow Giving,” Economic Journal 100: 464–477; DellaVigna, Stefano, John A. List, and Ulrike Malmendier (2012), “Testing for Altruism and Social Pressure in Charitable Giving,” Quarterly Journal of Economics 127 (1): 1–56; and Ariely, Dan, Anat Bracha, and Stephan Meier (2009), “Doing Good or Doing Well? Image Motivation and Monetary Incentives in Behaving Prosocially,” American Economic Review 99 (1): 544–555. For studies using monetary offers to study religiosity, see Augenblick, Ned, Jesse M. Cunha, Ernesto Dal Bó, and Justin M. Rao (2012), “The Economics of Faith: Using an Apocalyptic Prophecy to Elicit Religious Beliefs in the Field,” NBER Working Paper No. 18641, December 2012; Condra, Luke N., Mohammad Isaqzadeh, and Sera Linardi (2017), “Clerics and Scriptures: Experimentally Disentangling the Influence of Religious Authority in Afghanistan,” British Journal of Political Science, 1–19.
- 40.
Bursztyn et al. (2017).
- 41.
Bursztyn et al. (2017).
- 42.
Blair et al. (2018) show that most prior list experiments have been underpowered and recommend using direct questions for all but the most sensitive questions unless large samples can be obtained.
- 43.
For discussion of how to address ceiling effect and reduce noise in list experiments see Glynn (2013).
- 44.
In Bursztyn et al. (2017), for instance, costs are imposed on subjects (forgoing payments from the U.S. government) for expressing anti-American identity. Dictator games and “offer” experiments likewise use financial incentives to study altruism.
References
Blair, Graeme, C. Christine Fair, Neil Malhotra, and Jacob N. Shapiro. (2012). “Poverty and Support for Militant Politics: Evidence from Pakistan.” American Journal of Political Science.
Bullock, Will, Kosuke Imai, and Jacob N. Shapiro. (2011). “Statistical Analysis of Endorsement Experiments: Measuring Support for Militant Groups in Pakistan.” Political Analysis 19: 363–384.
Fair, C. Christine, Neil Malhotra, and Jacob N. Shapiro. (2013). “Democratic Values and Support for Militant Politics: Evidence from a National Survey of Pakistan.” Journal of Conflict Resolution, 1–28.
Fair, C. Christine, Rebecca Littman, Neil Malhotra, and Jacob N. Shapiro. (2016). “Relative Poverty, Perceived Violence, and Support for Militant Politics: Evidence from Pakistan.” Political Science Research and Methods.
Lyall, Jason, Yuki Shiraito, and Kosuke Imai. (2015). “Coethnic Bias and Wartime Informing.” The Journal of Politics 77 (3): 833–848.
Sniderman, Paul M., and Thomas Piazza. (1993). The Scar of Race. Boston: Harvard University Press.
Rights and permissions
The opinions expressed in this chapter are those of the author(s) and do not necessarily reflect the views of the International Bank for Reconstruction and Development/The World Bank, its Board of Directors, or the countries they represent.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 3.0 IGO license (https://creativecommons.org/licenses/by/3.0/igo/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the International Bank for Reconstruction and Development/The World Bank, provide a link to the Creative Commons license and indicate if changes were made.
Any dispute related to the use of the works of the International Bank for Reconstruction and Development/The World Bank that cannot be settled amicably shall be submitted to arbitration pursuant to the UNCITRAL rules. The use of the International Bank for Reconstruction and Development/The World Bank's name for any purpose other than for attribution, and the use of the International Bank for Reconstruction and Development/The World Bank's logo, shall be subject to a separate written license agreement between the International Bank for Reconstruction and Development/The World Bank and the user and is not authorized as part of this CC-IGO license. Note that the link provided above includes additional terms and conditions of the license.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2020 International Bank for Reconstruction and Development/The World Bank
About this chapter
Cite this chapter
Isaqzadeh, M., Gulzar, S., Shapiro, J. (2020). Studying Sensitive Topics in Fragile Contexts. In: Hoogeveen, J., Pape, U. (eds) Data Collection in Fragile States. Palgrave Macmillan, Cham. https://doi.org/10.1007/978-3-030-25120-8_10
DOI: https://doi.org/10.1007/978-3-030-25120-8_10
Publisher Name: Palgrave Macmillan, Cham
Print ISBN: 978-3-030-25119-2
Online ISBN: 978-3-030-25120-8
eBook Packages: Economics and Finance; Economics and Finance (R0)