Can a List Experiment Improve Validity of Abortion Measurement?

Cited by: 24
Authors
Bell, Suzanne O. [1 ]
Bishai, David [1 ]
Affiliations
[1] Johns Hopkins Bloomberg Sch Publ Hlth, Baltimore, MD 21205 USA
Keywords
SENSITIVE QUESTIONS; COUNT TECHNIQUE; PREGNANCY; PREVALENCE; ANSWERS;
DOI
10.1111/sifp.12082
Chinese Library Classification
C921 [Demography]
Subject classification code
Abstract
Although induced abortion is common, measurement issues have long made this area of research challenging. The current analysis applies an indirect method known as the list experiment to try to improve survey-based measurement of induced abortion. We added a double list experiment to a population-based survey of reproductive-age women in Rajasthan, India, and compared the resulting abortion estimates to those obtained via direct questioning in the same sample. We then evaluated the list experiment's assumptions. The final sample completing the survey consisted of 6,035 women. Overall, 1.8 percent of the women reported a past abortion via the list experiment questions, whereas 3.5 percent reported an abortion via the direct questions, and this difference was statistically significant. As such, the list experiment failed to produce more valid estimates of this sensitive behavior in a population-based survey of reproductive-age women in Rajasthan, India. One explanation for the poor list experiment performance is our finding that key assumptions of the methodology were violated: women may have mentally enumerated the treatment list items differently from the way they enumerated the control list items. Further research is required to determine whether researchers can learn enough about how the list experiment performs in different contexts to effectively and consistently leverage its potential benefits to improve measurement of induced abortion.
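The list experiment estimate referred to in the abstract is the difference in mean item counts between respondents shown the list containing the sensitive item and respondents shown the control list; in a double list experiment, each respondent is a treatment case for one list and a control case for the other, and the two differences are averaged. The sketch below illustrates that estimator in Python using made-up response counts; the function names and data are hypothetical and do not reproduce the paper's analysis.

```python
# Minimal sketch of the item count (list experiment) estimator.
# All data here are illustrative, not the study's data.
import statistics


def list_experiment_estimate(treatment_counts, control_counts):
    """Prevalence estimate: mean count in the treatment group
    (control items + sensitive item) minus mean count in the
    control group (control items only)."""
    return statistics.mean(treatment_counts) - statistics.mean(control_counts)


def double_list_estimate(t_a, c_a, t_b, c_b):
    """Double list experiment: average the two single-list estimates,
    since each respondent serves as treatment for one list and
    control for the other."""
    est_a = list_experiment_estimate(t_a, c_a)
    est_b = list_experiment_estimate(t_b, c_b)
    return (est_a + est_b) / 2


if __name__ == "__main__":
    t_a = [2, 3, 1, 2, 4, 2]   # list A responses, sensitive item included
    c_a = [2, 2, 1, 2, 3, 2]   # list A responses, control items only
    t_b = [1, 2, 3, 1, 2, 2]   # list B responses, sensitive item included
    c_b = [1, 2, 2, 1, 2, 2]   # list B responses, control items only
    print(f"Estimated prevalence: {double_list_estimate(t_a, c_a, t_b, c_b):.3f}")
```

A notable design choice in the double list variant is that every respondent contributes to both a treatment and a control mean, which improves efficiency relative to a single-list design of the same sample size.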
Pages: 43-61
Page count: 19