Using sensors to measure technology adoption in the social sciences

Cited by: 0
Authors
Rom A. [1]
Günther I. [2 ]
Borofsky Y. [2 ]
Affiliations
[1] ETH4D, ETH Zurich, Clausiusstrasse 37, Zurich
[2] Development Economics Group, ETH Zurich
Keywords
Hawthorne effect; Measurement error; Self-report surveys; Sensor; Social desirability bias; Technology adoption
DOI
10.1016/j.deveng.2020.100056
Abstract
Empirical social science relies heavily on surveys to measure human behavior. Previous studies show that such data are prone to random error and to systematic biases caused by social desirability, recall challenges, and the Hawthorne effect. Moreover, collecting high-frequency survey data, which is important for outcomes that fluctuate, is often impossible. Innovation in sensor technology might address these challenges. In this study, we use sensors to describe solar light adoption in Kenya and analyze the extent to which survey data are limited by systematic and random error. Sensor data reveal that households used the lights for about 4 h per day. Frequent surveyor visits to a random sub-sample increased light use in the short term but had no long-term effect. Despite large measurement errors in the survey data, self-reported use does not differ from sensor measurements on average, and the differences are not correlated with household characteristics. However, mean-reverting measurement error stands out: households that used the light a lot tended to underreport, while households that used it little tended to overreport use. Finally, general usage questions provide more accurate information than asking about each hour of the day. Sensor data can serve as a benchmark for testing survey questions and seem especially useful for small-sample analyses. © 2020 The Authors
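The mean-reverting error pattern the abstract describes can be illustrated with a small simulation. This is a hypothetical sketch, not the paper's data or method: the true-use distribution, the reliability parameter `lam`, and the noise scale are all assumed for illustration. It shows how reports pulled toward the sample mean leave the average unbiased while making the report-minus-truth gap negatively correlated with true use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true daily light use in hours, centred near the
# ~4 h/day that the sensors recorded (distribution is assumed).
true_use = rng.gamma(shape=4.0, scale=1.0, size=10_000)

# Mean-reverting measurement error: each report is a weighted average
# of true use and the population mean, plus classical noise, so heavy
# users underreport and light users overreport.
lam = 0.6  # assumed reliability of self-reports
noise = rng.normal(0.0, 0.5, size=true_use.size)
reported = lam * true_use + (1 - lam) * true_use.mean() + noise

# On average, self-reports match the sensor benchmark...
print(f"mean true: {true_use.mean():.2f}, mean reported: {reported.mean():.2f}")

# ...but individual errors are large and systematically related to true use:
gap = reported - true_use
print(f"corr(true use, report gap): {np.corrcoef(true_use, gap)[0, 1]:.2f}")
```

Under these assumptions the printed correlation is strongly negative, which is exactly the signature the paper reports: unbiased means alongside mean-reverting individual errors.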