KOPS - The Institutional Repository of the University of Konstanz

Sensitive Questions in Surveys : A Comprehensive Meta-Analysis of Experimental Survey Studies on the Performance of the Item Count Technique


Cite This

Files in this item

Checksum: MD5:ff31c2651c5d6907677383176d608d8d

EHLER, Ingmar, Felix WOLTER, Justus JUNKERMANN, 2021. Sensitive Questions in Surveys : A Comprehensive Meta-Analysis of Experimental Survey Studies on the Performance of the Item Count Technique. In: Public Opinion Quarterly. Oxford University Press (OUP). 85(1), pp. 6-27. ISSN 0033-362X. eISSN 1537-5331. Available under: doi: 10.1093/poq/nfab002

@article{Ehler2021Sensi-56033, title={Sensitive Questions in Surveys : A Comprehensive Meta-Analysis of Experimental Survey Studies on the Performance of the Item Count Technique}, year={2021}, doi={10.1093/poq/nfab002}, number={1}, volume={85}, issn={0033-362X}, journal={Public Opinion Quarterly}, pages={6--27}, author={Ehler, Ingmar and Wolter, Felix and Junkermann, Justus} }

Abstract

In research on sensitive questions in surveys, the item count technique (ICT) has gained increased attention in recent years as a means of counteracting the problem of misreporting, that is, the under- and over-reporting of socially undesirable and socially desirable behaviors or attitudes. The performance of ICT compared with conventional direct questioning (DQ) has been investigated in numerous experimental studies, yielding mixed evidence. This calls for a systematic review.

For this purpose, the present article reports results from a comprehensive meta-analysis of experimental studies comparing ICT estimates of sensitive items to those obtained via DQ. In total, 89 research articles with 124 distinct samples and 303 effect estimates are analyzed. All studies rely on the "more (less) is better" assumption, meaning that higher (lower) estimates of negatively (positively) connoted traits or behaviors are considered more valid.

The results show (1) a significantly positive pooled effect of ICT on the validity of survey responses compared with DQ; (2) a pronounced heterogeneity in study results, indicating uncertainty that ICT would work as intended in future studies; and (3) as meta-regression models indicate, the design and characteristics of studies, items, and ICT procedures affect the success of ICT. There is no evidence for an overestimation of the effect due to publication bias.

Our conclusions are that ICT is generally a viable method for measuring sensitive topics in survey studies, but its reliability has to be improved to ensure a more stable performance.
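The abstract compares ICT estimates against direct questioning. For readers unfamiliar with how an ICT (list-experiment) prevalence estimate is obtained, a minimal sketch of the conventional difference-in-means estimator follows. The data here are hypothetical, and the article itself (a meta-analysis) does not prescribe this code; it is only the standard estimator underlying the experimental studies summarized.

```python
# Minimal sketch of the conventional ICT ("list experiment")
# difference-in-means estimator, using hypothetical respondent counts.
from statistics import mean

# The control group reports how many of J innocuous items apply to them;
# the treatment group sees the same list plus the sensitive item.
control_counts = [2, 1, 3, 2, 0, 2, 1, 3]    # counts out of J = 3 items
treatment_counts = [3, 2, 3, 2, 1, 3, 2, 4]  # counts out of J + 1 = 4 items

# Estimated prevalence of the sensitive trait:
# mean(treatment counts) - mean(control counts)
ict_estimate = mean(treatment_counts) - mean(control_counts)
print(round(ict_estimate, 3))  # → 0.75
```

Because respondents report only a count, never which items apply, the sensitive answer stays concealed at the individual level; validity is then judged under the "more (less) is better" assumption described above.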

Downloads since Dec 23, 2021

Ehler_2-817x4qalcptg5.pdf: 149 downloads

