Replication: How Well Do My Results Generalize Now? The External Validity of Online Privacy and Security Surveys.

arXiv (2022)

Abstract
Privacy and security researchers often rely on data collected through online crowdsourcing platforms such as Amazon Mechanical Turk (MTurk) and Prolific. Prior work--which used data collected in the United States between 2013 and 2017--found that MTurk responses regarding security and privacy were generally representative for people under 50 or with some college education. However, the landscape of online crowdsourcing has changed significantly over the last five years, with the rise of Prolific as a major platform and the increasing presence of bots. This work attempts to replicate the prior results about the external validity of online privacy and security surveys. We conduct an online survey on MTurk (n = 800), a gender-balanced survey on Prolific (n = 800), and a representative survey on Prolific (n = 800), and compare the responses to a probabilistic survey conducted by the Pew Research Center (n = 4272). We find that MTurk response quality has degraded over the last five years, and our results do not replicate the earlier finding about the generalizability of MTurk responses. By contrast, we find that data collected through Prolific is generally representative for questions about user perceptions and experiences, but not for questions about security and privacy knowledge. We also evaluate the impact of Prolific settings, attention-check questions, and statistical methods on the external validity of online surveys, and we develop recommendations about best practices for conducting online privacy and security surveys.
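As a rough illustration of the comparison the abstract describes, the sketch below tests whether a crowdsourced sample's answers to a single multiple-choice question match a probabilistic benchmark distribution. This is not the authors' code, and the counts and proportions are hypothetical placeholders, not data from the paper or from Pew; a chi-square goodness-of-fit test is one plausible choice among the statistical methods such a replication might use.

```python
# A minimal sketch, assuming hypothetical data: does a crowdsourced
# sample's answer distribution match a probabilistic benchmark (e.g., Pew)?
import numpy as np
from scipy.stats import chisquare

# Hypothetical answer counts for one multiple-choice knowledge question
# from an n = 800 crowdsourced sample (placeholder values).
sample_counts = np.array([412, 251, 137])

# Hypothetical benchmark answer proportions (placeholder values).
benchmark_proportions = np.array([0.48, 0.33, 0.19])

# Expected counts if the crowdsourced sample followed the benchmark.
expected = benchmark_proportions * sample_counts.sum()

# Chi-square goodness-of-fit test: a small p-value suggests the
# crowdsourced responses deviate from the benchmark distribution.
stat, p_value = chisquare(f_obs=sample_counts, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p_value:.4f}")
```

In practice such a test would be run per question, with corrections for multiple comparisons and, where appropriate, demographic weighting before comparing against the benchmark.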
Keywords
security surveys, privacy, results generalize, external validity