On reliability of annotations in contextual emotion imagery

Scientific Data (2023)

Abstract
We documented the relabeling process for a subset of a renowned database for emotion-in-context recognition, with the aim of promoting reliability in the final labels. To this end, emotion categories were organized into eight groups, and a large number of participants was recruited for tagging. A strict control strategy was enforced throughout the experiments, whose average duration was 13.45 minutes per day. Annotators were free to participate in any of the daily experiments (the average number of participants was 28), and a Z-score filtering technique was implemented to keep the annotations trustworthy. As a result, the agreement parameter Fleiss' Kappa varied increasingly from slight to almost perfect, revealing a coherent diversity across the experiments. Our results support the hypothesis that a small number of categories and a large number of voters benefit the reliability of annotations in contextual emotion imagery.
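The abstract names two statistical tools: a Z-score filter to screen out untrustworthy annotations, and Fleiss' Kappa to measure inter-annotator agreement. The paper does not specify its exact implementation, so the following is a minimal sketch of both in their textbook form; the threshold value and the per-item filtering granularity are assumptions for illustration.

```python
import numpy as np

def zscore_filter(scores, threshold=2.0):
    """Drop annotator scores lying more than `threshold` standard
    deviations from the mean score of the same item.
    `scores` is a 1-D array of ratings for a single item.
    The threshold is an assumed example value, not the paper's."""
    scores = np.asarray(scores, dtype=float)
    std = scores.std()
    if std == 0:                      # all annotators agree; nothing to drop
        return scores
    z = (scores - scores.mean()) / std
    return scores[np.abs(z) <= threshold]

def fleiss_kappa(counts):
    """Fleiss' Kappa for an (items x categories) count matrix,
    assuming every item received the same number of ratings."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]                  # ratings per item
    N = counts.shape[0]                        # number of items
    p_j = counts.sum(axis=0) / (N * n)         # overall category proportions
    # per-item observed agreement
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)
```

For example, five annotators splitting cleanly between two items and two categories (`[[5, 0], [0, 5]]`) yields a Kappa of 1.0 (perfect agreement), matching the upper end of the slight-to-almost-perfect range reported above.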
Keywords
Computer science,Human behaviour,Science,Humanities and Social Sciences,multidisciplinary