Using Retest-Adjusted Correlations As Indicators of the Semantic Similarity of Items.
Journal of Personality and Social Psychology (2023)
Abstract
Determining whether different items provide the same information or mean the same thing within a population is a central concern when determining whether different scales or constructs are overlapping or redundant. In the present study, we suggest that retest-adjusted correlations provide a valuable means of adjusting for item-level unreliability. More exactly, we suggest dividing the estimated correlation between items X and Y measured over measurement interval |d| by the average retest correlation of the items over the same measurement interval. For instance, if we correlate scores from items X and Y measured 1 week apart, their retest-adjusted correlation is estimated using their 1-week retest correlations. Using data from four inventories, we provide evidence that retest-adjusted correlations are significantly better predictors than raw-score correlations of whether two items are consensually regarded as "meaning the same thing" by judges. The results may provide the first empirical evidence that Spearman's (1904, 1910) suggested reliability adjustments do, in certain (perhaps very constrained!) circumstances, improve upon raw-score correlations as indicators of the informational or semantic equivalence of different tests.
Keywords
semantic similarity, test equivalence, reliability, nuances, item-level analysis
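The adjustment described in the abstract can be sketched in a few lines: correlate scores on item X at one occasion with scores on item Y at a second occasion separated by interval |d|, then divide by the average of the two items' retest correlations over the same interval. The function below is a minimal illustration of that arithmetic, not the authors' implementation; the function and variable names are hypothetical.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two score vectors."""
    return float(np.corrcoef(a, b)[0, 1])

def retest_adjusted_correlation(x_t1, x_t2, y_t1, y_t2):
    """Retest-adjusted correlation of items X and Y.

    x_t1, x_t2: scores on item X at time 1 and time 2 (interval |d| apart)
    y_t1, y_t2: scores on item Y at the same two occasions

    The cross-occasion correlation of X and Y is divided by the
    average of the two items' retest correlations over the same
    interval, as described in the abstract.
    """
    r_xy = pearson(x_t1, y_t2)   # X at time 1 vs. Y at time 2
    r_xx = pearson(x_t1, x_t2)   # retest correlation of item X
    r_yy = pearson(y_t1, y_t2)   # retest correlation of item Y
    return r_xy / ((r_xx + r_yy) / 2.0)
```

In the degenerate case where both items are perfectly stable and perfectly correlated, the adjusted value equals the raw value (1.0); with imperfect retest reliability the denominator shrinks below 1, so the adjusted correlation exceeds the raw cross-item correlation.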