Testing Stimulus Equivalence in Transformer-Based Agents

Alexis Carrillo, Moisés Betancort

Future Internet (2024)

Abstract
This study investigates the ability of transformer-based models (TBMs) to form stimulus equivalence (SE) classes. We employ BERT and GPT as TBM agents in SE tasks, evaluating their performance across training structures (linear series, one-to-many, and many-to-one) and relation types (select–reject, select-only). Our findings show that both models performed above the mastery criterion in the baseline phase across all simulations (n = 12). However, they exhibited limited success in the reflexivity, transitivity, and symmetry tests. Notably, both models succeeded only in the linear series structure with select–reject relations, failing in the one-to-many and many-to-one structures and in all select-only conditions. These results suggest that TBMs may be forming decision rules based on learned discriminations and reject relations, rather than responding according to equivalence class formation. The absence of reject relations appears to influence their responses and the occurrence of hallucinations. This research highlights the potential of SE simulations for: (a) comparative analysis of learning mechanisms, (b) explainability techniques for TBM decision-making, and (c) TBM benchmarking independent of pre-training or fine-tuning. Future investigations could explore upscaled simulations and the use of SE tasks within a reinforcement learning framework.
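As a rough illustration of the experimental setup described above, the sketch below generates baseline matching-to-sample (MTS) trials for three-member stimulus classes under the three training structures named in the abstract. It is a hypothetical reconstruction, not the authors' code: the function names (make_baseline_pairs, mts_trials), the stimulus labels (A1, B1, C1, ...), and the assumption of three classes are all illustrative.

```python
import random

N_CLASSES = 3  # assumed number of equivalence classes per simulation


def make_baseline_pairs(structure):
    """(sample member, correct comparison member) pairs trained at baseline."""
    return {
        "linear_series": [("A", "B"), ("B", "C")],  # A -> B -> C
        "one_to_many":   [("A", "B"), ("A", "C")],  # sample as node
        "many_to_one":   [("B", "A"), ("C", "A")],  # comparison as node
    }[structure]


def mts_trials(structure, seed=0):
    """Yield baseline MTS trials: a sample, shuffled comparisons, and the correct choice."""
    rng = random.Random(seed)
    for s_member, c_member in make_baseline_pairs(structure):
        for cls in range(1, N_CLASSES + 1):
            sample = f"{s_member}{cls}"
            correct = f"{c_member}{cls}"
            # Incorrect comparisons come from the other classes: explicit reject
            # relations in the select-reject condition, mere distractors in select-only.
            foils = [f"{c_member}{other}" for other in range(1, N_CLASSES + 1)
                     if other != cls]
            comparisons = [correct] + foils
            rng.shuffle(comparisons)
            yield {"sample": sample, "comparisons": comparisons, "correct": correct}


# Example: trained relations of the one-to-many structure.
# Derived relations (reflexivity, symmetry, transitivity) would be probed
# at test without feedback, analogously to these baseline trials.
for trial in mts_trials("one_to_many"):
    print(trial["sample"], trial["comparisons"], "->", trial["correct"])
```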
Keywords
stimulus equivalence, transformers, BERT, GPT, matching to sample, reject control, training structures