
Comparing the Performances of Chimpanzees (Pan troglodytes) and Gorillas (Gorilla gorilla gorilla) in Two Self-Awareness Tasks

Lisa-Claire Vanhooland, Constanze Mager, Aurora Teuben, Thomas Bugnyar, Jorg J. M. Massen

Alpine Entomology (2025) | SCI Q3

Department of Behavioral and Cognitive Biology | Royal Burgers' Zoo | University of Amsterdam | Animal Behaviour and Cognition

Abstract
Self-awareness has most commonly been studied in nonhuman animals by implementing mirror self-recognition (MSR) tasks. The validity of such tasks as a stand-alone method has, however, been debated due to their high interindividual variation (including in species deemed self-aware, like chimpanzees), their reliance on a single sensory modality, and their discrete outcomes (i.e., pass/fail); more generally, their ability to assess self-awareness at all has been questioned. Therefore, a greater variety of methods that assess different aspects of the self, while simultaneously contributing to a more gradualist view of self-awareness, would be desirable. One such method is the body-as-obstacle (BAO) task, which tests another dimension of bodily self-awareness. The ability to understand one's own body as an obstacle to the completion of a desired action emerges in young children at approximately the same age as mirror self-recognition, suggesting a shared mental representation. Although some recent studies have demonstrated body self-awareness in nonhuman animals, so far no studies outside of children have compared how individuals' performances relate across these two tasks. Here, we therefore administered both an MSR and a BAO task to chimpanzees and gorillas. We chose these species in particular because evidence for MSR in chimpanzees is well established, whereas results for gorillas have been mixed, a discrepancy that has been attributed to the design of MSR tasks and for which a BAO task might thus provide more conclusive evidence. We find that although only some chimpanzees showed evidence of mirror self-recognition, thus replicating previous findings on interspecies differences in MSR, chimpanzees and gorillas performed equally well in the BAO task. Yet we found no correlation between individuals' performances across the two tasks. We discuss the implications of these findings for the interpretation of BAO tasks as a possible alternative paradigm for the study of self-awareness in nonhuman animals.