Foraging Distance Distributions Reveal How Honeybee Waggle Dance Recruitment Varies with Landscape

Communications Biology (2024)

Royal Holloway, University of London | Imperial College London

Abstract
Honeybee (Apis mellifera) colonies use a unique collective foraging system, the waggle dance, to communicate and process the location of resources. Here, we present a means to quantify the effect of recruitment on colony forager allocation across the landscape by simply observing the waggle dance on the dancefloor. We show first, through a theoretical model, that recruitment leaves a characteristic imprint on the distance distribution of foraging sites that a colony visits, which varies according to the proportion of trips driven by individual search. Next, we fit this model to the real-world empirical distance distribution of forage sites visited by 20 honeybee colonies in urban and rural landscapes across South East England, obtained via dance decoding. We show that there is considerable variation in the use of dancing information in colony foraging, particularly in agri-rural landscapes. In our dataset, reliance on dancing increases as arable land gives way to built-up areas, suggesting that dancing may have the greatest impact on colony foraging in the complex and heterogeneous landscapes of forage-rich urban areas. Our model provides a tool to assess the relevance of this extraordinary behaviour across modern anthropogenic landscape types.

Honeybees communicate the location of flowers through the waggle dance. A method is presented to quantify how much a colony uses the waggle dance when foraging. Waggle dance use varies considerably and is used more in complex landscapes.
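The abstract's central quantitative idea is that recruitment leaves a characteristic imprint on the distribution of foraging distances, governed by the proportion of trips driven by individual search versus dance following. A minimal maximum-likelihood sketch of fitting such a proportion from dance-decoded distances is given below; the gamma and exponential component choices, the parameter names, and the helper fit_recruitment_share are illustrative assumptions, not the authors' actual model.

```python
# A minimal sketch, assuming the colony's forage-distance distribution can be
# written as a two-component mixture: a fraction p of trips follow dance
# recruitment and the remainder arise from individual search. The component
# distributions (gamma for recruited trips, exponential for search trips) and
# all parameter names are illustrative assumptions, not the paper's model.
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def mixture_pdf(d, p, k, theta, lam):
    """Density of foraging distance d under the assumed mixture:
    p * Gamma(k, theta) for recruitment-driven trips +
    (1 - p) * Exponential(lam) for search-driven trips."""
    recruited = stats.gamma.pdf(d, a=k, scale=theta)
    search = stats.expon.pdf(d, scale=lam)
    return p * recruited + (1.0 - p) * search

def neg_log_likelihood(params, distances):
    p, k, theta, lam = params
    dens = mixture_pdf(distances, p, k, theta, lam)
    return -np.sum(np.log(dens + 1e-12))  # small constant guards against log(0)

def fit_recruitment_share(distances):
    """Estimate the recruitment share p from dance-decoded forage distances (km)."""
    x0 = [0.5, 2.0, 1.0, 1.0]                            # initial guess
    bounds = [(0.01, 0.99), (0.1, 20), (0.01, 10), (0.01, 10)]
    result = minimize(neg_log_likelihood, x0, args=(distances,),
                      bounds=bounds, method="L-BFGS-B")
    return result.x  # (p, k, theta, lam)

# Usage with simulated data standing in for dance-decoded distances:
rng = np.random.default_rng(0)
simulated = np.concatenate([rng.gamma(2.0, 1.5, 700),    # "recruited" trips
                            rng.exponential(0.8, 300)])  # "search" trips
p_hat, *_ = fit_recruitment_share(simulated)
print(f"Estimated proportion of recruitment-driven trips: {p_hat:.2f}")
```

Under these assumptions, comparing the fitted recruitment share across colonies in different landscapes would reproduce the kind of urban-versus-agri-rural contrast the abstract describes.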