Automatic recognition of lactating sow postures by refined two-stream RGB-D faster R-CNN

Biosystems Engineering (2020)

Abstract
This paper proposes an end-to-end refined two-stream RGB-D Faster region-based convolutional neural network (R-CNN) algorithm, which fuses RGB-D image features in the feature extraction stage, for recognising five postures of lactating sows (standing, sitting, sternal recumbency, ventral recumbency, and lateral recumbency) in pig-farm scenes. Based on the Faster R-CNN algorithm, two CNNs were first used to extract the RGB image features and the depth image features. A proposed single RGB-D region proposal network was then used to generate regions of interest (ROIs) on the two types of RGB-D feature maps. Next, the features of the RGB-D ROIs were extracted and merged by a feature fusion layer. Finally, the fused RGB-D ROI features were input into a Fast R-CNN to obtain the recognition results. RGB-D image pairs of the five postures were acquired with a Kinect v2.0 sensor from 28 pens: 12,600 pairs randomly selected from the first 21 pens formed the training set, and 5,533 pairs randomly selected from the remaining 7 pens formed the test set. The proposed method was used to recognise the five postures of lactating sows. Concatenation fusion achieved the highest recognition accuracy on the test set, with average precisions for the five posture categories of 99.74%, 96.49%, 90.77%, 90.91%, and 99.45%, respectively. Compared with related methods (RGB-only, depth-only, RGB-D early fusion, and RGB-D late fusion), the proposed method attained the highest mean average precision. (C) 2019 IAgrE. Published by Elsevier Ltd. All rights reserved.
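To make the fusion step concrete, the sketch below shows ROI-level concatenation fusion of RGB and depth features in PyTorch. It is not the authors' implementation: the backbone channel count, ROI size, feature stride, and class count (5 postures + background) are illustrative assumptions, and the shared RGB-D region proposal network is represented only by the proposal boxes it would output.

# Minimal sketch (not the authors' code) of concatenation-based ROI feature fusion
# for a two-stream RGB-D Faster R-CNN head, assuming PyTorch/torchvision.
import torch
import torch.nn as nn
from torchvision.ops import roi_align


class TwoStreamRGBDHead(nn.Module):
    """Pools ROI features from the RGB and depth feature maps, concatenates them,
    and classifies each ROI into 5 sow postures (+ background)."""

    def __init__(self, channels=256, roi_size=7, num_classes=6):
        super().__init__()
        self.roi_size = roi_size
        # After concatenation each ROI feature has 2 * channels channels.
        self.fc = nn.Sequential(
            nn.Flatten(),
            nn.Linear(2 * channels * roi_size * roi_size, 1024),
            nn.ReLU(inplace=True),
        )
        self.cls_score = nn.Linear(1024, num_classes)      # posture scores
        self.bbox_pred = nn.Linear(1024, 4 * num_classes)  # box regression

    def forward(self, rgb_feat, depth_feat, rois, stride=16):
        # rois: (N, 5) tensor of [batch_idx, x1, y1, x2, y2] proposals, produced by
        # a single RPN shared across both streams (as described in the abstract).
        rgb_roi = roi_align(rgb_feat, rois, output_size=self.roi_size,
                            spatial_scale=1.0 / stride)
        depth_roi = roi_align(depth_feat, rois, output_size=self.roi_size,
                              spatial_scale=1.0 / stride)
        fused = torch.cat([rgb_roi, depth_roi], dim=1)  # concatenation fusion
        x = self.fc(fused)
        return self.cls_score(x), self.bbox_pred(x)


if __name__ == "__main__":
    # Toy example: feature maps for one image at 1/16 resolution, one dummy proposal.
    rgb_feat = torch.randn(1, 256, 38, 60)
    depth_feat = torch.randn(1, 256, 38, 60)
    rois = torch.tensor([[0.0, 50.0, 40.0, 400.0, 300.0]])
    head = TwoStreamRGBDHead()
    scores, boxes = head(rgb_feat, depth_feat, rois)
    print(scores.shape, boxes.shape)  # torch.Size([1, 6]) torch.Size([1, 24])

The early- and late-fusion baselines mentioned in the abstract would differ mainly in where torch.cat is applied: at the input images (early) or at the per-stream detection outputs (late), rather than at the ROI features as shown here.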
Keywords
RGB-D, Faster R-CNN, Feature fusion, Lactating sow, Posture recognition