A typical camera trap survey may produce millions of images that require slow, expensive manual review. Consequently, critical conservation questions may be answered too slowly to support decision‐making. Recent studies demonstrated the potential for computer vision to dramatically increase efficiency in image‐based biodiversity surveys; however, the literature has focused on projects with a large set of labelled training images, and hence many projects with a smaller set of labelled images cannot benefit from existing machine learning techniques. Furthermore, even sizable projects have struggled to adopt computer vision methods because classification models overfit to specific image backgrounds (i.e. camera locations). In this paper, we combine the power of machine intelligence and human intelligence via a novel active learning system to minimize the manual work required to train a computer vision model. Furthermore, we utilize object detection models and transfer learning to prevent overfitting to camera locations. To our knowledge, this is the first work to apply an active learning approach to camera trap images. Our proposed scheme can match state‐of‐the‐art accuracy on a 3.2‐million‐image dataset with as few as 14,100 manual labels, reducing manual labelling effort by over 99.5%. Our trained models are also less dependent on background pixels, since they operate only on cropped regions around animals. The proposed active deep learning scheme can significantly reduce the manual labour required to extract information from camera trap images. Automation of information extraction will not only benefit existing camera trap projects, but can also catalyse the deployment of larger camera trap arrays.
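To illustrate the kind of loop the abstract describes, the sketch below shows one common active learning step: the current classifier scores unlabelled animal crops (i.e. detector output rather than full frames), and the most uncertain crops are routed to human annotators. This is a minimal sketch under assumptions, not the authors' actual pipeline; entropy-based uncertainty sampling is only one possible acquisition criterion, and names such as select_for_labeling and the toy data are illustrative.

```python
import numpy as np

def predictive_entropy(probs):
    """Per-sample entropy of softmax outputs; higher means more uncertain."""
    p = np.clip(probs, 1e-12, 1.0)  # avoid log(0) for very confident predictions
    return -(p * np.log(p)).sum(axis=1)

def select_for_labeling(probs, budget):
    """Return indices of the `budget` most uncertain unlabelled crops.

    probs: (n_samples, n_classes) softmax outputs of the current classifier
           on cropped animal regions.
    budget: number of crops to send to human annotators in this round.
    """
    scores = predictive_entropy(probs)
    return np.argsort(scores)[::-1][:budget]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake softmax outputs for 10,000 unlabelled crops over 20 species.
    logits = rng.normal(size=(10_000, 20))
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

    to_label = select_for_labeling(probs, budget=500)
    print(f"Send {len(to_label)} crops to annotators, e.g. indices {to_label[:5]}")
```

In a full system, each round of newly labelled crops would be used to update the classifier (e.g. by fine-tuning a transferred feature extractor) before the next selection round; because the classifier sees only cropped regions, its reliance on location-specific background pixels is reduced, in line with the design described above.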