
Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning


Metadata Field | Value | Language
dc.contributor | Anh Nguyen; azn0044@auburn.edu | en_US
dc.creator | Norouzzadeh, Mohammad Sadegh
dc.creator | Nguyen, Anh
dc.creator | Kosmala, Margaret
dc.creator | Swanson, Alexandra
dc.creator | Palmer, Meredith S.
dc.creator | Packer, Craig
dc.creator | Clune, Jeff
dc.date.accessioned | 2020-07-27T20:48:12Z
dc.date.available | 2020-07-27T20:48:12Z
dc.date.created | 2018
dc.identifier | 10.1073/pnas.1719367115 | en_US
dc.identifier.uri | https://www.pnas.org/content/115/25/E5716/ | en_US
dc.identifier.uri | https://aurora.auburn.edu//handle/11200/49919
dc.identifier.uri | http://dx.doi.org/10.35099/aurora-7
dc.description.abstract | Having accurate, detailed, and up-to-date information about the location and behavior of animals in the wild would improve our ability to study and conserve ecosystems. We investigate the ability to automatically, accurately, and inexpensively collect such data, which could help catalyze the transformation of many fields of ecology, wildlife biology, zoology, conservation biology, and animal behavior into "big data" sciences. Motion-sensor "camera traps" enable collecting wildlife pictures inexpensively, unobtrusively, and frequently. However, extracting information from these pictures remains an expensive, time-consuming, manual task. We demonstrate that such information can be automatically extracted by deep learning, a cutting-edge type of artificial intelligence. We train deep convolutional neural networks to identify, count, and describe the behaviors of 48 species in the 3.2 million-image Snapshot Serengeti dataset. Our deep neural networks automatically identify animals with >93.8% accuracy, and we expect that number to improve rapidly in years to come. More importantly, if our system classifies only images it is confident about, our system can automate animal identification for 99.3% of the data while still performing at the same 96.6% accuracy as that of crowdsourced teams of human volunteers, saving >8.4 y (i.e., >17,000 h at 40 h/wk) of human labeling effort on this 3.2 million-image dataset. Those efficiency gains highlight the importance of using deep neural networks to automate data extraction from camera-trap images, reducing a roadblock for this widely used technology. Our results suggest that deep learning could enable the inexpensive, unobtrusive, high-volume, and even real-time collection of a wealth of information about vast numbers of animals in the wild. | en_US
dc.format | PDF | en_US
dc.publisher | National Academy of Sciences | en_US
dc.relation.ispartof | Proceedings of the National Academy of Sciences of the United States of America | en_US
dc.relation.ispartofseries | 0027-8424 | en_US
dc.rights | © 2018. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/ | en_US
dc.subject | artificial intelligence | en_US
dc.subject | camera-trap images | en_US
dc.subject | deep learning | en_US
dc.subject | deep neural networks | en_US
dc.subject | fear | en_US
dc.subject | landscape | en_US
dc.subject | management | en_US
dc.subject | model | en_US
dc.subject | software | en_US
dc.subject | wildlife ecology | en_US
dc.title | Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning | en_US
dc.type | Text | en_US
dc.type.genre | Journal Article, Academic Journal | en_US
dc.citation.volume | 115 | en_US
dc.citation.issue | 25 | en_US
dc.citation.spage | E5716 | en_US
dc.citation.epage | E5725 | en_US
dc.description.status | Published | en_US
dc.description.peerreview | Yes | en_US
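
The abstract above describes classifying only the images the network is confident about and routing the rest to human volunteers. The sketch below illustrates that confidence-thresholding idea in Python; the pretrained ResNet, the preprocessing values, and the 0.95 threshold are assumptions chosen for illustration, not the authors' actual Snapshot Serengeti models or pipeline.

```python
# Minimal sketch of confidence-thresholded classification (assumed setup,
# not the paper's exact pipeline): label an image automatically only when
# the top softmax probability clears a threshold; otherwise defer to humans.
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

CONFIDENCE_THRESHOLD = 0.95  # assumed value; tunes coverage vs. accuracy

# Any image classifier could stand in here; a pretrained ResNet-50 is used
# as a placeholder for the networks trained on camera-trap data.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def classify_or_defer(image_path: str):
    """Return (label_index, confidence) when confident; None to defer to humans."""
    image = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(image), dim=1)
    confidence, label = probs.max(dim=1)
    if confidence.item() >= CONFIDENCE_THRESHOLD:
        return label.item(), confidence.item()
    return None  # low confidence: route to crowdsourced volunteers
```

Raising the threshold shrinks the fraction of images labeled automatically but raises accuracy on that automated subset; trading these off against each other is the mechanism behind the 99.3% coverage at 96.6% accuracy reported in the abstract.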
