Metadata Field | Value | Language |
dc.contributor | Anh Nguyen; azn0044@auburn.edu | en_US |
dc.creator | Norouzzadeh, Mohammad Sadegh | |
dc.creator | Nguyen, Anh | |
dc.creator | Kosmala, Margaret | |
dc.creator | Swanson, Alexandra | |
dc.creator | Palmer, Meredith S. | |
dc.creator | Packer, Craig | |
dc.creator | Clune, Jeff | |
dc.date.accessioned | 2020-07-27T20:48:12Z | |
dc.date.available | 2020-07-27T20:48:12Z | |
dc.date.created | 2018 | |
dc.identifier | 10.1073/pnas.1719367115 | en_US |
dc.identifier.uri | https://www.pnas.org/content/115/25/E5716/ | en_US |
dc.identifier.uri | https://aurora.auburn.edu//handle/11200/49919 | |
dc.identifier.uri | http://dx.doi.org/10.35099/aurora-7 | |
dc.description.abstract | Having accurate, detailed, and up-to-date information about the location and behavior of animals in the wild would improve our ability to study and conserve ecosystems. We investigate the ability to automatically, accurately, and inexpensively collect such data, which could help catalyze the transformation of many fields of ecology, wildlife biology, zoology, conservation biology, and animal behavior into "big data" sciences. Motion-sensor "camera traps" enable collecting wildlife pictures inexpensively, unobtrusively, and frequently. However, extracting information from these pictures remains an expensive, time-consuming, manual task. We demonstrate that such information can be automatically extracted by deep learning, a cutting-edge type of artificial intelligence. We train deep convolutional neural networks to identify, count, and describe the behaviors of 48 species in the 3.2 million-image Snapshot Serengeti dataset. Our deep neural networks automatically identify animals with >93.8% accuracy, and we expect that number to improve rapidly in years to come. More importantly, if our system classifies only images it is confident about, our system can automate animal identification for 99.3% of the data while still performing at the same 96.6% accuracy as that of crowdsourced teams of human volunteers, saving >8.4 y (i.e., >17,000 h at 40 h/wk) of human labeling effort on this 3.2 million-image dataset. Those efficiency gains highlight the importance of using deep neural networks to automate data extraction from camera-trap images, reducing a roadblock for this widely used technology. Our results suggest that deep learning could enable the inexpensive, unobtrusive, high-volume, and even real-time collection of a wealth of information about vast numbers of animals in the wild. | en_US |
dc.format | PDF | en_US |
dc.publisher | National Academy of Sciences | en_US |
dc.relation.ispartof | Proceedings of the National Academy of Sciences of the United States of America | en_US |
dc.relation.ispartofseries | 0027-8424 | en_US |
dc.rights | © 2018. This manuscript version is made available under the CC-BY-NC-ND 4.0 license http://creativecommons.org/licenses/by-nc-nd/4.0/ | en_US |
dc.subject | artificial intelligence | en_US |
dc.subject | camera-trap images | en_US |
dc.subject | deep learning | en_US |
dc.subject | deep neural networks | en_US |
dc.subject | fear | en_US |
dc.subject | landscape | en_US |
dc.subject | management | en_US |
dc.subject | model | en_US |
dc.subject | software | en_US |
dc.subject | wildlife ecology | en_US |
dc.title | Automatically identifying, counting, and describing wild animals in camera-trap images with deep learning | en_US |
dc.type | Text | en_US |
dc.type.genre | Journal Article, Academic Journal | en_US |
dc.citation.volume | 115 | en_US |
dc.citation.issue | 25 | en_US |
dc.citation.spage | E5716 | en_US |
dc.citation.epage | E5725 | en_US |
dc.description.status | Published | en_US |
dc.description.peerreview | Yes | en_US |
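
The efficiency claim in the abstract rests on confidence thresholding: the network assigns a label automatically only when its top softmax probability clears a threshold, and defers the remaining images to human volunteers. The sketch below is a minimal illustration of that idea under stated assumptions, not the authors' code; the class list, threshold value, and logits are hypothetical placeholders.

```python
# Minimal sketch of confidence-thresholded classification, as described in the
# abstract: auto-label an image only when the network's top softmax probability
# clears a threshold, otherwise defer it to human volunteers.
# The class names, threshold, and logits below are hypothetical placeholders,
# not values from the paper.

import numpy as np

CLASSES = ["wildebeest", "zebra", "gazelle", "empty"]  # hypothetical subset
CONFIDENCE_THRESHOLD = 0.95                            # hypothetical value


def softmax(logits: np.ndarray) -> np.ndarray:
    """Convert raw network outputs into class probabilities."""
    shifted = logits - logits.max()
    exp = np.exp(shifted)
    return exp / exp.sum()


def classify_or_defer(logits: np.ndarray):
    """Return (label, confidence) when confident enough, else (None, confidence)."""
    probs = softmax(logits)
    top = int(probs.argmax())
    confidence = float(probs[top])
    if confidence >= CONFIDENCE_THRESHOLD:
        return CLASSES[top], confidence   # automated label
    return None, confidence               # defer to human labeling


if __name__ == "__main__":
    # Stand-in for per-image network outputs; a real pipeline would run a
    # trained CNN over camera-trap images to obtain these logits.
    example_logits = [
        np.array([6.0, 1.0, 0.5, 0.2]),   # confident -> automated
        np.array([1.2, 1.1, 1.0, 0.9]),   # uncertain -> sent to volunteers
    ]
    for i, logits in enumerate(example_logits):
        label, conf = classify_or_defer(logits)
        if label is None:
            print(f"image {i}: deferred to human labeling (confidence {conf:.2f})")
        else:
            print(f"image {i}: auto-labeled as {label} (confidence {conf:.2f})")
```

Raising the threshold trades coverage (the fraction of images labeled automatically) against accuracy on the auto-labeled subset, which is the trade-off behind the 99.3%-of-data and 96.6%-accuracy figures quoted in the abstract.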