Abstract

The Monterey Bay Aquarium Research Institute routinely deploys remotely operated underwater vehicles equipped with high-definition cameras for use in scientific studies. Drawing on a video collection of over 22,000 hours and the Video Annotation and Reference System, we have set out to automate the detection and classification of deep-sea animals. This paper explores the pitfalls of automation and suggests possible solutions for automated detection in diverse ecosystems with varying field conditions. Detection was tested using a saliency-based neuromorphic selective attention algorithm, and animals that went undetected were then used to tune the saliency parameters. Once objects are detected, cropped images of the animals are used to build a training set for future automation. Moving forward, neural networks or other methods could be used for automated taxonomic identification of deep-sea animals. With access to greater computing power and a second layer of classification, this method of detection may prove very effective for analyzing and classifying large video datasets like the one found at MBARI. Finally, the entire process was used to develop a high school lesson plan that satisfies the Next Generation Science Standards.
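The detection step described above can be illustrated with a minimal sketch. The original work uses a neuromorphic selective attention algorithm; as a hedged stand-in, the example below implements the related spectral-residual saliency method (Hou & Zhang, 2007) in plain NumPy and then crops a bounding box around the most salient pixels, analogous to extracting training images of detected animals. Function names, the threshold value, and the box-filter smoothing are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def spectral_residual_saliency(img):
    """Spectral-residual saliency map for a 2-D grayscale image.

    A simple stand-in for the saliency-based selective attention
    algorithm described in the abstract (not MBARI's implementation).
    """
    f = np.fft.fft2(img)
    log_amp = np.log1p(np.abs(f))          # log amplitude spectrum
    phase = np.angle(f)                    # phase spectrum (preserved)
    # Smooth the log amplitude with a 3x3 box filter (wrap-around edges);
    # the "residual" is what the smoothing removes.
    avg = sum(np.roll(np.roll(log_amp, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    residual = log_amp - avg
    # Reconstruct from the residual spectrum; squared magnitude = saliency.
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()

def salient_bounding_box(sal, thresh=0.5):
    """Bounding box (x0, y0, x1, y1) of above-threshold saliency pixels,
    or None if nothing exceeds the threshold. Cropping the source frame
    to this box would yield a candidate training image."""
    ys, xs = np.where(sal > thresh)
    if len(ys) == 0:
        return None
    return (xs.min(), ys.min(), xs.max(), ys.max())
```

On a uniform frame containing one bright object, the saliency map peaks around the object, and the bounding box localizes it; in practice a second classification layer, as the abstract suggests, would then assign a taxonomic label to each crop.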

Disciplines

Aquaculture and Fisheries | Biodiversity | Marine Biology | Other Animal Sciences | Other Computer Sciences | Population Biology

Mentor

Duane Edgington

Lab site

Monterey Bay Aquarium Research Institute (MBARI)

Funding Acknowledgement

This project has been made possible with support from the National Marine Sanctuary Foundation (www.marinesanctuary.org) and the California State University STEM Teacher Researcher Program.


URL: https://digitalcommons.calpoly.edu/star/370