Automated object detection in imagery is challenging in the field of wildlife biology. Uncontrolled conditions, along with the small size of target species relative to the more abundant background, make manual detection tedious and error-prone. To address these concerns, the Wildlife@Home project has been developed with a web portal that allows citizen scientists to inspect and catalog these images, which in turn provides training data for computer vision algorithms to automate the detection process. This work focuses on a project with over 65,000 Unmanned Aerial System (UAS) images from flights in the Hudson Bay area of Canada gathered in 2015 and 2016. This data set comprises over 3 TB of raw imagery, and the project hosts a further 2 million images from related ecological projects. Given this scale, the person-hours needed to manually inspect the data are prohibitively high. This work examines the efficacy of using citizen science data as input to convolutional neural networks (CNNs) for object detection. Three CNNs were trained: one on expert observations, one on unmatched (raw) citizen scientist observations, and one on matched observations, produced by pairing two citizen scientist observations of the same object and taking the intersection of the two. The expert, matched, and unmatched CNNs overestimated the number of lesser snow geese in the testing images by 88%, 150%, and 250%, respectively, which is lower than the error reported by current work using similar techniques on all visible-spectrum (RGB) UAS imagery. These results show that the accuracy of the input data is more important than its quantity: the unmatched citizen scientist observations are highly variable but substantial in number, while the matched observations are much closer to the expert observations, though fewer in number.
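The "matched observation" construction above can be sketched as follows. This is a hypothetical illustration, not the project's actual pipeline: the `(x1, y1, x2, y2)` box format, the greedy pairing strategy, and the IoU threshold are all assumptions; the paper only states that paired observations of the same object are reduced to their intersection.

```python
def box_intersection(a, b):
    """Return the intersection of two (x1, y1, x2, y2) boxes, or None if disjoint."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    if x1 >= x2 or y1 >= y2:
        return None
    return (x1, y1, x2, y2)

def iou(a, b):
    """Intersection over union, used to decide whether two volunteers marked the same object."""
    inter = box_intersection(a, b)
    if inter is None:
        return 0.0
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    inter_area = area(inter)
    return inter_area / (area(a) + area(b) - inter_area)

def match_observations(obs_a, obs_b, iou_threshold=0.5):
    """Greedily pair observations from two volunteers; keep only the intersections.

    Each box in obs_a is matched to its best-overlapping unused box in obs_b,
    and the matched pair is replaced by the intersection of the two boxes.
    """
    matched, used = [], set()
    for a in obs_a:
        best, best_iou = None, iou_threshold
        for j, b in enumerate(obs_b):
            if j in used:
                continue
            score = iou(a, b)
            if score > best_iou:
                best, best_iou = j, score
        if best is not None:
            used.add(best)
            matched.append(box_intersection(a, obs_b[best]))
    return matched
```

Taking the intersection rather than the union is conservative: the resulting training box contains only pixels both volunteers agreed belonged to the object, which is consistent with the matched observations lying closer to the expert ones.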
To increase the accuracy of the CNNs, a feedback loop is proposed in which the CNN is continually retrained on the extracted observations it performed poorly on during the testing phase.
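The proposed feedback loop might look like the sketch below. The `evaluate` helper is hypothetical: it is assumed to (re)train on the current training set and return one error value per test observation; the abstract does not specify how errors are scored or how many hard examples are recycled per round.

```python
def feedback_loop(train_set, test_set, evaluate, rounds=3, worst_k=10):
    """Repeatedly fold the worst-performing test observations into training.

    `evaluate(train_set, test_set)` is an assumed helper that trains on
    `train_set` and returns a per-example error for each item in `test_set`.
    """
    for _ in range(rounds):
        errors = evaluate(train_set, test_set)
        # Rank test observations from worst to best performance.
        ranked = sorted(range(len(test_set)), key=lambda i: errors[i], reverse=True)
        hard = [test_set[i] for i in ranked[:worst_k]]
        # Hard examples may be added more than once across rounds,
        # effectively oversampling the cases the CNN keeps getting wrong.
        train_set = train_set + hard
    return train_set
```

This is essentially hard example mining: each retraining round emphasizes exactly the observations the network mishandled in the previous testing phase.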