Paper details

Title: Benchmarking Invasive Alien Species Image Recognition Models for a Citizen Science Based Spatial Distribution Monitoring

Authors: Tom Niers, Jan Stenkamp, Nick Pascal Jakuschona, Thomas Bartoschek, Sven Schade

Abstract: Obtained from CrossRef

Recent developments in image recognition technology, including artificial intelligence and machine learning, have led to intensified research on computer vision models. This progress also enables advances in the collection of spatio-temporal data on Invasive Alien Species (IAS), in order to understand their geographical distribution and impact on biodiversity loss. Citizen Science (CS) approaches already demonstrate successful ways of involving the public in collecting spatio-temporal data on IAS, e.g. by using mobile applications for monitoring. Our work analyzes recently developed image-based species recognition models suitable for the monitoring of IAS in CS applications. We demonstrate how computer vision models can be benchmarked for such a use case and how their accuracy can be evaluated by testing them with IAS of European Union concern. We found that the available models have different strengths. Depending on which criteria (e.g. high species coverage, costs, maintenance, high accuracies) are considered most important, which model fits best needs to be decided individually. Using only one model alone may not be the best solution, so combining multiple models or developing a new custom model can be desirable. Generally, cooperation with the model providers can be advantageous.
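To make the benchmarking idea concrete, the sketch below shows how per-model top-1 accuracy could be computed from a flat table of classification results. This is an illustration only, not the authors' code; the file name and the column names (image, true_species, model, predicted_species) are assumptions.

    # Illustrative sketch -- not the authors' benchmark code.
    # Assumed input: a CSV with columns image, true_species, model, predicted_species.
    import csv
    from collections import defaultdict

    def top1_accuracy(results_csv):
        """Return the fraction of correct top-1 predictions per model."""
        correct = defaultdict(int)
        total = defaultdict(int)
        with open(results_csv, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                total[row["model"]] += 1
                if row["predicted_species"] == row["true_species"]:
                    correct[row["model"]] += 1
        return {model: correct[model] / total[model] for model in total}

    # Example: top1_accuracy("benchmark_results.csv")
    # -> {"model_a": 0.87, "model_b": 0.91, ...}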

Codecheck details

Certificate identifier: 2022-008

Codechecker name: Daniel Nüst

Time of codecheck: 2022-07-09 12:00:00

Repository: https://osf.io/K78EB

Codecheck report: https://doi.org/10.17605/OSF.IO/K78EB

Summary:

The article presents a comparison of seven image-based species recognition models, which were benchmarked against a set of species. Selected model executions were successfully reproduced. The outputs were manually compared on a sample basis and match the result data shared privately by the authors; no summary statistics were recalculated. The authors provided the data used only privately, but all code and good documentation are available online and are properly deposited and cited using a data repository. Only two of the four online classification APIs were tested, because the others require registered accounts; this reproduction is therefore only partially complete.
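For readers unfamiliar with such services: querying an online classification API generally follows the pattern sketched below. The endpoint URL, request fields, and response shape are hypothetical placeholders, not any specific provider's interface; real providers additionally require a registered account and an API key, which is what limited this check to two of the four APIs.

    # Hypothetical sketch of querying an image classification web API.
    # Endpoint, field names, and response format are placeholders only.
    import requests

    def classify_image(image_path, api_key):
        """Upload an image and return the provider's candidate species list."""
        with open(image_path, "rb") as img:
            response = requests.post(
                "https://api.example.org/v1/identify",  # placeholder endpoint
                headers={"Authorization": f"Bearer {api_key}"},
                files={"image": img},
            )
        response.raise_for_status()
        return response.json()  # e.g. [{"species": "...", "score": 0.93}, ...]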


https://codecheck.org.uk/ | GitHub codecheckers

© Stephen Eglen & Daniel Nüst

Published under CC BY-SA 4.0


CODECHECK is a process for independent execution of computations underlying scholarly research articles.