Paper details

Title: Extraction of linear structures from digital terrain models using deep learning

Authors: Ramish Satari, Bashir Kazimi, Monika Sester

Abstract (obtained from CrossRef):

Abstract. This paper explores the role deep convolutional neural networks play in the automated extraction of linear structures from Digital Terrain Models (DTMs) using semantic segmentation techniques. A DTM is a regularly gridded raster created from laser scanning point clouds; it represents the elevation of the bare earth surface with respect to a reference. Recent advances in Deep Learning (DL) have made it possible to explore the use of semantic segmentation for the detection of terrain structures in DTMs. This research examines two novel and practical deep convolutional neural network architectures: an encoder-decoder network named SegNet and the recent state-of-the-art high-resolution network (HRNet). The paper initially focuses on pixel-wise binary classification in order to validate the applicability of the proposed approaches. The networks are trained to distinguish between pixels belonging to linear structures and those belonging to the background. In the second step, multi-class segmentation is carried out on the same DTM dataset. The model is trained not only to detect a linear feature, but also to categorize it as one of the classes: hollow ways, roads, forest paths, historical paths, and streams. The results of the experiments, together with the quantitative and qualitative analysis, show the applicability of deep neural networks for the detection of terrain structures in DTMs. Of the deep learning models utilized, HRNet gives the better results.

Codecheck details

Certificate identifier: 2021-004

Codechecker names: Daniel Nüst, Anita Graser

Time of codecheck: 2021-06-10 12:00:00

Repository: https://osf.io/2sc7g

Codecheck report: https://doi.org/10.17605/osf.io/2sc7g

Summary:

The provided workflow was partially reproduced. Based on the provided test file and instructions, we were able to recreate the computing environment and run the segmentation models. The relevant tables from the paper could be recreated. The training and validation part of the workflow is not reproducible because the underlying data are proprietary; therefore, no figures could be recreated.


https://codecheck.org.uk/ | GitHub codecheckers

© Stephen Eglen & Daniel Nüst

Published under CC BY-SA 4.0

DOI of Zenodo Deposit

CODECHECK is a process for independent execution of computations underlying scholarly research articles.