The main data processing (georeferencing, strip adjustment, filtering, classification, etc.) was done by the respective operating LiDAR companies (ALS point clouds: Vermessung AVT‐ZT‐GmbH; ULS point cloud: Department of Geography, University of Innsbruck). Compliance with the quality parameters of the different LiDAR campaigns (e.g., strip adjustment, mean ground point density, classification) was verified by additionally commissioned LiDAR companies.

Building on this basic pre‐processing, we carried out an additional quality check to prepare the data for further analysis and to ensure comparability between the datasets.

The ULS point cloud was referenced at the University of Innsbruck by real‐time kinematic (RTK) positioning without using ground control points (GCPs). Stable surfaces (e.g., roof areas) within the ALS point cloud were used to match the ULS to the ALS point cloud; in other words, the Schöttlbach ALS served as reference data for further processing (additional height adjustment) and quality control of the ULS point cloud. To verify the relative vertical and horizontal accuracy of the ULS point cloud, we used stable surfaces with different orientations that were evenly distributed over the study area. For this purpose, we applied the software package LAStools for classification, filtering and height‐above‐ground adjustment of the raw point clouds and for interpolation of the respective DTMs from the ground points. LAStools (rapidlasso GmbH, 2021) can process large datasets with minimal computing power and in a short amount of time.
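A processing chain of this kind can be scripted by calling the LAStools command‐line tools in sequence. The sketch below illustrates such a chain from Python; the file names are placeholders and the tools are run with default parameters, so it should be read as a minimal, assumed example rather than the exact commands and settings used in this study.

```python
# Minimal sketch of a LAStools pre-processing chain, assuming the LAStools
# binaries (lasnoise, lasground, lasheight) are installed and on the PATH.
# File names are placeholders; flags beyond -i/-o are intentionally omitted
# because the exact settings used in the study are not reproduced here.
import subprocess

def run(cmd):
    """Run a single LAStools command and raise if it fails."""
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# 1) Flag isolated returns as noise.
run(["lasnoise", "-i", "uls_raw.laz", "-o", "uls_denoised.laz"])

# 2) Classify ground points (class 2).
run(["lasground", "-i", "uls_denoised.laz", "-o", "uls_ground.laz"])

# 3) Compute heights above ground, e.g., for separating vegetation later.
run(["lasheight", "-i", "uls_ground.laz", "-o", "uls_height.laz"])
```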

Since we used LAStools for data processing, we had to choose either a TIN approach with standard linear interpolation (las2dem/blast2dem) or a grid‐based approach that computes the highest, lowest or average z‐value of all ground points within a cell (lasgrid). We used the TIN approach following Fuller and Hutchinson (2007), who applied it in fluvial environments. The spatial resolution of the raster had to be matched to the underlying continuous terrain as well as to the point density in order to reduce errors in the final DoDs (Fisher & Tate, 2006). Based on the different point densities of the datasets and on an evaluation of the QCD results obtained with different spatial resolutions, a spatial resolution of 0.5 m provided the best results for the Schöttlbach DTMs.
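To illustrate the two interpolation options, the sketch below generates a 0.5 m DTM from the classified ground points with the TIN approach (las2dem; blast2dem offers the same interface for very large inputs) and, for comparison, with the grid‐based approach (lasgrid), then differences two aligned DTM rasters into a DoD and reports the median offset over a stable‐surface window. The file names, the chosen lasgrid statistic, the window indices and the assumption that both rasters share the same grid are illustrative placeholders, not the exact setup of this study.

```python
# Minimal sketch, assuming LAStools on the PATH plus the rasterio and numpy
# Python packages; file names are placeholders and both DTMs are assumed to
# share the same extent, resolution and alignment.
import subprocess
import numpy as np
import rasterio

def run(cmd):
    subprocess.run(cmd, check=True)

# TIN approach: triangulate the ground points (class 2) and rasterize the
# linear interpolation at 0.5 m (use blast2dem for very large inputs).
run(["las2dem", "-i", "uls_ground.laz", "-keep_class", "2",
     "-step", "0.5", "-o", "uls_dtm.tif"])

# Grid-based alternative: lowest ground-point elevation per 0.5 m cell.
run(["lasgrid", "-i", "uls_ground.laz", "-keep_class", "2",
     "-step", "0.5", "-elevation", "-lowest", "-o", "uls_dtm_grid.tif"])

# DoD: difference of two aligned DTM rasters (here ULS minus ALS).
with rasterio.open("uls_dtm.tif") as src_a, rasterio.open("als_dtm.tif") as src_b:
    a = src_a.read(1, masked=True)
    b = src_b.read(1, masked=True)
    profile = src_a.profile
    nodata = src_a.nodata if src_a.nodata is not None else -9999.0

dod = a - b  # masked wherever either DTM has no data

profile.update(nodata=nodata)
with rasterio.open("dod.tif", "w", **profile) as dst:
    dst.write(dod.filled(nodata).astype(profile["dtype"]), 1)

# Relative vertical check over a stable surface (e.g., a roof patch):
# median DoD value inside a hypothetical row/column window.
stable = dod[200:220, 340:360]
print("median dz over stable surface:", float(np.ma.median(stable)))
```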

The software was originally designed for ALS point cloud processing; the ULS dataset therefore required more manual post‐processing (e.g., manual filtering and classification) to guarantee the best possible results. In particular, a manual re‐classification of the ULS ground points focusing on the small‐scale structures along and within the riverbed was required. The quality of the DTM, and in a further step of any hydro‐morphological analysis, would be substantially affected if misclassified objects (e.g., boulders classified as vegetation) were not corrected.
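These corrections were made manually in the riverbed area; the sketch below only illustrates how a simple rule‐based re‐classification of this kind could be scripted with the laspy package. The class codes follow the ASPRS convention (2 = ground, 3–5 = vegetation), but the riverbed bounding box, the blunt "vegetation to ground" rule and the file names are hypothetical placeholders, not the criteria applied in this study.

```python
# Minimal sketch of a rule-based re-classification, assuming the laspy
# package (plus a LAZ backend such as lazrs for reading/writing .laz files).
# The bounding box and the crude reclassification rule are hypothetical
# illustrations of a correction that was performed manually in this work.
import numpy as np
import laspy

las = laspy.read("uls_ground.laz")  # placeholder file name

# Hypothetical bounding box around a riverbed reach (map coordinates).
xmin, xmax = 512300.0, 512450.0
ymin, ymax = 5230100.0, 5230300.0

in_riverbed = (
    (las.x >= xmin) & (las.x <= xmax) &
    (las.y >= ymin) & (las.y <= ymax)
)

# ASPRS classes 3-5 = low/medium/high vegetation, class 2 = ground.
veg = np.isin(las.classification, [3, 4, 5])

# Re-label vegetation-classified returns inside the riverbed as ground,
# e.g., boulders that the automatic filter mistook for vegetation.
cls = np.asarray(las.classification)
cls[in_riverbed & veg] = 2
las.classification = cls

las.write("uls_ground_corrected.laz")
print(f"re-classified {int((in_riverbed & veg).sum())} points")
```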
