Processing Drone LiDAR from YellowScan’s Surveyor, a Velodyne Puck based System

Point clouds from UAVs have become a common sight. Cheap consumer drones equipped with cameras produce point clouds from images with increasing quality as photogrammetry software improves. But vegetation is always a show stopper for point clouds generated from imagery data. Only an active sensing technique such as laser scanning can penetrate the vegetation and generate points on the ground under the canopy of a forested area. Advances in UAV technology and the miniaturization of LiDAR systems have allowed laser-scanning solutions for drones to enter the market.

Last summer we attended the LiDAR for Drone 2017 Conference by YellowScan and processed some data sets flown with their Surveyor system, which is built around the Velodyne VLP-16 Puck LiDAR scanner and the Applanix APX15 single-board GNSS-Inertial solution. One common challenge observed in LiDAR data generated by the Velodyne Puck is that surfaces are not as “crisp” as those generated by other laser scanners. Flat and open terrain surfaces are described by a layer of points with a “thickness” of a few centimeters, as you can see in the images below. This visualization uses a 10 meter by 5 meter cut-out from this data set with the coordinate range [774280,774290) in x and [6279463,6279468) in y. Standard ground classification routines will “latch onto” the lowermost envelope of these thick point layers and therefore produce a sub-optimal Digital Terrain Model (DTM).
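If you would like to reproduce this cut-out yourself after downloading the data (as described below), a simple area-of-interest filter with las2las is one way to do it. This is only a sketch: the merged raw flightlines are assumed as input and the output file name is made up.

las2las -i Flaugergues\*.las -merged ^
        -keep_xy 774280 6279463 774290 6279468 ^
        -o cutout_10m_by_5m.laz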

This “thickness” can partly be reduced by using fewer flightlines, as each flightline by itself is less “thick”, but the effect is compounded when all flightlines are merged. However, deciding which (subset of) flightlines to use for which part of the scene to generate the best possible ground model is not an obvious task either, and even per flightline there will be a remaining “thickness” to deal with, as can be seen in the following set of images.

In the following we show how to deal with “thickness” in a layer of points describing a ground surface. We first produce a “lowest ground”, widen it into a “thick ground”, and then derive “median ground” points that create a plausible terrain representation when interpolated by a Delaunay triangulation and rasterized onto a DTM. Step by step we process this example data set captured in a “live demo” during the LiDAR for Drone 2017 Conference at the beautiful Château de Flaugergues in Montpellier, France, where the event took place. You can download this data via this link if you would like to repeat these processing steps:

Once you decompress the RAR file (e.g. with the UnRar.exe freeware) you will find six raw flight strips in LAS format and the trajectory of the UAV in ASCII text format as it was provided by YellowScan.

E:\LAStools\bin>dir Flaugergues
06/27/2017 08:03 PM 146,503,985 Flaugergues_test_demo_ppk_L1.las
06/27/2017 08:02 PM  91,503,103 Flaugergues_test_demo_ppk_L2.las
06/27/2017 08:03 PM 131,917,917 Flaugergues_test_demo_ppk_L3.las
06/27/2017 08:03 PM 219,736,585 Flaugergues_test_demo_ppk_L4.las
06/27/2017 08:02 PM 107,705,667 Flaugergues_test_demo_ppk_L5.las
06/27/2017 08:02 PM  74,373,053 Flaugergues_test_demo_ppk_L6.las
06/27/2017 08:03 PM   7,263,670 Flaugergues_test_demo_ppk_traj.txt

As usual we start with quality checking by visual inspection with lasview and by creating a textual report with lasinfo.

E:\LAStools\bin>lasview Flaugergues_test_demo_ppk_L1.las

The raw LAS file “Flaugergues_test_demo_ppk_L1.las” colored by elevation.

E:\LAStools\bin>lasinfo Flaugergues_test_demo_ppk_L1.las
lasinfo (171011) report for Flaugergues_test_demo_ppk_L1.las
reporting all LAS header entries:
 file signature: 'LASF'
 file source ID: 1
 global_encoding: 1
 project ID GUID data 1-4: 00000000-0000-0000-0000-000000000000
 version major.minor: 1.2
 system identifier: 'YellowScan Surveyor'
 generating software: 'YellowReader by YellowScan'
 file creation day/year: 178/2017
 header size: 227
 offset to point data: 297
 number var. length records: 1
 point data format: 3
 point data record length: 34
 number of point records: 4308932
 number of points by return: 4142444 166488 0 0 0
 scale factor x y z: 0.001 0.001 0.001
 offset x y z: 774282 6279505 92
 min x y z: 774152.637 6279377.623 82.673
 max x y z: 774408.344 6279541.646 116.656
variable length header record 1 of 1:
 reserved 0
 user ID 'LASF_Projection'
 record ID 34735
 length after header 16
 description ''
 GeoKeyDirectoryTag version 1.1.0 number of keys 1
 key 3072 tiff_tag_location 0 count 1 value_offset 2154 - ProjectedCSTypeGeoKey: RGF93 / Lambert-93
reporting minimum and maximum for all LAS point record entries ...
 X -129363 126344
 Y -127377 36646
 Z -9327 24656
 intensity 0 65278
 return_number 1 2
 number_of_returns 1 2
 edge_of_flight_line 0 0
 scan_direction_flag 0 0
 classification 0 0
 scan_angle_rank -120 120
 user_data 75 105
 point_source_ID 1 1
 gps_time 219873.160527 219908.550379
 Color R 0 0
 G 0 0
 B 0 0
number of first returns: 4142444
number of intermediate returns: 0
number of last returns: 4142444
number of single returns: 3975956
overview over number of returns of given pulse: 3975956 332976 0 0 0 0 0
histogram of classification of points:
 4308932 never classified (0)

Nicely visible are the circular scanning patterns of the Velodyne VLP-16 Puck. We also notice that the trajectory of the UAV can be seen in the lasview visualization because the Puck was scanning the drone’s own landing gear. The lasinfo report tells us that point coordinates are stored with too much resolution (mm) and that points do not need to be stored using point type 3 (with RGB colors) because all RGB values are zero. We fix this with an initial run of las2las and also compress the raw strips to the LAZ format on 4 CPUs in parallel.

las2las -i Flaugergues\*.las ^
        -rescale 0.01 0.01 0.01 ^
        -auto_reoffset ^
        -set_point_type 1 ^
        -odir Flaugergues\strips_raw -olaz ^
        -cores 4

Next we do the usual check for flightline alignment with lasoverlap (README), which we consider to be by far the most important quality check. We compare the lowest elevation from different flightlines per 25 cm by 25 cm cell in all overlap areas. We consider a vertical difference of up to 5 cm as acceptable (color coded as white) and mark differences of over 30 cm (color coded as saturated red or blue).

lasoverlap -i Flaugergues\strips_raw\*.laz -faf ^
           -step 0.25 ^
           -min_diff 0.05 -max_diff 0.3 ^
           -odir Flaugergues\quality -o overlap.png

The vertical difference in open areas between the flightlines is slightly above 5 cm, which we consider acceptable in this example. Depending on the application we recommend investigating further where these differences come from and what consequences they may have for post-processing. We also create a color-coded visualization of the last return density per 25 cm by 25 cm cell using lasgrid (README), with blue meaning less than 100 returns per square meter and red meaning more than 4000 returns per square meter.

lasgrid -i Flaugergues\strips_raw\*.laz -merged ^
        -keep_last ^
        -step 0.25 ^
        -point_density ^
        -false -set_min_max 100 4000 ^
        -odir Flaugergues\quality -o density_100_4000.png

Color-coded density of last returns per square meter for each 25 cm by 25 cm cell. Blue means 100 or fewer last returns per square meter. Red means 4000 or more last returns per square meter.

As usual we start the LiDAR processing by reorganizing the flightlines into square tiles. Because of the variability in the density that is evident in the visualization above, we use lastile (README) to create an adaptive tiling that starts with 200 m by 200 m tiles and then iterates to refine any tile with over 10 million points down to smaller tiles, eventually reaching 25 m by 25 m.

lastile -i Flaugergues\strips_raw\*.laz ^
        -apply_file_source_ID ^
        -tile_size 200 -buffer 8 -flag_as_withheld ^
        -refine_tiling 10000000 ^
        -odir Flaugergues\tiles_raw -o flauge.laz

lastile -i Flaugergues\tiles_raw\flauge*_200.laz ^
        -refine_tiles 10000000 ^
        -olaz ^
        -cores 4

lastile -i Flaugergues\tiles_raw\flauge*_100.laz ^
        -refine_tiles 10000000 ^
        -olaz ^
        -cores 4

lastile -i Flaugergues\tiles_raw\flauge*_50.laz ^
        -refine_tiles 10000000 ^
        -olaz ^
        -cores 4

Subsequent processing is faster when the points have a spatially coherent order. Therefore we rearrange the points into standard space-filling z-order using a call to lassort (README). We run this in parallel on as many cores as it makes sense (i.e. not using more cores than there are physical CPUs).

lassort -i Flaugergues\tiles_raw\flauge*.laz ^
        -odir Flaugergues\tiles_sorted -olaz ^
        -cores 4

Next we classify as noise those points that are isolated on a 3D grid of 1 meter cell size using lasnoise. See the README file of lasnoise for a description of the exact manner in which the isolated points are classified. We do this to eliminate low noise points that would otherwise cause trouble in the subsequent processing.

lasnoise -i Flaugergues\tiles_sorted\flauge*.laz ^
         -step 1 -isolated 5 ^
         -odir Flaugergues\tiles_denoised -olaz ^
         -cores 4

Next we mark the subset of lowest points on a 2D grid of 10 cm cell size with classification code 8 using lasthin (README), while ignoring the points with classification code 7 that were marked as noise in the previous step.

lasthin -i Flaugergues\tiles_denoised\flauge*.laz ^
        -ignore_class 7 ^
        -step 0.1 -lowest ^
        -classify_as 8 ^
        -odir Flaugergues\tiles_lowest -olaz ^
        -cores 4

Considering only the resulting points marked with classification 8 we then create a temporary ground classification that we refer to as the “lowest ground”. For this we run lasground (README) with a set of suitable parameters that were found by experimentation on two of the most complex tiles from the center of the survey.

lasground -i Flaugergues\tiles_lowest\flauge*.laz ^
          -ignore_class 0 7 ^
          -step 5 -hyper_fine -bulge 1.5 -spike 0.5 ^
          -odir Flaugergues\tiles_lowest_ground -olaz ^
          -cores 4

We then “thicken” this “lowest ground” by classifying all points that are between 2 cm below and 15 cm above the lowest ground with a temporary classification code 6 using the lasheight (README) tool. Depending on the spread of points in your data set you may want to tighten this range accordingly, for example when processing the flightlines acquired by the Velodyne Puck individually. We picked our range based on visual experiments with “drop lines” and “rise lines” in the lasview viewer that are shown in the images above.

lasheight -i Flaugergues\tiles_lowest_ground\flauge*.laz ^
          -do_not_store_in_user_data ^
          -classify_between -0.02 0.15 6 ^
          -odir Flaugergues\tiles_thick_ground -olaz ^
          -cores 4

The final ground classification is obtained by creating the “median ground” from the “thick ground”. This uses a brand-new option in the lasthin (README) tool of LAStools. The new ‘-percentile 50 10’ option selects the point closest to the 50th percentile of all point elevations within a grid cell of the specified size, provided there are at least 10 points in that cell. The selected point either survives the thinning operation or gets marked with a specified classification code or flag.

lasthin -i Flaugergues\tiles_thick_ground\flauge*.laz ^
        -ignore_class 0 1 7 ^
        -step 0.1 -percentile 50 10 ^
        -classify_as 8 ^
        -odir Flaugergues\tiles_median_ground_10_10cm -olaz ^
        -cores 4

lasthin -i Flaugergues\tiles_median_ground_10_10cm\flauge*.laz ^
        -ignore_class 0 1 7 ^
        -step 0.2 -percentile 50 10 ^
        -classify_as 8 ^
        -odir Flaugergues\tiles_median_ground_10_20cm -olaz ^
        -cores 4

lasthin -i Flaugergues\tiles_median_ground_10_20cm\flauge*.laz ^
        -ignore_class 0 1 7 ^
        -step 0.4 -percentile 50 10 ^
        -classify_as 8 ^
        -odir Flaugergues\tiles_median_ground_10_40cm -olaz ^
        -cores 4

lasthin -i Flaugergues\tiles_median_ground_10_40cm\flauge*.laz ^
        -ignore_class 0 1 7 ^
        -step 0.8 -percentile 50 10 ^
        -classify_as 8 ^
        -odir Flaugergues\tiles_median_ground_10_80cm -olaz ^
        -cores 4

We now compare a triangulation of the median ground points with a triangulation of the highest and the lowest points per 10 cm by 10 cm cell to demonstrate that – at least in open areas – we really have computed a median ground surface.
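One way to set up such a comparison yourself (a sketch, not necessarily the exact commands used for the screenshots) is to additionally mark the highest point per 10 cm by 10 cm cell with another classification code, here 5, and then load the result into lasview, where the visible points can be triangulated interactively. The lowest points per cell were already produced in the “tiles_lowest” step above; the output folder name used here is made up.

lasthin -i Flaugergues\tiles_median_ground_10_80cm\flauge*_25.laz ^
        -ignore_class 7 ^
        -step 0.1 -highest ^
        -classify_as 5 ^
        -odir Flaugergues\tiles_compare -olaz ^
        -cores 4

lasview -i Flaugergues\tiles_compare\flauge*.laz ^
        -points 10000000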

Finally we rasterize the tiles with the las2dem (README) tool onto binary elevation grids in BIL format. Here we make the resolution dependent on the tile size, giving the 25 meter and 50 meter tiles the highest resolution of 10 cm and rasterizing the 100 meter and 200 meter tiles at 20 cm and 40 cm, respectively.

las2dem -i Flaugergues\tiles_median_ground_10_80cm\*_25.laz ^
        -i Flaugergues\tiles_median_ground_10_80cm\*_50.laz ^
        -keep_class 8 ^
        -step 0.1 -use_tile_bb ^
        -odir Flaugergues\tiles_dtm -obil ^
        -cores 4

las2dem -i Flaugergues\tiles_median_ground_10_80cm\*_100.laz ^
        -keep_class 8 ^
        -step 0.2 -use_tile_bb ^
        -odir Flaugergues\tiles_dtm -obil ^
        -cores 4

las2dem -i Flaugergues\tiles_median_ground_10_80cm\*_200.laz ^
        -keep_class 8 ^
        -step 0.4 -use_tile_bb ^
        -odir Flaugergues\tiles_dtm -obil ^
        -cores 4

Because all LAStools can read BIL files via on-the-fly conversion from rasters to points, we can visually inspect the resulting elevation rasters with the lasview (README) tool. By adding the ‘-faf’ or ‘-files_are_flightlines’ argument we treat the BIL files as if they were different flightlines, which allows us to assign different colors to points from different files to better inspect the transitions between tiles. The ‘-points 10000000’ argument instructs lasview to load up to 10 million points into memory instead of the default 5 million.

lasview -i Flaugergues\tiles_dtm\*.bil -faf ^
        -points 10000000

Final raster tiles in BIL format of three different sizes form a seamless DTM.

For visual comparison we also produce a DSM and create hillshades. Note that the workflow for DSM creation shown below produces a “highest DSM” that will always be a few centimeters above the “median DTM”. This will be noticeable only in open areas of the terrain where the DSM and the DTM should coincide and their elevations should be identical.

lasthin -i Flaugergues\tiles_denoised\flauge*.laz ^
        -keep_z_above 110 ^
        -filtered_transform ^
        -set_classification 18 ^
        -ignore_class 7 18 ^
        -step 0.1 -highest ^
        -classify_as 5 ^
        -odir Flaugergues\tiles_highest -olaz ^
        -cores 4

las2dem -i Flaugergues\tiles_highest\*_25.laz ^
        -i Flaugergues\tiles_highest\*_50.laz ^
        -keep_class 5 ^
        -step 0.1 -use_tile_bb ^
        -odir Flaugergues\tiles_dsm -obil ^
        -cores 4

las2dem -i Flaugergues\tiles_highest\*_100.laz ^
        -keep_class 5 ^
        -step 0.2 -use_tile_bb ^
        -odir Flaugergues\tiles_dsm -obil ^
        -cores 4

las2dem -i Flaugergues\tiles_highest\*_200.laz ^
        -keep_class 5 ^
        -step 0.4 -use_tile_bb ^
        -odir Flaugergues\tiles_dsm -obil ^
        -cores 4
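The hillshades mentioned above are not produced by the commands shown. One possible way to create them (a sketch, exploiting that all LAStools read the BIL rasters as points on the fly; output file names are made up) is to let blast2dem merge and hillshade the DTM and DSM tiles:

blast2dem -i Flaugergues\tiles_dtm\*.bil -merged ^
          -hillshade ^
          -o dtm_hillshaded.png

blast2dem -i Flaugergues\tiles_dsm\*.bil -merged ^
          -hillshade ^
          -o dsm_hillshaded.png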

We thank YellowScan for challenging us to process their drone LiDAR with LAStools in order to present results at their LiDAR for Drone 2017 Conference and for sharing several example data sets with us, including the one used here.

LASmoons: Huaibo Mu

Huaibo Mu (recipient of three LASmoons)
Environmental Mapping, Department of Geography
University College London (UCL), UK

Background:
This study is a part of the EU-funded Metrology for Earth Observation and Climate project (MetEOC-2). It aims to combine terrestrial and airborne LiDAR data to estimate biomass and allometry for woodland trees in the UK. Airborne LiDAR can capture large amounts of data over large areas, while terrestrial LiDAR can provide much more detail of high quality in specific areas. The biomass and allometry of individual tree species in 1 ha of Wytham Woods, located about 5 km northwest of the University of Oxford, UK, are estimated by combining both airborne and terrestrial LiDAR. Then the bias will be evaluated when the estimation is applied at different levels: terrestrial LiDAR level, tree level, and area level. The goal is better insight into, and a controllable error range for, the bias of biomass and allometry estimates for woodland trees based on airborne LiDAR.

Goal:
The study aims to find suitable parameters of allometric equations for different species and to establish the relationship between tree height, stem diameter, and crown diameter in order to estimate the above-ground biomass using airborne LiDAR. The biomass estimates at the different levels are then compared to evaluate the bias, with the total 6 ha of Wytham Woods used for calibration and validation. Finally the results are to be applied to other woodlands in the UK. The LiDAR processing tasks for which LAStools are used mainly center around the creation of a suitable Canopy Height Model (CHM) from the airborne LiDAR.

Data:
+ Airborne LiDAR data produced by Professor David Coomes (University of Cambridge) under Airborne Research and Survey Facility (ARSF) project code RG13_08 in June 2014. The average point density is about 5.886 points per m^2.
+ Terrestrial LiDAR data collected by the UCL team led by Dr. Mat Disney and Dr. Kim Calders in order to develop very detailed 3D models of the trees.
+ Fieldwork from the project “Initial Results from Establishment of a Long-term Broadleaf Monitoring Plot at Wytham Woods, Oxford, UK” by Butt et al. (2009).

LAStools processing:
1) check LiDAR quality as described in these videos and articles [lasinfo, lasvalidate, lasoverlap, lasgrid, las2dem]
2) classify into ground and non-ground points using tile-based processing  [lastile, lasground]
3) generate a Digital Terrain Model (DTM) [las2dem]
4) compute height of points and delete points higher than the maximum tree height obtained from terrestrial LiDAR (steps 4 and 5 are sketched after this list) [lasheight]
5) convert points into disks with 10 cm diameter to conservatively account for laser beam width [lasthin]
6) generate spike-free Digital Surface Model (DSM) based on algorithm by Khosravipour et al. (2016) [las2dem]
7) create Canopy Height Model (CHM) by subtracting DTM from spike-free DSM [lasheight].
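A minimal sketch of steps 4 and 5, assuming hypothetical directory names and an assumed maximum tree height of 35 m taken from the terrestrial LiDAR (both hypothetical values):

lasheight -i tiles_ground\*.laz ^
          -drop_above 35 ^
          -odir tiles_clipped -olaz ^
          -cores 4

lasthin -i tiles_clipped\*.laz ^
        -subcircle 0.05 ^
        -odir tiles_subcircled -olaz ^
        -cores 4

Here ‘-subcircle 0.05’ turns each return into a small disk of 10 cm diameter by adding extra points around it.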

References:
Butt, N., Campbell, G., Malhi, Y., Morecroft, M., Fenn, K., & Thomas, M. (2009). Initial results from establishment of a long-term broadleaf monitoring plot at Wytham Woods, Oxford, UK. University of Oxford, Oxford, UK, Rep.
Khosravipour, A., Skidmore, A.K., Isenburg, M., Wang, T.J., Hussin, Y.A. (2014). Generating pit-free Canopy Height Models from airborne LiDAR. Photogrammetric Engineering & Remote Sensing 80, 863-872.
Khosravipour, A., Skidmore, A.K., Isenburg, M. and Wang, T.J. (2015) Development of an algorithm to generate pit-free Digital Surface Models from LiDAR, Proceedings of SilviLaser 2015, pp. 247-249, September 2015.
Khosravipour, A., Skidmore, A.K., Isenburg, M (2016) Generating spike-free Digital Surface Models using raw LiDAR point clouds: a new approach for forestry applications, (journal manuscript under review).

Removing Excessive Low Noise from Dense-Matching Point Clouds

Point clouds produced with dense matching by photogrammetry software such as SURE, Pix4D, or Photoscan can include a fair amount of the kind of “low noise” seen below. Low noise causes trouble when attempting to construct a Digital Terrain Model (DTM) from the points, as common algorithms for classifying points into ground and non-ground points – such as lasground – tend to “latch onto” those low points, thereby producing a poor representation of the terrain. This blog post describes one possible LAStools workflow for eliminating excessive low noise. It was developed after a question in the LAStools user forum by LASmoons holder Muriel Lavy, who was able to share her noisy data with us. See this, this, this, this, this, and this blog post for further reading on this topic.

Here you can download the dense matching point cloud that we are using in the following work flow:

We leave the usual inspection of the content with lasinfo, lasview, and lasvalidate that we always recommend on newly obtained data as an exercise to the reader. Note that a check for proper alignment of flightlines with lasoverlap that we consider mandatory for LiDAR data is not applicable to dense-matching points.

With lastile we turn the original file with 87,261,083 points into many smaller 500 by 500 meter tiles for efficient multi-core processing. Each tile is given a 25 meter buffer to avoid edge artifacts. The buffer points are marked as withheld for easier on-the-fly removal. We add a (terser) description of the WGS84 UTM zone 32N to each tile via the corresponding EPSG code 32632:
lastile -i muriel\20161127_Pancalieri_UTM.laz ^
        -tile_size 500 -buffer 25 -flag_as_withheld ^
        -epsg 32632 ^
        -odir muriel\tiles_raw -o panca.laz
Because dense-matching points often have a poor point order in the files they get delivered in we use lassort to rearrange them into a space-filling curve order as this will speed up most following processing steps:
lassort -i muriel\tiles_raw\panca*.laz ^
        -odir muriel\tiles_sorted -olaz ^
        -cores 7
We then run lasthin to reclassify the highest point of every 2.5 by 2.5 meter grid cell with classification code 8. As the spacing of the dense-matched points is around 40 cm in both x and y, around 40 points will fall into each such grid cell from which the highest is then classified as 8:
lasthin -i muriel\tiles_sorted\panca*.laz ^
        -step 2.5 ^
        -highest -classify_as 8 ^
        -odir muriel\tiles_thinned -olaz ^
        -cores 7
Considering only those points classified as 8 in the last step we then run lasnoise to find points that are highly isolated in wide and flat neighborhoods and reclassify them as 7. See the README file of lasnoise for a detailed explanation of the different parameters:
lasnoise -i muriel\tiles_thinned\panca*.laz ^
         -ignore_class 0 ^
         -step_xy 5 -step_z 0.1 -isolated 4 ^
         -classify_as 7 ^
         -odir muriel\tiles_isolated -olaz ^
         -cores 7
Now we run a temporary ground classification on only (!!!) those points that are still classified as 8 using lasground. Hence we only use the points that were the highest points on the 2.5 by 2.5 meter grid and that were not classified as noise in the previous step. See the README file of lasground for a detailed explanation of the different parameters:
lasground -i muriel\tiles_isolated\panca*.laz ^
          -city -ultra_fine -ignore_class 0 7 ^
          -odir muriel\tiles_temp_ground -olaz ^
          -cores 7
The result of this temporary ground filtering is then merely used to mark all points that are more than 0.5 meters below the triangulated TIN of these temporary ground points with classification code 12 using lasheight. See the README file of lasheight for a detailed explanation of the different parameters:
lasheight -i muriel\tiles_temp_ground\panca*.laz ^
          -do_not_store_in_user_data ^
          -classify_below -0.5 12 ^
          -odir muriel\tiles_temp_denoised -olaz ^
          -cores 7
In the resulting tiles the low noise (but also many points above the ground) is now marked, and in a final step we produce properly classified denoised tiles by re-mapping the temporary classification codes to conventions that are more consistent with the ASPRS LAS specification using las2las:
las2las -i muriel\tiles_temp_denoised\panca*.laz ^
        -change_classification_from_to 1 0 ^
        -change_classification_from_to 2 0 ^
        -change_classification_from_to 7 0 ^
        -change_classification_from_to 12 7 ^
        -odir muriel\tiles_denoised -olaz ^
        -cores 7
Let us visually check what each of the above steps has produced by zooming in on a 300 meter by 100 meter strip of points with the bounding box (388500,4963125) to (388800,4963225) in tile ‘panca_388500_4963000.laz’:
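One way to do this kind of inspection (a sketch; any of the intermediate tile folders can be substituted as input) is to load just that bounding box of the tile into lasview with an area-of-interest filter:

lasview -i muriel\tiles_denoised\panca_388500_4963000.laz ^
        -keep_xy 388500 4963125 388800 4963225 ^
        -points 10000000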
The final classification of all points that are not already classified as noise (7) into ground (2) or non-ground (1) is done with another run of lasground. See the README file of lasground for a detailed explanation of the different parameters:
lasground -i muriel\tiles_denoised\panca*.laz ^
          -ignore_class 7 ^
          -city -ultra_fine ^
          -odir muriel\tiles_ground -olaz ^
          -cores 7
Then we create seamless hill-shaded DTM tiles by triangulating all the points classified as ground into a temporary TIN (including those in the 25 meter buffer) and then rasterizing only the inner 500 meter by 500 meter of each tile with option ‘-use_tile_bb’ of las2dem. For more details on the importance of buffers in tile-based processing see this blog post here.
las2dem -i muriel\tiles_ground\panca*.laz ^
        -keep_class 2 ^
        -step 1 -hillshade ^
        -use_tile_bb ^
        -odir muriel\tiles_dtm -opng ^
        -cores 7

And here is the original DSM side by side with the resulting DTM after low noise removal. One densely forested area near the center of the data was not entirely removed due to the lack of ground points in this area. Integrating external ground points or manual editing with lasview are two possible ways to rectify these few remaining errors …

Integrating External Ground Points in Forests to Improve DTM from Dense-Matching Photogrammetry

The biggest problem of generating a Digital Terrain Model (DTM) from the photogrammetric point clouds that are produced from aerial imagery with dense-matching software such as SURE, Pix4D, or Photoscan is dense vegetation: when plants completely cover the terrain, not a single point is generated on the ground. This is different for LiDAR point clouds, as the laser can even penetrate dense multi-level tropical forests. The complete lack of ground points in larger vegetated areas such as closed forests or dense plantations means that the many processing workflows for vegetation analysis that have been developed for LiDAR cannot be used for photogrammetric point clouds … unless … well, unless we get those missing ground points some other way. In the following we see how to integrate external ground points to generate a reasonable DTM under a dense forest with LAStools. See this, this, this, this, and this article for further reading.

Here you can download the dense matching point cloud, the manually collected ground points, and the forest stand delineating polygon that we are using in the following example work flow:

We leave the usual inspection of the content with lasinfo and lasview that we always recommend on newly obtained data as an exercise to the reader. Using las2dem and lasgrid we created the Google Earth overlays shown above to visualize the extent of the dense matched point cloud and the distribution of the manually collected ground points:

las2dem -i DenseMatching.laz ^
        -thin_with_grid 1.0 ^
        -extra_pass ^
        -step 2.0 ^
        -hillshade ^
        -odix _hill_2m -opng

lasgrid -i ManualGround.laz ^
        -set_RGB 255 0 0 ^
        -step 10 -rgb ^
        -odix _grid_10m -opng

Attempts to ground-classify the dense-matching point cloud directly are futile as there are no ground points under the canopy in the heavily forested area. Therefore 558 ground points that are around 50 to 120 meters apart from one another were manually surveyed in the forest of interest. We show how to integrate these points into the dense-matching point cloud such that we can successfully extract bare-earth information from the data.

In the first step we “densify” the manually collected ground points by interpolating them with triangles onto a raster of 2 meter resolution that we store as LAZ points with las2dem. You could consider other interpolation schemes to “densify” the ground points; here we use simple linear interpolation to prove the concept. Due to the varying distance between the manually surveyed ground points we allow interpolating triangles with edge lengths of up to 125 meters. These triangles then also cover narrow open areas next to the forest, so we clip the interpolated ground points against the forest stand delineating polygon with lasclip to classify those points that are really in the forest as “key points” (class 8) and all others as “noise” (class 7).

las2dem -i ManualGround.laz ^
        -step 2 ^
        -kill 125 ^
        -odix _2m -olaz

lasclip -i ManualGround_2m.laz ^
        -set_classification 7 ^ 
        -poly forest.shp ^
        -classify_as 8 -interior ^
        -odix _forest -olaz

Below we show the resulting densified ground points colored by elevation that survive the clipping against the forest stand delineating polygon and were classified as “key points” (class 8). The interpolated ground points in narrow open areas next to the forest that fall outside this polygon were classified as “noise” (class 7) and are shown in violet. They will be dropped in the next step.

We then merge the dense-matching points with the densified manual ground points (while dropping all the violet points marked as noise) as input to lasthin and reclassify the lowest point per 1 meter by 1 meter cell with a temporary code (here we use class 9, which usually refers to “water”). Only the subset of lowest points that receives the temporary classification code 9 will be used for ground classification later.

lasthin -i DenseMatching.laz ^
        -i ManualGround_2m_forest.laz ^
        -drop_class 7 ^
        -merged ^
        -lowest -step 1 -classify_as 9 ^
        -o DenseMatchingAndDensifiedGround.laz

We use the GUI of lasview to pick several interesting areas for visual inspection. The selected points load much faster when the LAZ file is spatially indexed and therefore we first run lasindex. For better orientation we also load the forest stand delineating polygon as an overlay into the GUI.

lasindex -i DenseMatchingAndDensifiedGround.laz 

lasview -i DenseMatchingAndDensifiedGround.laz -gui

We pick the area shown below that contains the target forest with manually collected and densified ground points and a forested area with only dense matching points. The difference could not be more drastic as the visualizations show.

Now we run ground classification using lasground with option ‘-town’ using only the points with the temporary code 9 by ignoring all other classifications (0 and 8) in the file. We leave the temporary classification code 9 unchanged for all the points that were not classified with “ground” code 2 so we can visualize later which ones those are.

lasground -i DenseMatchingAndDensifiedGround.laz ^
          -ignore_class 0 8 ^
          -town ^
          -non_ground_unchanged ^
          -o GroundClassified.laz

We again use the GUI of lasview to pick several interesting areas after running lasindex and again load the forest stand delineating polygon as an overlay into the GUI.

lasindex -i GroundClassified.laz 

lasview -i GroundClassified.laz -gui

We pick the area shown below that contains all three scenarios: the target forest with manually collected and densified ground points, an open area with only dense matching points, and a forested area with only dense matching points. The result is as expected: in the target forest the manually collected ground points are used as ground and in the open area the dense-matching points are used as ground. But there is no useful ground in the other forested area.

Now we can compute the heights of the points above ground for our target forest with lasheight and either replace the z elevations in the file or store them separately as “extra bytes”. Then we can compute, for example, a Canopy Height Model (CHM) that color codes the height of the vegetation above the ground with lasgrid. Of course this will only be correct in the target forest where we have “good” ground but not in the other forested areas. We also compute a hillshaded DTM to be able to visually inspect the topography of the generated terrain model.

lasheight -i GroundClassified.laz ^
          -store_as_extra_bytes ^
          -o GroundClassifiedWithHeights.laz

lasgrid -i GroundClassifiedWithHeights.laz ^
        -step 2 ^
        -highest -attribute 0 ^
        -false -set_min_max 0 25 ^
        -o chm.png

las2dem -i GroundClassified.laz ^
        -keep_class 2 -extra_pass ^
        -step 2 ^ 
        -hillshade ^
        -o dtm.png

Here you can download the resulting color-coded CHM and the resulting hill-shaded DTM as Google Earth KMZ overlays. Clearly the resulting CHM is only meaningful in the target forest where we used the manually collected ground points to create a reasonable DTM. In the other forested areas the ground is only correct near the forest edges and gets worse with increasing distance from open areas. The resulting DTM exhibits some interesting-looking bumps in the middle of areas with manually collected ground points. Those are a result of using the dense-matching points as ground whenever their elevation is lower than that of the manually collected points (which is decided in the lasthin step). Whether those bumps represent true elevations or are artifacts of erroneously low elevations from dense matching remains to be investigated.

For forests on complex and steep terrain the number of ground points that need to be manually collected may make such an approach infeasible in practice. However, maybe you have another source of elevation, such as a low-resolution DTM with 10 or 25 meter spacing provided by your local government. Or maybe even a high-resolution DTM of 1 or 2 meter from a LiDAR survey you did several years ago. While the forest may have grown a lot in the past years, the ground under the forest will probably not have changed much …
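If such an external elevation source is available, the manual “densification” step can be replaced: because LAStools can read common raster formats as points via on-the-fly conversion, a coarse DTM can simply be converted to LAZ, classified as “key points”, and merged with the dense-matching points in the lasthin step shown earlier. A sketch, assuming a hypothetical file ‘coarse_dtm.asc’ in the same projection as the dense-matching points:

las2las -i coarse_dtm.asc ^
        -set_classification 8 ^
        -o coarse_dtm.laz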

LASmoons: Marzena Wicht

Marzena Wicht (recipient of three LASmoons)
Department of Photogrammetry, Remote Sensing and GIS
Warsaw University of Technology, Poland.

Background:
More than half of the human population (Heilig 2012) suffers from many negative effects of living in cities: increased air pollution, limited access to green areas, the Urban Heat Island (UHI), and many more. To mitigate some of these effects, many ideas have come up over the years: reducing the surface albedo, the idea of the Garden City, green belts, and so on. Increasing horizontal wind speed might actually improve both air pollution dispersion and thermal comfort in urban areas (Gál & Unger 2009). Areas of low roughness promote air flow – discharging warm, polluted air from the city and supplying it with cool, fresh air – if they share specific parameters, are connected, and penetrate the inner city with a country breeze. That is why mapping low-roughness urban areas is important for a better understanding of urban climate.

Goal:
The goal of this study is to derive buildings (outlines and height) and high vegetation using LAStools and to use that data in mapping urban ventilation corridors for our case study area in Warsaw. There are many ways to map these; however, using ALS data has certain advantages (Suder & Szymanowski 2014) in this case: DSMs can be easily derived, the tree canopy (incl. height) can be joined to the analysis, and buildings can be easily extracted. The outputs are then used as a basis for morphological analysis, such as calculating the frontal area index. LAStools has the considerable advantage of processing large quantities of data (~500 GB) efficiently.

Frontal area index calculation based on 3D building database

Data:
+ LiDAR provided by Central Documentation Center of Geodesy and Cartography
+ average pulse density 12 p/m^2
+ covers 517 km^2 (whole Warsaw)

LAStools processing:
1) quality checking of the data as described in several videos and blog posts [lasinfo, lasvalidate, lasoverlap, lasgrid, lasduplicate, lasreturn, las2dem]
2) reorganize data into sufficiently small tiles with buffers to avoid edge artifacts [lastile]
3) classify point clouds into vegetation and buildings [lasground, lasclassify]
4) normalize LiDAR heights [lasheight]
5) create triangulated, rasterized derivatives: DSM / DTM / nDSM / CHM [las2dem, blast2dem]
6) compute height-based metrics (e.g. ‘-avg’, ‘-std’, and ‘-p 50’) [lascanopy]
7) generate subsets during the workflow [lasclip]
8) generate building footprints (see the sketch after this list) [lasboundary]
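A minimal sketch of step 8, assuming that building points have already been classified with code 6 in step 3 (directory and output names are hypothetical):

lasboundary -i tiles_classified\*.laz ^
            -keep_class 6 ^
            -disjoint -concavity 1.5 ^
            -o building_footprints.shp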

References:
Heilig, G. K. (2012). World urbanization prospects: the 2011 revision. United Nations, Department of Economic and Social Affairs (DESA), Population Division, Population Estimates and Projections Section, New York.
Gal, T., & Unger, J. (2009). Detection of ventilation paths using high-resolution roughness parameter mapping in a large urban area. Building and Environment, 44(1), 198-206.
Suder, A., & Szymanowski, M. (2014). Determination of ventilation channels in urban area: A case study of Wroclaw (Poland). Pure and Applied Geophysics, 171(6), 965-975.

LASmoons: Gudrun Norstedt

Gudrun Norstedt (recipient of three LASmoons)
Forest History, Department of Forest Ecology and Management
Swedish University of Agricultural Sciences, Umeå, Sweden

Background:
Until the end of the 17th century, the vast boreal forests of the interior of northern Sweden were exclusively populated by the indigenous Sami. When settlers of Swedish and Finnish ethnicity started to move into the area, colonization was fast. Although there is still a prospering reindeer-herding Sami culture in northern Sweden, the old Sami culture that dominated the boreal forest for centuries or even millennia is to a large extent forgotten.
Since each forest Sami family formerly had a number of seasonal settlements, the density of settlements must have been high. However, only very few remains are known today. In the field, old Sami settlements can be recognized through the presence of, for example, stone hearths, storage caches, pits for roasting pine bark, foundations of certain types of huts, reindeer pens, and fences. Researchers of the Forest History section of the Department of Forest Ecology and Management have long been surveying such remains on foot. This, however, is extremely time consuming and can only be done in limited areas. Also, the use of aerial photographs is usually difficult due to dense vegetation. Data from airborne laser scanning should be the best way to find remains of the old forest Sami culture. Previous research has shown the possibilities of using airborne laser scanning data for detecting cultural remains in the boreal forest (Jansson et al., 2009; Koivisto & Laulamaa, 2012; Risbøl et al., 2013), but no studies have aimed at detecting remains of the forest Sami culture. I want to test the possibilities of ALS in this respect.

DTM from the Krycklan catchment, showing a row of hunting pits and (larger) a tar pit.

Goal:
The goal of my study is to test the potential of using LiDAR data for detecting cultural and archaeological remains on the ground in a forest area where Sami are known to have dwelt during historical times. Since the whole of Sweden is currently being scanned by the National Land Survey, this data will be included. However, the average point density of the national data is only 0.5–1 pulses/m^2. Therefore, the study will be done in an established research area, the Krycklan catchment, where a denser scanning was performed in 2015. The Krycklan data set lacks a ground point classification, so I will have to perform such a classification before I can proceed to the creation of a DTM. Having tested various kinds of software, I have found that LAStools seems to be the most efficient way to do the job. This, in turn, has made me aware of the importance of choosing the right methods and parameters for doing a classification that is suitable for archaeological purposes.

Data:
+ The data was acquired with a multi-spectral airborne LiDAR sensor, the Optech Titan, and a Micro IRS IMU operated on an aircraft flying at a height of about 1000 m; positioning was post-processed with the TerraPos software for higher accuracy.
+ The average pulse density is 20 pulses/m^2.
+ About 7 000 hectares were covered by the scanning. The data is stored in 489 tiles.

LAStools processing:
1) run a series of classifications of a few selected tiles with both lasground and lasground_new using various parameters (see the sketch after this list) [lasground, lasground_new]
2) test the outcomes by comparing them to known terrain to find the optimal parameters for classifying this particular LiDAR point cloud for archaeological purposes
3) extract the bare earth of all tiles (using buffers!!!) with the best parameters [lasground or lasground_new]
4) create bare-earth terrain rasters (DTMs) and analyze the area [las2dem]
5) reclassify the airborne LiDAR data collected by the National Land Survey using various parameters to see whether it can become more suitable for revealing Sami cultural remains in a boreal forest landscape [lasground or lasground_new]
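A minimal sketch of step 1 for a single test tile; the step size and granularity flags shown are hypothetical starting points for the parameter experiments, not recommendations:

lasground -i test_tile.laz ^
          -step 3 -ultra_fine ^
          -odix _g1 -olaz

lasground_new -i test_tile.laz ^
              -step 3 -ultra_fine ^
              -odix _g2 -olaz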

References:
Jansson, J., Alexander, B. & Söderman, U. 2009. Laserskanning från flyg och fornlämningar i skog. Länsstyrelsen Dalarna (PDF).
Koivisto, S. & Laulamaa, V. 2012. Pistepilvessä – Metsien arkeologiset kohteet LiDAR-ilmalaserkeilausaineistoissa. Arkeologipäivät 2012 (PDF).
Risbøl, O., Bollandsås, O.M., Nesbakken, A., Ørka, H.O., Næsset, E., Gobakken, T. 2013. Interpreting cultural remains in airborne laser scanning generated digital terrain models: effects of size and shape on detection success rates. Journal of Archaeological Science 40:4688–4700.

LASmoons: Muriel Lavy

Muriel Lavy (recipient of three LASmoons)
RED (Risk Evaluation Dashboard) project
ISE-Net s.r.l, Aosta, ITALY.

Background:
The Aosta Valley Region is a mountainous area in the heart of the Alps. This region is regularly affected by natural hazard phenomena connected with the terrain geomorphometry and climate change: snow avalanches, rockfalls, and landslides.
In July 2016 a research program was started, funded by the European Program for Regional Development, that aims to create a cloud dashboard for the monitoring, control, and analysis of several parameters and data derived from advanced sensors: multiparametric probes, aerial and oblique photogrammetry, and laser scanning. This tool will help the territorial management agencies to improve the risk mitigation and management system.

The RIEGL VZ-4000 scanning the Aosta Valley Region in Italy.

Goal:
This study aims to classify the point clouds derived from aerial imagery integrated with laser scanning data in order to generate accurate DTMs, DSMs, and Digital Snow Models. The photogrammetry data set was acquired with a Nikon D810 camera during a helicopter survey. The aim of further analysis is to detect changes caused by natural dynamic phenomena via volume analysis and mass balance evaluation.

Data:
+ The photogrammetry data set was acquired with an RGB camera (Nikon D810) with a focal length equivalent of 50 mm from a helicopter survey: 1060 JPG images
+ The laser scanner data set was acquired using a Terrestrial Laser Scanner (RIEGL VZ-4000) combined with a Leica GNSS device (GS25) to georeference the project. The TLS dataset was then used as the base reference to properly align and georeference the photogrammetry point cloud.

LAStools processing:
1) check the reference system and the point cloud density [lasinfo, lasvalidate]
2) remove isolated noise points [lasnoise]
3) classify point into ground and non-ground [lasground]
4) classify point clouds into vegetation and other (steps 3 and 4 are sketched after this list) [lasclassify]
5) create DTM and DSM  [las2dem, lasgrid, blast2dem]
6) produce 3D visualizations to facilitate the communication and the interaction [lasview]
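A minimal sketch of steps 3 and 4 (directory names are hypothetical): lasclassify expects per-point heights above ground, so a lasheight run is inserted between the ground classification and the vegetation/building classification.

lasground -i tiles\*.laz ^
          -odir tiles_ground -olaz ^
          -cores 4

lasheight -i tiles_ground\*.laz ^
          -odir tiles_height -olaz ^
          -cores 4

lasclassify -i tiles_height\*.laz ^
            -odir tiles_classified -olaz ^
            -cores 4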