LASmoons: Gabriele Garnero

Gabriele Garnero (recipient of three LASmoons)
Interuniversity Department of Regional and Urban Studies and Planning
Politecnico e Università degli Studi, Torino
ITALY

Background:
Last spring, the LARTU research group carried out a laser scanner survey of the Abbey of Sant’Andrea in Vercelli on the occasion of the eighth centenary of its dedication (1219). The database was produced with a topographic instrument that integrates a total station with laser scanner and photogrammetric sensors (Trimble SX10). It has been used to produce representations that can be consulted interactively, navigating within the point clouds, and to build a consultation platform that can also be accessed by non-specialist users such as art historians or archaeologists.


Goal:
The LAStools software will be used to improve the point cloud by eliminating the remaining noise and to explore other ways of publishing the data, so as to make it usable from outside by the wider community of researchers.

Data:
+ laser scanner and photogrammetric acquisitions of the interior of the building (150 million points)
+ laser scanner and photogrammetric acquisitions of the outside of the building (210 million points)
+ drone-based imagery of the outdoor areas processed with Pix4D (23 million points)

LAStools processing:
1) tile large point cloud into tiles with buffer [lastile]
2) mark set of points whose z coordinate is a certain percentile of that of their neighbors [lasthin]
3) remove isolated low points from the set of marked points [lasnoise]
4) classify marked points into ground and non-ground [lasground]
5) create a LiDAR portal for 3D visualization of the LAS files [laspublish]
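
A minimal DOS batch sketch of these five steps is given below. The input name, tile size, cell size, percentile, and noise thresholds are placeholders to be tuned for this survey, and the exact laspublish switches should be checked against its README; the remaining switches are the ones used in the tutorials further down this page.

rem sketch of the proposed LAStools pipeline (all numeric values are placeholders)
rem 1) tile the large point cloud into buffered tiles
lastile -i abbey\*.laz -tile_size 50 -buffer 10 -flag_as_withheld -odir tiles_raw -o vercelli.laz
rem 2) mark the point per 50 cm cell that is closest to the 50th elevation percentile
lasthin -i tiles_raw\*.laz -step 0.5 -percentile 50 20 -classify_as 8 -odir tiles_thinned -olaz -cores 4
rem 3) remove isolated low points from the set of marked points
lasnoise -i tiles_thinned\*.laz -ignore_class 0 -step_xy 1.0 -step_z 0.25 -isolated 3 -classify_as 7 -odir tiles_denoised -olaz -cores 4
rem 4) classify the marked points into ground (2) and non-ground (1)
lasground -i tiles_denoised\*.laz -ignore_class 0 7 -odir tiles_ground -olaz -cores 4
rem 5) build a 3D web portal from the cleaned tiles (check the laspublish README for the exact switches)
laspublish -i tiles_ground\*.laz -odir portal -o portal.html -olaz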

Clean DTM from Agisoft Photogrammetric Points of Urban Scene

We occasionally get permission to distribute a nice data set and blog about how to best process it with LAStools because this gets around having to pay our “outrageous” consulting fees. (-: This time we received a nice photogrammetric point cloud of the Tafawa Balewa Square in Lagos Island, Lagos, Nigeria. This area is part of the central business district of Lagos and is characterized by high-rise buildings. The Tafawa Balewa Square was constructed in 1972 over the site of a defunct track for horse racing and is bounded by Awolowo road, Cable street, Force road, Catholic Mission street and the 26-story Independence House. We want to create a nice Digital Terrain Model from the dense-matching point cloud that was generated with Photoscan by AgiSoft and – as always with photogrammetry – we have to take special care of low noise points. The final result is shown below. All processing commands used are here.

After downloading the data it is useful to familiarize yourself with the file, which can be done with lasview, lasinfo, and lasgrid using the command lines shown below. According to the lasinfo report there are around 47 million points with RGB colors in the file and their average density is around 100 points per square meter.

lasview -i 0_raw\TafawaBalewa.laz

lasinfo -i 0_raw\TafawaBalewa.laz ^
        -cd -histo intensity 256 ^
        -histo z 1 ^
        -odir 1_quality -odix _info -otxt

lasgrid -i 0_raw\TafawaBalewa.laz ^
        -step 1 ^
        -density ^
        -false -set_min_max 50 150 ^
        -odir 1_quality -odix _d_50_150 -opng

The average point density value of 100 from the lasinfo report suggests that 50 as the minimum and 150 as the maximum are good false color ramp values for a map showing how the point density per square meter is distributed.

Color-coded point density: blue equals 50 or less and red means 150 or more points per square meter.

We use lastile to create a buffered tiling for the 47 million points. We use a tile size of 200 meters and request a large buffer of 50 meters around every tile because there are large buildings in the survey area. We also flag buffer points as withheld so they can easily be dropped later.

lastile -i 0_raw\TafawaBalewa.laz ^
        -tile_size 200 -buffer 50 -flag_as_withheld ^
        -odir 2_tiles_raw -o tafawa.laz

If you inspect the resulting tiles – such as ‘tafawa_544000_712600.laz’ as shown here – with lasview you will see the kind of low noise that is shown below and that may cause trouble for a ground classification algorithm. While our lasground software is able to deal with a certain amount of low noise, if there are too many such points it will likely latch onto them. Therefore we will first generate a subset of points that contains as few of these low noise points as possible.
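
The tile mentioned above can be inspected with:

lasview -i 2_tiles_raw\tafawa_544000_712600.laz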

Typical low noise in dense-matching photogrammetry points in urban scene.

Next we use a sequence of three LAStools modules, namely lasthin, lasground, and lasheight to classify this photogrammetric point cloud into ground and non-ground points. All processing commands used are here. First we use lasthin to give the classification code 8 to the point that is closest to the 50th percentile in elevation within every 50 centimeter by 50 centimeter cell (but only for cells containing at least 20 points).

lasthin -i 2_tiles_raw\tafawa*.laz ^
        -step 0.5 ^
        -percentile 50 20 ^
        -classify_as 8 ^
        -odir 3_tiles_median_50cm -olaz ^
        -cores 3

Next we use lasground to ground-classify only the points that have classification code 8 (i.e. by ignoring those with classification code 0) and set their classification code to ground (2) or non-ground (1). Because of the large buildings in this urban scene we use ‘-metro’ which uses a large step size of 50 meters for the pre-processing. This also sets the internally used bulge parameter to 5.0, which you can see if you run the tool in verbose ‘-v’ mode. In three different trial runs we determined that forcing the bulge parameter to be 0.5 instead gave better results. Varying the bulge and the spike parameters can be useful to improve ground classification results (also see the README file).

lasground -i 3_tiles_median_50cm\tafawa*.laz ^
          -ignore_class 0 ^
          -metro -bulge 0.5 ^
          -odir 4_tiles_ground_50cm -olaz ^
          -cores 3

The resulting ground points are a subset with a resolution of 50 centimeters that is good enough to create a DTM with meter resolution, which we do with the las2dem command line shown below. We really like storing DTM elevation rasters in the LAZ point format because it is a more compressed way of storing elevation rasters compared to ASC, BIL, TIF, or IMG. It also makes the raster output a natural input to subsequent LAStools processing steps.

las2dem -i 4_tiles_ground_50cm\tafawa*.laz ^
        -keep_class 2 ^
        -step 1 -kill 100 ^
        -use_tile_bb ^
        -odir 5_tiles_dtm_1m -olaz ^
        -cores 3
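
As an aside, should a client require a conventional raster format for delivery, the gridded LAZ tiles can be converted later; for example, blast2dem can write the merged DTM as a single GeoTIFF instead of a hillshaded PNG. Below is a sketch where the output file name is illustrative and we believe the ‘.tif’ extension selects TIF output (see the las2dem/blast2dem README):

blast2dem -i 5_tiles_dtm_1m\tafawa*.laz -merged ^
          -step 1 ^
          -o dtm_1m.tif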

Finally we use blast2dem to create a seamless hill-shaded version of our 1 meter DTM from on-the-fly merged elevation rasters. This is the DTM pictured at the beginning of this article.

blast2dem -i 5_tiles_dtm_1m\tafawa*.laz -merged ^
          -step 1 ^
          -hillshade ^
          -o dtm_1m.png

The corresponding DSM pictured at the beginning of this article was generated with the two command lines below by first keeping only the 95th percentile highest elevation of every 50 cm by 50 cm cell with lasthin (which removes spurious high noise points) and then triangulating the surviving points with blast2dem into a seamless TIN that is also hill-shaded and rasterized with 1 meter resolution. Running the 64 bit version of lasthin (note the ‘-cpu64‘ switch) allows us to work on the entire data set (rather than its tiled version) at once, whereas the standard 32 bit version may run out of memory.

lasthin -i 0_raw\TafawaBalewa.laz ^
        -cpu64 ^
        -step 0.5 ^
        -percentile 95 20 ^
        -o 0_raw\TafawaBalewa_p95_50.laz

blast2dem -i 0_raw\TafawaBalewa_p95_50.laz ^
          -step 1 ^
          -hillshade ^
          -o dsm_1m.png

In order to generate the final DTM at higher resolution we use lasheight to pull all points into the ground class that lie within a 5 cm distance vertically below or a 15 cm distance vertically above the triangulated surface of ground points computed in the previous step. You could experiment with other values here to be less or more conservative about pulling detail into the ground class.

lasheight -i 4_tiles_ground_50cm\tafawa*.laz ^
          -classify_between -0.05 0.15 2 ^
          -odir 6_tiles_ground -olaz ^
          -cores 3

We repeat the same las2dem processing step as before to create the raster DTM tiles, but this time with a resolution of 25 cm.

las2dem -i 6_tiles_ground\tafawa*.laz ^
        -keep_class 2 ^
        -step 0.25 -kill 100 ^
        -use_tile_bb ^
        -odir 7_tiles_dtm_25cm -olaz ^
        -cores 3

And we again use blast2dem to create a seamless hill-shaded version of the DTM from on-the-fly merged elevation rasters, but this time with a resolution of 25 cm. This is the DTM shown below. All processing commands used are here.

blast2dem -i 7_tiles_dtm_25cm\tafawa*.laz -merged ^
          -step 0.25 ^
          -hillshade ^
          -o dtm_25cm.png

Hill-shade of final DTM with resolution of 25 cm.

Smooth DTM from Drone LiDAR off Velodyne HDL-32E mounted on DJI M600 UAV

Recently we attempted to do a small LiDAR survey by drone for a pet project of our CEO in our “code and surf camp” here in Samara, Costa Rica. But surveying is difficult when you are a novice and we ran into a trajectory issue. The dramatic “wobbles” were entirely our fault, but fortunately our mistakes also led to something useful: We found some LAS export bugs. Our laser scanner was a Velodyne HDL-32E integrated with a NovAtel INS into the Snoopy Series A HD made by LiDARUSA. The system was carried by a DJI Matrice 600 (M600) drone. We processed the trajectory with NovAtel Inertial Explorer (here we made the “wobbles” error) and finally exported the LAS and LAZ files with ScanLook PC (version 1.0.182) from LiDARUSA.

While we were investigating our “wobbles” (which clearly were our mistake) we also found five different LAS export bugs in ScanLook PC that seem to have started sometime after version 1.0.171 and will likely end with version 1.0.193. Below is an illustration of a correct export from version 1.0.129 and a buggy export from version 1.0.182. In both instances you see the returns from one revolution of the Velodyne HDL-32E scanner head ordered by their GPS time stamps and colored to distinguish the 32 separate beams. In the buggy version, groups of around seven non-adjacent returns are given the same time stamp. This bug will only affect you if correct GPS time stamps are important for your subsequent LiDAR processing or if your client explicitly asked for ASPRS specification compliant LAS files. We plan to publish another blog post detailing how to find this GPS time stamping bug (and the other four bugs we found).

During the many interactions we had working through “wobbles” and export bugs, we obtained a nice set of six flight lines from Seth Gulich of Bowman Consulting – a US company based in Stuart, Florida – who flew an identical “Snoopy Series A HD” system, also on a DJI Matrice 600 drone, at approximately 100 feet above ground level over a model airplane airport in Palm Beach, Florida. You can download the data set here. In the following we will check the flight line alignment of this data set and then process it into a smooth DTM. All command lines used are summarized in this text file.

First we generate a lasinfo report that includes a number of histograms for on-the-fly merged flight lines with lasinfo and then use the z coordinate histogram from the lasinfo report to set reasonable min/max values for the elevation color ramp of lasview:

lasinfo -i 0_strips_raw\Velodyne*.laz -merged ^
        -cd ^
        -histo z 1 ^
        -histo user_data 1 ^
        -histo point_source 1 ^
        -o 1_quality\Velodyne_merged_info.txt

lasview -i 0_strips_raw\Velodyne*.laz ^
        -points 10000000 ^
        -set_min_max 25 75

The lasinfo report shows no information about the coordinate reference system. We found out experimentally that the horizontal coordinates seem to use EPSG code 2236 and that the vertical units are most likely US survey feet. The warnings you will see in the lasinfo report have to do with the fact that the double-precision bounding box stored in the LAS header was populated with numbers that have many more decimal digits than the coordinates in the file, which only have millifeet resolution as all three scale factors are 0.001 (meaning coordinates have three decimal digits). The information about which of the 32 lasers collected which point is stored in both the ‘user data’ and the ‘point source ID’ field, which is evident from the histograms in the lasinfo report. We need to be careful not to override both of these fields in later processing.
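
A quick way to test such a guess – similar to the alignment check we do for another data set further down this page – is to rasterize the merged flight lines with the assumed coordinate reference system and compare the geo-referenced result against aerial imagery. Below is a sketch where the output file name is illustrative; if the guess is right, the false-colored PNG (and the KML that lasgrid typically writes alongside geo-referenced rasters) should line up with Google Earth imagery:

lasgrid -i 0_strips_raw\Velodyne*.laz -merged ^
        -epsg 2236 -elevation_survey_feet ^
        -step 1 -highest ^
        -false -set_min_max 25 75 ^
        -odir 1_quality -o check_epsg_2236.png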

Next we use lasoverlap to check how well the LiDAR points from the flight out and the flight back align vertically. This tool computes the difference of the lowest points for each square foot covered by multiple flight lines. In both runs, differences of less than a quarter of a foot are mapped to white; differences of more than one foot in the first run (more than half a foot in the second run) are mapped to saturated red or blue depending on whether the difference is positive or negative:

lasoverlap -i 0_strips_raw\Velodyne*.laz ^
           -faf ^
           -min_diff 0.25 -max_diff 1.00 -step 1 ^
           -odir 1_quality -o overlap_025_100.png

lasoverlap -i 0_strips_raw\Velodyne*.laz ^
           -faf ^
           -min_diff 0.25 -max_diff 0.50 -step 1 ^
           -odir 1_quality -o overlap_025_050.png

We use a new feature of the LAStools GUI (as of version 180429) to inspect large red or blue areas more closely. With lasmerge we clip out regions that look suspect for closer examination with lasview. First we spatially index the flight lines to make this process faster. With the ‘-gui’ switch we start the tool in GUI mode with the flight lines already loaded. Using the new PNG overlay roll-out on the left we add the ‘overlap_025_050_diff.png’ image from the quality folder created in the last step and clip out three areas.

lasindex -i 0_strips_raw\Velodyne*.laz ^
         -tile_size 10 -maximum -100 ^
         -cores 3

lasmerge -i 0_strips_raw\Velodyne*.laz -gui

You can also clip out these three areas using the command lines below:

lasmerge -i 0_strips_raw\Velodyne*.laz ^
         -faf ^
         -inside_tile 939500 889860 100 ^
         -o 1_quality\939500_889860.laz

lasmerge -i 0_strips_raw\Velodyne*.laz ^
         -faf ^
         -inside_tile 940400 889620 100 ^
         -o 1_quality\940400_889620.laz

lasmerge -i 0_strips_raw\Velodyne*.laz ^
         -faf ^
         -inside_tile 940500 890180 100 ^
         -o 1_quality\940500_890180.laz

The reader may inspect the areas 939500_889860.laz, 940400_889620.laz, and 940500_890180.laz with lasview using profile views via hotkey ‘x’ and switching back and forth between the points from different flight lines via hotkeys ‘0’, ‘1’, ‘2’, ‘3’, … for individual and ‘a’ for all flight lines, as we have done in previous tutorials [1,2,3]. Using drop-lines or rise-lines via the pop-up menu gives you a sense of scale. Removing points with lastrack that are horizontally too far from the trajectory could be one strategy to reduce the number of outliers. But as our surfaces are expected to be “fluffy” (because we have a Velodyne LiDAR system), we accept these flight line differences and continue processing.
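
For example, the first clipped area can be opened with:

lasview -i 1_quality\939500_889860.laz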

Here is the complete LAStools processing pipeline for creating an average ground model from the set of six flight lines that results in the hillshaded DTM shown below. The workflow is similar to those we have developed in earlier blog posts for Velodyne Puck based systems like the Hovermap and the Yellowscan and in the other Snoopy tutorial. All command lines used are summarized in this text file.

Hillshaded DTM with half foot resolution generated via average ground computation with LAStools.

In the first step we lastile the six flight lines into 250 by 250 feet tiles with a 25 feet buffer while preserving flight line information. The flight line information will be stored in the “point source ID” field of each point and therefore overrides the beam ID that is currently stored there. But the beam ID is also stored in the “user data” field, as the lasinfo report told us. We set all classifications to zero and add information about the horizontal coordinate reference system EPSG code 2236 and the vertical units (US survey feet).

lastile -i 0_strips_raw\*.laz ^
        -faf ^
        -set_classification 0 ^
        -epsg 2236 -elevation_survey_feet ^
        -tile_size 250 -buffer 25 -flag_as_withheld ^
        -odir 2_tiles_raw -o pb.laz

On three cores in parallel we then lassort the points in the tiles into a space-filling curve order which will accelerate later operations.

lassort -i 2_tiles_raw\*.laz ^
        -odir 2_tiles_sorted -olaz ^
        -cores 3

Next we use lasthin to classify with code 8 the point whose elevation is closest to the 5th elevation percentile among all points falling into its cell. We run lasthin multiple times and each time increase the cell size from 1, 2, 4, 8 to 16 feet. We do this because we have requested the 5th elevation percentile to only be computed when there are at least 20 points in the cell. Percentiles are statistical measures and need a reasonable sample size to be stable. Because drone flights are very dense in the center and more sparse at the edges, this increase in cell size assures that we have a good selection of points classified with classification code 8 across the entire survey area. The five calls are listed individually below and a condensed loop version is sketched after them.

lasthin -i 2_tiles_sorted\*.laz ^
        -step 1 -percentile 5 20 -classify_as 8 ^
        -odir 3_tiles_thinned_p05_step01 -olaz ^
        -cores 3

lasthin -i 3_tiles_thinned_p05_step01\*.laz ^
        -step 2 -percentile 5 20 -classify_as 8 ^
        -odir 3_tiles_thinned_p05_step02 -olaz ^
        -cores 3

lasthin -i 3_tiles_thinned_p05_step02\*.laz ^
        -step 4 -percentile 5 20 -classify_as 8 ^
        -odir 3_tiles_thinned_p05_step04 -olaz ^
        -cores 3

lasthin -i 3_tiles_thinned_p05_step04\*.laz ^
        -step 8 -percentile 5 20 -classify_as 8 ^
        -odir 3_tiles_thinned_p05_step08 -olaz ^
        -cores 3

lasthin -i 3_tiles_thinned_p05_step08\*.laz ^
        -step 16 -percentile 5 20 -classify_as 8 ^
        -odir 3_tiles_thinned_p05_step16 -olaz ^
        -cores 3
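
For convenience, these five calls can also be folded into a single DOS batch loop. Below is a minimal sketch; the ‘setlocal’ bookkeeping and the slightly different directory suffixes (without leading zeros) are our own additions, while all lasthin switches are exactly those used above.

@echo off
setlocal enabledelayedexpansion
rem start from the sorted tiles and thin with increasing cell sizes
set PREV=2_tiles_sorted
for %%S in (1 2 4 8 16) do (
  rem keep the point closest to the 5th elevation percentile per %%S by %%S foot cell
  lasthin -i !PREV!\*.laz -step %%S -percentile 5 20 -classify_as 8 -odir 3_tiles_thinned_p05_step%%S -olaz -cores 3
  set PREV=3_tiles_thinned_p05_step%%S
)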

Then we let lasground_new run on only the points classified with classification code 8 (i.e. by ignoring the points still classified with code 0) which classifies them into ground (code 2) and non-ground (code 1).

lasground_new -i 3_tiles_thinned_p05_step16\*.laz ^
              -ignore_class 0 ^
              -town ^
              -odir 4_tiles_ground_low -olaz ^
              -cores 3

The ground points we have computed form somewhat of a lower envelope of the “fluffy” points of a Velodyne scanner. With lasheight we now draw all the points near the ground – namely those from 0.1 foot below to 0.4 foot above the ground – into a new classification code 6 that we term “thick ground”. The ‘-do_not_store_in_user_data’ switch prevents the default behavior of lasheight, which would otherwise override the beam ID information stored in the ‘user data’ field with an approximate height value.

lasheight -i 4_tiles_ground_low\*.laz ^
          -classify_between -0.1 0.4 6 ^
          -do_not_store_in_user_data ^
          -odir 4_tiles_ground_thick -olaz ^
          -cores 3

A few close-up shots of the resulting “thick ground” are shown in the picture gallery below.

We then use lasgrid to average the (orange) thick ground points onto a regular grid with a cell spacing of half a foot. By adding the ‘-use_tile_bb’ switch we do not grid the tile buffers.

lasgrid -i 4_tiles_ground_thick\*.laz ^
        -keep_class 6 ^
        -step 0.5 -average ^
        -use_tile_bb ^
        -odir 5_tiles_gridded_mean_ground -olaz ^
        -cores 3

Finally we use blast2dem to merge all the averaged ground point grids into one file, interpolate across open areas without ground points, and compute the hillshaded DTM shown above. All command lines used are summarized in this text file.

blast2dem -i 5_tiles_gridded_mean_ground\*.laz ^
          -merged ^
          -step 0.5 ^
          -hillshade ^
          -o dtm.png

We thank Seth Gulich of Bowman Consulting for sharing this LiDAR data set with us. It was flown with a DJI Matrice 600 drone carrying a “Snoopy A series HD” LiDAR system from LidarUSA.

Using Open LiDAR to Remove Low Noise from Photogrammetric UAV Point Clouds

We collected drone imagery of the restored “Kance” tavern during the lunch stop of the UAV Tartu summer school field trip (actually the organizer Marko Kohv did that, as I was busy introducing students to SUP boarding). With Agisoft PhotoScan we then processed the images into point clouds below the deck of the historical “Jõmmu” barge on the way home (actually Marko did that, because I was busy enjoying the view of the wetlands in the afternoon sun). The resulting data set with 7,855,699 points is shown below and can be downloaded here.

7,855,699 points produced with Agisoft Photoscan

Generating points using photogrammetric techniques in scenes containing water bodies tends to be problematic, as dense blotches of noise points above and below the water surface are common, as you can see in the picture below. Especially the low points are troublesome as they adversely affect ground classification, which results in poor Digital Terrain Models (DTMs).

Clusters of low noise points nearly 2 meters below the actual surface in water areas.

In a previous article we have described a LAStools workflow that can remove excessive low noise. In this article here we use external information about the topography of the area to clean our photogrammetry points. How convenient that the Estonian Land Board has just released their entire LiDAR archives as open data.

Following these instructions here you can download the available open LiDAR for this area, which has the map sheet index 475681. Alternatively you can download the four currently available data sets here flown in spring 2010, in summer 2013, in spring 2014, and in summer 2017. In the following we will use the one flown in spring 2014.

We can view both data sets simultaneously in lasview. By adding ‘-faf’ to the command-line we can switch back and forth between the two data sets by pressing ‘0’ and ‘1’.

lasview -i Kantsi.laz ^
        -i 475681_2014_tava.laz ^
        -points 10000000 ^
        -faf

We cut the 1 km by 1 km LiDAR tile down to a 250 m by 250 m tile that nicely surrounds our photogrammetric point set using the following las2las command line:

las2las -i 475681_2014_tava.laz ^
        -inside_tile 681485 6475375 250 ^
        -o LiDAR_Kantsi.laz

lasview -i Kantsi.laz ^
        -i LiDAR_Kantsi.laz ^
        -points 10000000 ^
        -faf

Scrutinizing the two data sets we quickly find that there is a misalignment between the dense imagery-derived point cloud and the comparatively sparse LiDAR point cloud. With lasview we investigate the difference between the two point clouds by hovering over a point from one point cloud and pressing <i> and then hovering over a somewhat corresponding point from the other point cloud and pressing <SHIFT>+<i>. We measure displacements of around 2 meters vertically and of around 3 to 3.5 meters in total.

Before we can use the LiDAR points to remove the low noise from the photogrammetric points we must align them properly. For simple translation errors this can be done with a new feature that was recently added to lasview. Make sure to download the latest version (190404 or newer) of LAStools to follow the steps shown in the image sequence below.

las2las -i Kantsi.laz ^
        -translate_xyz 0.89 -1.90 2.51 ^
        -o Kantsi_shifted.laz

lasview -i Kantsi_shifted.laz ^
        -i LiDAR_Kantsi.laz ^
        -points 10000000 ^
        -faf

The result looks good in the sense that both sides of the photogrammetric roof are reasonably well aligned with the LiDAR. But there is still a shift along the roof so we repeat the same thing once more as shown in the next image sequence:

We use a suitable displacement vector and apply it to the photogrammetry points, shifting them again:

las2las -i Kantsi_shifted.laz ^
        -translate_xyz -1.98 -0.95 0.01 ^
        -o Kantsi_shifted_again.laz

lasview -i Kantsi_shifted_again.laz ^
        -i LiDAR_Kantsi.laz ^
        -points 10000000 ^
        -faf
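
As an aside, the two interactively determined shifts could also be applied in a single call by summing the two vectors (0.89 - 1.98 = -1.09, -1.90 - 0.95 = -2.85, 2.51 + 0.01 = 2.52); the output file name below is just illustrative:

las2las -i Kantsi.laz ^
        -translate_xyz -1.09 -2.85 2.52 ^
        -o Kantsi_shifted_combined.laz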

The result is still not perfect as there is also some rotational error, and you may find other software such as CloudCompare better suited to align the two point clouds, but for this exercise the alignment shall suffice. Below you see the match between the photogrammetry points and the LiDAR TIN before and after shifting the photogrammetry points with the two (interactively determined) displacement vectors.

The final steps of this exercise use las2dem and the already ground-classified LiDAR to compute a 1 meter DTM, which we then use as input to lasheight. We classify the photogrammetry points using their height above this set of ground points with 1 meter spacing: points that are 40 centimeters or more below the LiDAR DTM are classified as noise (7), and points that are between 40 centimeters below and 1 meter above the LiDAR DTM are classified into a temporary class (here we choose 8) that holds those points that could potentially be ground points. This will help, for example, with subsequent ground classification as large parts of the photogrammetry points – namely those on top of buildings and in higher vegetation – can be ignored from the start by a ground classification algorithm such as lasground.

las2dem -i LiDAR_Kantsi.laz ^
        -keep_class 2 ^
        -kill 1000 ^
        -o LiDAR_Kantsi_dtm_1m.bil
lasheight -i Kantsi_shifted_again.laz ^
          -ground_points LiDAR_Kantsi_dtm_1m.bil ^
          -classify_below -0.4 7 ^
          -classify_between -0.4 1.0 8 ^
          -o Kantsi_cleaned.laz

Below are the results we have achieved after “roughly” aligning the two point clouds with some new lasview tricks and then using the LiDAR elevations to classify the photogrammetry points into “low noise”, “potential ground”, and “all else”.

We thank the Estonian Land Board for providing open data with a permissive license. Special thanks also go to the organizers of the UAV Summer School in Tartu, Estonia and the European Regional Development Fund for funding this event. Especially fun was the fabulous excursion to the Emajõe-Suursoo Nature Reserve and through to Lake Peipus aboard, overboard and aboveboard the historical barge “Jõmmu”. If you look carefully you can also find the barge in the photogrammetry point cloud. The photogrammetry data used here was acquired during our lunch stop.

Fun aboard and overboard the historical barge “Jõmmu”.

Digital Pothole Removal: Clean Road Surface from Noisy Pix4D Point Cloud

How to generate a clean Digital Terrain Model (DTM) from point clouds that were generated with the image matching techniques implemented in various photogrammetry software packages like those from Pix4D, AgiSoft, nframes, DroneDeploy and others has become an ever more frequent inquiry. There are many other blog posts on the topic that you can peruse as well [1,2,3,4,5,6,7,8]. In the following we go step by step through the process of removing low noise from a high-density point cloud that was generated with Pix4D software. A composite of the resulting DTM and DSM is shown below.

Final DSM and DTM created with LAStools for a photogrammetric point cloud of a road generated by Pix4D.

After downloading the data it is useful to familiarize oneself with the number of points, the density of points and their geo-location. This can be done with lasview, lasinfo, and lasgrid using the command lines shown below. There are around 19 million points in the file and their density averages around 2300 points per square meter. Because the RGB values have a 16 bit range (as evident in the lasinfo report) we need to add the option ‘-scale_rgb_down’ to the command line when producing the RGB raster with lasgrid.

lasview -i 0_photogrammetry\densified_point_cloud.laz

lasinfo -i 0_photogrammetry\densified_point_cloud.laz ^
        -cd ^
        -o 1_quality\densified_point_cloud.txt

lasgrid -i 0_photogrammetry\densified_point_cloud.laz ^
        -scale_rgb_down ^
        -step 0.10 ^
        -rgb ^
        -fill 1 ^
        -o 1_quality\densified_point_cloud.png

The first step is to use lastile and create smaller and buffered tiles for these 19 million photogrammetry points. We use a tile size of 100 meters, request a buffer of 10 meters around every tile, and flag buffer points as withheld so they can easily be dropped later. We also make sure that all classification codes are reset to 0.

lastile -i 0_photogrammetry\points.laz ^
        -set_classification 0 ^
        -tile_size 100 -buffer 10 -flag_as_withheld ^
        -o 2_tiles_raw\seoul.laz -olaz

We start with lassort as a pre-processing step that rearranges the points into a more coherent spatial order which often accelerates subsequent processing steps.

lassort -i 2_tiles_raw\seoul_*.laz ^
        -odir 3_tiles_temp0 -olaz ^
        -cores 4

Next we use a sequence of four modules, namely lasthin, lasnoise, lasground, and lasheight with fine-tuned parameters to remove the low noise points that are typical for point clouds generated from imagery by photogrammetry software. A typical example of such noise points is shown in the image below, generated with this call to lasview:

lasview -i 3_tiles_temp0\seoul_210400_542900.laz ^
        -inside 210406 542894 210421 542921 ^
        -points 20000000 ^
        -kamera 0 -95 90 0 -0.3 1.6 ^
        -point_size 4

Ground surface noise (exaggerated by pressing <]> in lasview which doubles the scale in z).

As always, the idea is to construct a surface that is close to the ground but always above the noise so that it can be used to declare all points beneath it as noise. Below is a processing pipeline whose parameters work well for this data and that you can fine tune for the point density and the noise profile of your own data.

First we use lasthin to give the classification code 8 to the point that is closest to the 70th percentile in elevation within every 20 cm by 20 cm cell. As statistics like percentiles are only stable for a sufficient number of points, we only do this for cells that contain 25 or more points. Given that we have an average of 2300 points per square meter this should easily be the case for all relevant cells.

lasthin -i 3_tiles_temp0\seoul_*.laz ^
        -step 0.20 ^
        -percentile 70 25 ^
        -classify_as 8 ^
        -odir 3_tiles_temp1 -olaz ^
        -cores 4

Then we run lasnoise only on the points with classification code 8 and reclassify all “overly isolated” points with code 9. The check for isolation uses cells of size 20 cm by 20 cm by 5 cm and reclassifies the points in the center cell when the surrounding neighborhood of 27 cells has only 3 or fewer points in total. Changing the parameters of ‘-step_xy 0.20 -step_z 0.05 -isolated 3’ will remove isolated points more or less aggressively.

lasnoise -i 3_tiles_temp1\seoul_*.laz ^
         -ignore_class 0 ^
         -step_xy 0.20 -step_z 0.05 -isolated 3 ^
         -classify_as 9 ^
         -odir 3_tiles_temp2 -olaz ^
         -cores 4

Next we use lasground to ground-classify only the surviving points (that still have classification code 8) by ignoring those with classification codes 0 or 9. This sets their classification code to either ground (2) or non-ground (1). The temporary surface defined by the resulting ground points will be used to classify low points as noise in the next step.

lasground -i 3_tiles_temp2\seoul_*.laz ^
          -ignore_class 0 9 ^
          -town -ultra_fine -bulge 0.1 ^
          -odir 3_tiles_temp3 -olaz ^
          -cores 4

Then we use lasheight to classify all points that are 2.5 cm or more below the triangulated surface of temporary ground points as noise (7) and all others as unclassified (1).

lasheight -i 3_tiles_temp3\seoul_*.laz ^
          -classify_below -0.025 7 ^
          -classify_above -0.225 1 ^
          -odir 4_tiles_denoised -olaz ^
          -cores 4

The progress of each step is illustrated visually in the two image sequences shown below.

Now that all noise points are classified we start a standard processing pipeline, but always ignore the low noise points that are now classified with classification code 7.

The processing steps below create a 10 cm DTM raster. We first use lasthin to classify the lowest non-noise point per 10 cm by 10 cm cell. Considering only those lowest points we use lasground with options ‘-town’, ‘-extra_fine’, ‘-bulge 0.05’, and ‘-spike 0.05’. Using las2dem the resulting ground points are interpolated into a TIN and rasterized into a 10 cm DTM cutting out only the center 100 meter by 100 meter tile. We store the DTM raster as a gridded LAZ for maximal compression and finally merge these gridded LAZ files to create a hillshaded raster in PNG format with blast2dem.

lasthin -i 4_tiles_denoised\seoul_*.laz ^
        -ignore_class 7 ^
        -step 0.10 ^
        -lowest ^
        -classify_as 8 ^
        -odir 5_tiles_thinned_lowest -olaz ^
        -cores 4

lasground -i 5_tiles_thinned_lowest\seoul_*.laz ^
          -ignore_class 1 7 ^
          -town -extra_fine ^
          -bulge 0.05 -spike 0.05 ^
          -odir 6_tiles_ground -olaz ^
          -cores 4

las2dem -i 6_tiles_ground\seoul_*.laz ^
        -keep_class 2 ^
        -step 0.10 ^
        -use_tile_bb ^
        -odir 7_tiles_dtm -olaz ^
        -cores 4

blast2dem -i 7_tiles_dtm\seoul_*.laz -merged ^
          -hillshade ^
          -step 0.10 ^
          -o dtm_hillshaded.png

The processing steps below create a 10 cm DSM raster. We first use lasthin to classify the highest non-noise point per 10 cm by 10 cm cell. With las2dem the highest points are interpolated into a TIN and rasterized into a 10 cm DSM cutting out only the center 100 meter by 100 meter tile. Again we store the raster as gridded LAZ for maximal compression and merge these files to create a hillshaded raster in PNG format with blast2dem.

lasthin -i 4_tiles_denoised\seoul_*.laz ^
        -ignore_class 7 ^
        -step 0.10 ^
        -highest ^
        -classify_as 8 ^
        -odir 8_tiles_thinned_highest -olaz ^
        -cores 4

las2dem -i 8_tiles_thinned_highest\seoul_*.laz ^
        -keep_class 8 ^
        -step 0.10 ^
        -use_tile_bb ^
        -odir 9_tiles_dsm -olaz ^
        -cores 4

blast2dem -i 9_tiles_dsm\seoul_*.laz -merged ^
          -hillshade ^
          -step 0.10 ^
          -o dsm_hillshaded.png

The final result is below. The entire script is linked here. Simply download it, modify it as needed, and try it on this data or on your own data.

Scripting LAStools to Create a Clean DTM from Noisy Photogrammetric Point Cloud

A recent inquiry by Drone Deploy in the LAStools user forum gave us access to a nice photogrammetric point cloud for the village of Fillongley in the North Warwickshire district of England. They voted “Leave” with a whopping 66.9% according to the EU referendum results by the BBC. Before we say “Good riddance, Fillongley!” we EU-abuse this little village one last time and remove all their low noise points to create a nice Digital Terrain Model (DTM). The final result is shown below.

Side by side comparison of DTM and DSM generated with LAStools from photogrammetric point cloud by Drone Deploy.

After downloading the data it is useful to familiarize yourself with the number of points, the point density, and their geo-location, which can be done with lasview, lasinfo, and lasgrid using the command lines shown below. There are around 50 million points and their density averages close to 70 points per square meter.

lasview -i 0_photogrammetry\points.laz

lasinfo -i 0_photogrammetry\points.laz ^
        -cd ^
        -o 1_quality\fillongley.txt

lasgrid -i 0_photogrammetry\points.laz ^
        -step 0.50 ^
        -rgb ^
        -fill 1 ^
        -o 1_quality\fillongley.png

The first step is to use lastile and create smaller and buffered tiles for these 50 million photogrammetry points. We use a tile size of 200 meters, request a buffer of 25 meters around every tile, and flag buffer points as withheld so they can easily be dropped later.

lastile -i 0_photogrammetry\points.laz ^
        -tile_size 200 -buffer 25 -flag_as_withheld ^
        -o 2_tiles_raw\fillongley.laz -olaz

Next we use a sequence of four modules, namely lasthin, lasnoise, lasground, and lasheight with fine-tuned parameters to remove the low noise points that are typical for point clouds generated from imagery by photogrammetry software. A typical example of such noise points is shown in the image below.

lasview -i 2_tiles_raw\fillongley_596000_5815800.laz ^
        -inside 596050 5815775 596150 5815825 ^
        -kamera 0 -89 -1.75 0 0 1.5 ^
        -point_size 3

Clumps of low noise points typical for photogrammetry point clouds.

The idea to identify those clumps of noise is to construct a surface that is sufficiently close to the ground but always above the noise so that it can be used to classify all points beneath it as noise. However, preserving true ground features without latching onto low noise points often requires several iterations of fine-tuning the parameters. We did this interactively by repeatedly running the processing on only two representative tiles until a desired outcome was achieved.
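
For example, a single representative tile (here the one also used in the lasview call above) can be pushed through the first thinning step and then inspected before committing to a full run across all tiles:

lasthin -i 2_tiles_raw\fillongley_596000_5815800.laz ^
        -step 0.90 -percentile 20 20 -classify_as 8 ^
        -odir 3_tiles_temp1 -olaz

lasview -i 3_tiles_temp1\fillongley_596000_5815800.laz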

First we use lasthin to give the classification code 8 to the point that is closest to the 20th percentile in elevation within every 90 cm by 90 cm cell (but only for cells containing at least 20 points). Choosing larger step sizes or higher percentiles resulted in missing ground features. Choosing smaller step sizes or lower percentiles resulted in low noise becoming part of the final ground model.

lasthin -i 2_tiles_raw\fillongley_*.laz ^
        -step 0.90 ^
        -percentile 20 20 ^
        -classify_as 8 ^
        -odir 3_tiles_temp1 -olaz ^
        -cores 4

Then we run lasnoise only on the points with classification code 8 (by ignoring those with classification code 0) and reclassify all “overly isolated” points with code 12. The check for isolation uses cells of size 200 cm by 200 cm by 50 cm and reclassifies the points in the center cell when the surrounding neighborhood of 27 cells has only 3 or fewer points in total. Changing the parameters of ‘-step_xy 2.00 -step_z 0.50 -isolated 3’ will remove noise more or less aggressively.

lasnoise -i 3_tiles_temp1\fillongley_*.laz ^
         -ignore_class 0 ^
         -step_xy 2.00 -step_z 0.50 -isolated 3 ^
         -classify_as 12 ^
         -odir 3_tiles_temp2 -olaz ^
         -cores 4

Next we use lasground to ground-classify only the surviving points (that still have classification code 8) by ignoring those with classification codes 0 or 12 and set their classification code to ground (2) or non-ground (1). The temporary surface defined by the resulting ground points will be used to classify low points as noise in the next step.

lasground -i 3_tiles_temp2\fillongley_*.laz ^
          -ignore_class 0 12 ^
          -town -ultra_fine ^
          -odir 3_tiles_temp3 -olaz ^
          -cores 4

Then we use lasheight to classify all points that are 20 cm or more below the triangulated surface of temporary ground points as noise (7) and all others as unclassified (1).

lasheight -i 3_tiles_temp3\fillongley_*.laz ^
          -classify_below -0.20 7 ^
          -classify_above -0.20 1 ^
          -odir 4_tiles_denoised -olaz ^
          -cores 4

The progress of each step is illustrated visually in the two image sequences shown below.


Now that all noise points are classified we start a standard processing pipeline, but always ignore the noise points that are now classified with classification code 7.

The processing steps below create a 25 cm DTM raster. We first use lasthin to classify the lowest non-noise point per 25 cm by 25 cm cell. Considering only those lowest points we use lasground with the options ‘-town’, ‘-extra_fine’, and ‘-bulge 0.1’. Using las2dem the resulting ground points are interpolated into a TIN and rasterized into a 25 cm DTM, cutting out only the center 200 meter by 200 meter tile. We store the DTM raster as gridded LAZ for maximal compression and finally merge these gridded LAZ files to create a hillshaded raster in PNG format with blast2dem.

lasthin -i 4_tiles_denoised\fillongley_*.laz ^
        -ignore_class 7 ^
        -step 0.25 ^
        -lowest ^
        -classify_as 8 ^
        -odir 5_tiles_thinned_lowest -olaz ^
        -cores 4

lasground -i 5_tiles_thinned_lowest\fillongley_*.laz ^
          -ignore_class 1 7 ^
          -town -extra_fine -bulge 0.1 ^
          -odir 6_tiles_ground -olaz ^
          -cores 4

las2dem -i 6_tiles_ground\fillongley_*.laz ^
        -keep_class 2 ^
        -step 0.25 ^
        -use_tile_bb ^
        -odir 7_tiles_dtm -olaz ^
        -cores 4

blast2dem -i 7_tiles_dtm\fillongley_*.laz -merged ^
          -hillshade ^
          -step 0.25 ^
          -o dtm_hillshaded.png

The processing steps below create a 25 cm DSM raster. We first use lasthin to classify the highest non-noise point per 25 cm by 25 cm cell. With las2dem the highest points are interpolated into a TIN and rasterized into a 25 cm DSM cutting out only the center 200 meter by 200 meter tile. Again we store the raster as gridded LAZ for maximal compression and merge these files to create a hillshaded raster in PNG format with blast2dem.

lasthin -i 4_tiles_denoised\fillongley_*.laz ^
        -ignore_class 7 ^
        -step 0.25 ^
        -highest ^
        -classify_as 8 ^
        -odir 8_tiles_thinned_highest -olaz ^
        -cores 4

las2dem -i 8_tiles_thinned_highest\fillongley_*.laz ^
        -keep_class 8 ^
        -step 0.25 ^
        -use_tile_bb ^
        -odir 9_tiles_dsm -olaz ^
        -cores 4

blast2dem -i 9_tiles_dsm\fillongley_*.laz -merged ^
          -hillshade ^
          -step 0.25 ^
          -o dsm_hillshaded.png

The final result is below. The entire script is linked here. Simply download it, modify it as needed, and try it on your data.


New Step-by-Step Tutorial for Velodyne Drone LiDAR from Snoopy by LidarUSA

The folks from Harris Aerial gave us LiDAR data from a test-flight of one of their drones, the Carrier H4 Hybrid HE (with a 5kg maximum payload and a retail price of US$ 28,000), carrying a Snoopy A series LiDAR system from LidarUSA in the countryside near Huntsville, Alabama. The laser scanner used by the Snoopy A series is a Velodyne HDL 32E that has 32 different laser/detector pairs that fire in succession to scan up to 700,000 points per second within a range of 1 to 70 meters. You can download the raw LiDAR file from the 80 second test flight here. As always, the first thing we do is to visualize the file with lasview and to generate a textual report of its contents with lasinfo.

lasview -i Velodyne001.laz -set_min_max 680 750

It becomes obvious that the drone must have scanned parts of itself (probably the landing gear) during the flight and we exploit this fact in the later processing. The information about which of the 32 lasers collected which point is stored in the ‘point source ID’ field, which is usually used for the flightline information. This results in a psychedelic look in lasview as those 32 different numbers get mapped to the 8 different colors that lasview uses for distinguishing flightlines.

The lasinfo report we generate computes the average point density with ‘-cd’ and includes histograms for a number of point attributes, namely for ‘user data’, ‘intensity’, ‘point source ID’, ‘GPS time’, and ‘scan angle rank’.

lasinfo -i Velodyne001.laz ^
        -cd ^
        -histo user_data 1 ^
        -histo point_source 1 ^
        -histo intensity 16 ^
        -histo gps_time 1 ^
        -histo scan_angle_rank 5 ^
        -odir quality -odix _info -otxt

You can download the resulting report here and it will tell you that the information about which of the 32 lasers collected which point was stored both in the ‘user data’ field and in the ‘point source ID’ field. The warnings you see below have to do with the fact that the double-precision bounding box stored in the LAS header was populated with numbers that have many more decimal digits than the coordinates in the file, which only have millimeter (or millifeet) resolution as all three scale factors are 0.001 (meaning three decimal digits).

WARNING: stored resolution of min_x not compatible with x_offset and x_scale_factor: 2171988.6475160527
WARNING: stored resolution of min_y not compatible with y_offset and y_scale_factor: 1622812.606925504
WARNING: stored resolution of min_z not compatible with z_offset and z_scale_factor: 666.63504345017589
WARNING: stored resolution of max_x not compatible with x_offset and x_scale_factor: 2172938.973065129
WARNING: stored resolution of max_y not compatible with y_offset and y_scale_factor: 1623607.5209975131
WARNING: stored resolution of max_z not compatible with z_offset and z_scale_factor: 1053.092674726669
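
These warnings are harmless for our purposes, but if a clean header is desired the bounding box can be recomputed and rewritten in place; to our knowledge lasinfo offers a ‘-repair_bb’ switch for exactly this (check the lasinfo README of your version):

lasinfo -i Velodyne001.laz -repair_bb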

Both the “return number” and the “number of returns” attribute of every point in the file is 2. This makes it appear as if the file would only contain the last returns of those laser shots that produced two returns. However, as the Velodyne HDL 32E only produces one return per shot, we can safely conclude that those numbers should all be 1 instead of 2 and that this is just a small bug in the export software. We can easily fix this with las2las.

reporting minimum and maximum for all LAS point record entries ...
[...]
 return_number 2 2
 number_of_returns 2 2
[...]
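
A minimal stand-alone fix could look like the call below (the ‘_fixed’ suffix is just illustrative); in this tutorial we instead fold the same two switches into the larger las2las calls further down.

las2las -i Velodyne001.laz ^
        -set_return_number 1 ^
        -set_number_of_returns 1 ^
        -odix _fixed -olaz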

The lasinfo report lacks information about the coordinate reference system as there is no VLR that stores projection information. Harris Aerial could not help us other than telling us that the scan was near Huntsville, Alabama. Measuring certain distances in the scene, like the height of the house or the tree, suggests that both horizontal and vertical units are in feet, or rather in US survey feet. After some experimenting we find that using EPSG 26930 for NAD83 Alabama West but forcing the default horizontal units from meters to US survey feet gives a result that aligns well with high-resolution Google Earth imagery as you can see below:

lasgrid -i flightline1.laz ^
        -i flightline2.laz ^
        -merged ^
        -epsg 26930 -survey_feet ^
        -step 1 -highest ^
        -false -set_min_max 680 750 ^
        -o testing26930usft.png

Using EPSG code 26930 but with US survey feet instead of meters results in nice alignment with GE imagery.

We use the fact that the drone has scanned itself to extract an (approximate) trajectory by isolating those LiDAR returns that have hit the drone. Via a visual check with lasview (by hovering with the cursor over the lowest drone hits and pressing hotkey ‘i’) we determine that the lowest drone hits are all above 719 feet. We use two calls to las2las to split the point cloud vertically. In the same calls we also change the resolution from three to two decimal digits, fix the return number issue, and add the missing geo-referencing information:

las2las -i Velodyne001.laz ^
        -rescale 0.01 0.01 0.01 ^
        -epsg 26930 -survey_feet -elevation_survey_feet ^
        -set_return_number 1 ^
        -set_number_of_returns 1 ^
        -keep_z_above 719 ^
        -odix _above719 -olaz

las2las -i Velodyne001.laz ^
        -rescale 0.01 0.01 0.01 ^
        -epsg 26930 -survey_feet -elevation_survey_feet ^
        -set_return_number 1 ^
        -set_number_of_returns 1 ^
        -keep_z_below 719 ^
        -odix _below719 -olaz

We then use the manual editing capabilities of lasview to change the classifications of the LiDAR points that correspond to drone hits from 1 to 12, which is illustrated by the series of screen shots below.

lasview -i Velodyne001_above719.laz

The workflow illustrated above results in a tiny LAY file that is part of the LASlayers functionality of LAStools. It only encodes the few changes in classifications that we’ve made to the LAZ file without re-writing those parts that have not changed. Those interested may use laslayers to inspect the structure of the LAY file:

laslayers -i Velodyne001_above719.laz

We can apply the LAY file on-the-fly with the ‘-ilay’ option, for example, when running lasview:

lasview -i Velodyne001_above719.laz -ilay

To separate the drone-hit trajectory from the remaining points we run lassplit with the ‘-ilay’ option and request to split by classification with this command line:

lassplit -i Velodyne001_above719.laz -ilay ^
         -by_classification -digits 3 ^
         -olaz

This gives us two new files with the three-digit appendices ‘_001’ and ‘_012’. The latter one contains those points we marked as being part of the trajectory. We now want to use lasview to – visually – find a good moment in time where to split the trajectory into multiple flightlines. The lasinfo report tells us that the GPS time stamps are in the range from 418,519 to 418,602. In order to use the same trick as we did in our recent article on processing LiDAR data from the Hovermap Drone, where we mapped the GPS time to the intensity for querying it via lasview, we first need to subtract a large number from the GPS time stamps to bring them into a suitable range that fits the intensity field as done here.

lasview -i Velodyne001_above719_012.laz ^
        -translate_gps_time -418000 ^
        -bin_gps_time_into_intensity 1 ^
        -steps 5000

The ‘-steps 5000’ argument makes for a slower playback (press ‘p’ to repeat) to better follow the trajectory.

Hovering with the mouse over a point that – visually – seems to be one of the turning points of the drone and pressing ‘i’ on the keyboard shows an intensity value of 548 which corresponds to the GPS time stamp 418548, which we then use to split the LiDAR point cloud (without the trajectory) into two flightlines:

las2las -i Velodyne001_below719.laz ^
        -i Velodyne001_above719_001.laz ^
        -merged ^
        -keep_gps_time_below 418548 ^
        -o flightline1.laz

las2las -i Velodyne001_below719.laz ^
        -i Velodyne001_above719_001.laz ^
        -merged ^
        -keep_gps_time_above 418548 ^
        -o flightline2.laz

Next we use lasoverlap to check how well the LiDAR points from the flight out and the flight back align vertically. This tool computes the difference of the lowest points for each square foot covered by both flightlines. Differences of less than a quarter of a foot are mapped to white, differences of more than half a foot are mapped to saturated red or blue depending on whether the difference is positive or negative:

lasoverlap -i flightline1.laz ^
           -i flightline2.laz ^
           -faf ^
           -min_diff 0.25 -max_diff 0.50 -step 1 ^
           -odir quality -o overlap_025_050.png

We then use a new feature of the LAStools GUI (as of version 180429) to inspect larger red or blue areas more closely. We want to use lasmerge and clip out any region that looks suspect for closer examination with lasview. We start the tool in GUI mode with the ‘-gui’ switch and the two flightlines pre-loaded. Using the new PNG overlay roll-out on the left we add the ‘overlap_025_050_diff.png’ image from the quality folder created in the last step and clip out three areas.

lasmerge -i flightline1.laz -i flightline2.laz -gui

You can also clip out these three areas using the command lines below:

lasmerge -i flightline1.laz -i flightline2.laz ^
         -faf ^
         -inside 2172500 1623160 2172600 1623165 ^
         -o clip2500_3160_100x005.laz

lasmerge -i flightline1.laz -i flightline2.laz ^
         -faf ^
         -inside 2172450 1623450 2172550 1623455 ^
         -o clip2450_3450_100x005.laz

lasmerge -i flightline1.laz -i flightline2.laz ^
         -faf ^
         -inside 2172430 1623290 2172530 1623310 ^
         -o clip2430_3290_100x020.laz

A closer inspection of the three cut out slices explains the red and blue areas in the difference image created by lasoverlap. We find a small systematic error in two of the slices. In slice ‘clip2500_3160_100x005.laz‘ the green points from flightline 1 are on average slightly higher than the red points from flightline 2. Vice-versa in slice ‘clip2450_3450_100x005.laz‘ the green points from flightline 1 are on average slightly lower than the red points from flightline 2. However, the error is less than half a foot and only happens near the edges of the flightlines. Given that our surfaces are expected to be “fluffy” anyways (as is typical for Velodyne LiDAR systems), we accept these differences and continue processing.

The strong red and blue colors in the center of the difference image created by lasoverlap are easily explained by looking at slice ‘clip2430_3290_100x020.laz‘. The scanner was “looking” under a gazebo-like open roof structure from two different directions and therefore always seeing parts of the floor in one flightline that were obscured by the roof in the other.

While working with this data we have also implemented a new feature for lastrack that computes the 3D distance between LiDAR points and the trajectory and allows storing the result as an additional per point attribute using extra bytes. Those can then be visualized with lasgrid. Here is an example:

lastrack -i flightline1.laz ^
         -i flightline2.laz ^
         -track Velodyne001_above719_012.laz ^
         -store_xyz_range_as_extra_bytes ^
         -odix _xyz_range -olaz ^
         -cores 2

lasgrid -i flightline*_xyz_range.laz -merged ^
        -drop_attribute_below 0 1 ^
        -attribute0 -lowest ^
        -false -set_min_max 20 200 ^
        -o quality/closest_xyz_range_020ft_200ft.png

lasgrid -i flightline*_xyz_range.laz -merged ^
        -drop_attribute_below 0 1 ^
        -attribute0 -highest ^
        -false -set_min_max 30 300 ^
        -o quality/farthest_xyz_range_030ft_300ft.png

Below is the complete processing pipeline for creating a median ground model from the “fluffy” Velodyne LiDAR data that results in the hillshaded DTM shown here. The workflow is similar to those we have developed in earlier blog posts for Velodyne Puck based systems like the Hovermap and the Yellowscan.

Hillshaded DTM with a resolution of 1 foot generated via a median ground computation by the LAStools processing pipeline detailed below.

lastile -i flightline1.laz ^
        -i flightline2.laz ^
        -faf ^
        -tile_size 250 -buffer 25 -flag_as_withheld ^
        -odir tiles_raw -o somer.laz

lasnoise -i tiles_raw\*.laz ^
         -step_xy 2 -step_z 1 -isolated 9 ^
         -odir tiles_denoised -olaz ^
         -cores 4

lasthin -i tiles_denoised\*.laz ^
        -ignore_class 7 ^
        -step 1 -percentile 0.05 10 -classify_as 8 ^
        -odir tiles_thinned_1_foot -olaz ^
        -cores 4

lasthin -i tiles_thinned_1_foot\*.laz ^
        -ignore_class 7 ^
        -step 2 -percentile 0.05 10 -classify_as 8 ^
        -odir tiles_thinned_2_foot -olaz ^
        -cores 4

lasthin -i tiles_thinned_2_foot\*.laz ^
        -ignore_class 7 ^
        -step 4 -percentile 0.05 10 -classify_as 8 ^
        -odir tiles_thinned_4_foot -olaz ^
        -cores 4

lasthin -i tiles_thinned_4_foot\*.laz ^
        -ignore_class 7 ^
        -step 8 -percentile 0.05 10 -classify_as 8 ^
        -odir tiles_thinned_8_foot -olaz ^
        -cores 4

lasground -i tiles_thinned_8_foot\*.laz ^
          -ignore_class 1 7 ^
          -town -extra_fine ^
          -odir tiles_ground_lowest -olaz ^
          -cores 4

lasheight -i tiles_ground_lowest\*.laz ^
          -classify_between -0.05 0.5 6 ^
          -odir tiles_ground_thick -olaz ^
          -cores 4

lasthin -i tiles_ground_thick\*.laz ^
        -ignore_class 1 7 ^
        -step 1 -percentile 50 -classify_as 2 ^
        -odir tiles_ground_median -olaz ^
        -cores 4

las2dem -i tiles_ground_median\*.laz ^
        -keep_class 2 ^
        -step 1 -use_tile_bb ^
        -odir tiles_dtm -obil ^
        -cores 4

blast2dem -i tiles_dtm\*.bil -merged ^
          -step 1 -hillshade ^
          -o dtm_hillshaded.png

We thank Harris Aerial for sharing this LiDAR data set with us flown by their Carrier H4 Hybrid HE drone carrying a Snoopy A series LiDAR system from LidarUSA.