Smooth DTM from Drone LiDAR off Velodyne HDL-32E mounted on DJI M600 UAV

Recently we attempted to do a small LiDAR survey by drone for a pet project of our CEO in our “code and surf camp” here in Samara, Costa Rica. But surveying is difficult when you are a novice and we ran into a trajectory issue. The dramatic “wobbles” were entirely our fault, but fortunately our mistakes also led to something useful: We found some LAS export bugs. Our laser scanner was a Velodyne HDL-32E integrated with a NovAtel INS into the Snoopy Series A HD made by LiDARUSA. The system was carried by a DJI Matrice 600 (M600) drone. We processed the trajectory with NovAtel Inertial Explorer (here we made the “wobbles” error) and finally exported the LAS and LAZ files with ScanLook PC (version 1.0.182) from LiDARUSA.

While we were investigating our “wobbles” (which clearly were our mistake) we also found five different LAS export bugs in ScanLook PC that seem to have started sometime after version 1.0.171 and that will likely be fixed with version 1.0.193. Below is an illustration of a correct export from version 1.0.129 and a buggy export from version 1.0.182. In both instances you see the returns from one revolution of the Velodyne HDL-32E scanner head ordered by their GPS time stamps and colored to distinguish the 32 separate beams. In the buggy version, groups of around seven non-adjacent returns are given the same time stamp. This bug will only affect you if correct GPS time stamps are important for your subsequent LiDAR processing or if your client explicitly asked for ASPRS specification compliant LAS files. We plan to publish another blog post detailing how to find this GPS time stamping bug (and the other four bugs we found).

During the many interactions we had working through the “wobbles” and export bugs, we obtained a nice set of six flight lines from Seth Gulich of Bowman Consulting – a US company based in Stuart, Florida – who flew an identical “Snoopy Series A HD” system, also on a DJI Matrice 600 drone, at approximately 100 feet above ground level over a model airplane airport in Palm Beach, Florida. You can download the data set here. In the following we will check the flight line alignment of this data set and then process it into a smooth DTM. All command lines used are summarized in this text file.

First we use lasinfo to generate a report with a number of histograms for the on-the-fly merged flight lines and then use the z coordinate histogram from that report to set reasonable min/max values for the elevation color ramp of lasview:

lasinfo -i 0_strips_raw\Velodyne*.laz -merged ^
        -cd ^
        -histo z 1 ^
        -histo user_data 1 ^
        -histo point_source 1 ^
        -o 1_quality\Velodyne_merged_info.txt

lasview -i 0_strips_raw\Velodyne*.laz ^
        -points 10000000 ^
        -set_min_max 25 75

The lasinfo report shows no information about the coordinate reference system. We found out experimentally that the horizontal coordinates seem to be in EPSG code 2236 and that the vertical units are most likely US survey feet. The warnings you will see in the lasinfo report have to do with the fact that the double-precision bounding box stored in the LAS header was populated with numbers that have many more decimal digits than the coordinates in the file, which only have millifeet resolution as all three scale factors are 0.001 (meaning coordinates have three decimal digits). The information about which of the 32 lasers collected which point is stored in both the ‘user data’ and the ‘point source ID’ field, as is evident from the histograms in the lasinfo report. We need to be careful not to override both fields in later processing.
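
If the missing CRS information and the bounding-box warnings bother you, both can also be fixed in the files themselves. The sketch below shows what we would try first; note that the ‘-repair_bb’ switch of lasinfo, the exact CRS options of las2las, and the output folder name are assumptions on our part, so check the corresponding README files before running this on your data:

rem sketch only: '-repair_bb' (assumed option) recomputes the header bounding box
lasinfo -i 0_strips_raw\Velodyne*.laz ^
        -repair_bb

rem sketch only: stamp the guessed CRS and vertical units into new files
las2las -i 0_strips_raw\Velodyne*.laz ^
        -epsg 2236 -elevation_survey_feet ^
        -odir 0_strips_crs -olaz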

Next we use lasoverlap to check how well the LiDAR points from the flight out and the flight back align vertically. This tool computes the difference of the lowest points for each square foot covered by multiple flight lines. In both runs, differences of less than a quarter of a foot are mapped to white. Differences of more than one foot in the first run (more than half a foot in the second run) are mapped to saturated red or blue, depending on whether the difference is positive or negative:

lasoverlap -i 0_strips_raw\Velodyne*.laz ^
           -faf ^
           -min_diff 0.25 -max_diff 1.00 -step 1 ^
           -odir 1_quality -o overlap_025_100.png

lasoverlap -i 0_strips_raw\Velodyne*.laz ^
           -faf ^
           -min_diff 0.25 -max_diff 0.50 -step 1 ^
           -odir 1_quality -o overlap_025_050.png

We use a new feature of the LAStools GUI (as of version 180429) to inspect large red or blue areas more closely. With lasmerge we clip out regions that look suspect for closer examination with lasview. First we spatially index the flight lines to make this process faster. With the ‘-gui’ switch we start the tool in GUI mode with the flight lines already loaded. Using the new PNG overlay roll-out on the left we add the ‘overlap_025_050_diff.png’ image from the quality folder created in the last step and clip out three areas.

lasindex -i 0_strips_raw\Velodyne*.laz ^
         -tile_size 10 -maximum -100 ^
         -cores 3

lasmerge -i 0_strips_raw\Velodyne*.laz -gui

You can also clip out these three areas using the command lines below:

lasmerge -i 0_strips_raw\Velodyne*.laz ^
         -faf ^
         -inside_tile 939500 889860 100 ^
         -o 1_quality\939500_889860.laz

lasmerge -i 0_strips_raw\Velodyne*.laz ^
         -faf ^
         -inside_tile 940400 889620 100 ^
         -o 1_quality\940400_889620.laz

lasmerge -i 0_strips_raw\Velodyne*.laz ^
         -faf ^
         -inside_tile 940500 890180 100 ^
         -o 1_quality\940500_890180.laz

The reader may inspect the areas 939500_889860.laz, 940400_889620.laz, and 940500_890180.laz with lasview using profile views via hot key ‘x’ and switching back and forth between the points from different flight lines via hot keys ‘0’, ‘1’, ‘2’, ‘3’, … for individual flight lines and ‘a’ for all flight lines, as we have done in previous tutorials [1,2,3]. Using drop-lines or rise-lines via the pop-up menu gives you a sense of scale. Removing points that are horizontally too far from the trajectory with lastrack could be one strategy for reducing outliers. But as our surfaces are expected to be “fluffy” (because we have a Velodyne LiDAR system), we accept these flight line differences and continue processing.

Here is the complete LAStools processing pipeline for creating an average ground model from the set of six flight lines that results in the hillshaded DTM shown below. The workflow is similar to those we have developed in earlier blog posts for Velodyne Puck based systems like the Hovermap and the Yellowscan and in the other Snoopy tutorial. All command lines used are summarized in this text file.

Hillshaded DTM with half foot resolution generated via average ground computation with LAStools.

In the first step we lastile the six flight lines into 250 by 250 foot tiles with a 25 foot buffer while preserving the flight line information. The flight line information will be stored in the “point source ID” field of each point and therefore override the beam ID that is currently stored there. But the beam ID is also stored in the “user data” field, as the lasinfo report told us. We set all classifications to zero and add information about the horizontal coordinate reference system (EPSG code 2236) and the vertical units (US survey feet).

lastile -i 0_strips_raw\*.laz ^
        -faf ^
        -set_classification 0 ^
        -epsg 2236 -elevation_survey_feet ^
        -tile_size 250 -buffer 25 -flag_as_withheld ^
        -odir 2_tiles_raw -o pb.laz

On three cores in parallel we then lassort the points in the tiles into a space-filling curve order which will accelerate later operations.

lassort -i 2_tiles_raw\*.laz ^
        -odir 2_tiles_sorted -olaz ^
        -cores 3

Next we use lasthin to classify the point whose elevation is closest to the 5th elevation percentile among all points falling into its cell with classification code 8. We run lasthin multiple times, each time increasing the cell size from 1, 2, 4, 8 to 16 feet. We do this because we have requested the 5th elevation percentile to only be computed when there are at least 20 points in the cell. Percentiles are statistical measures and need a reasonable sample size to be stable. Because drone flights are very dense in the center and sparser at the edges, this increase in cell size ensures that we have a good selection of points classified with classification code 8 across the entire survey area.

lasthin -i 2_tiles_sorted\*.laz ^
        -step 1 -percentile 5 20 -classify_as 8 ^
        -odir 3_tiles_thinned_p05_step01 -olaz ^
        -cores 3

lasthin -i 3_tiles_thinned_p05_step01\*.laz ^
        -step 2 -percentile 5 20 -classify_as 8 ^
        -odir 3_tiles_thinned_p05_step02 -olaz ^
        -cores 3

lasthin -i 3_tiles_thinned_p05_step02\*.laz ^
        -step 4 -percentile 5 20 -classify_as 8 ^
        -odir 3_tiles_thinned_p05_step04 -olaz ^
        -cores 3

lasthin -i 3_tiles_thinned_p05_step04\*.laz ^
        -step 8 -percentile 5 20 -classify_as 8 ^
        -odir 3_tiles_thinned_p05_step08 -olaz ^
        -cores 3

lasthin -i 3_tiles_thinned_p05_step08\*.laz ^
        -step 16 -percentile 5 20 -classify_as 8 ^
        -odir 3_tiles_thinned_p05_step16 -olaz ^
        -cores 3

Then we let lasground_new run on only the points classified with classification code 8 (i.e. by ignoring the points still classified with code 0), which classifies them into ground (code 2) and non-ground (code 1).

lasground_new -i 3_tiles_thinned_p05_step16\*.laz ^
              -ignore_class 0 ^
              -town ^
              -odir 4_tiles_ground_low -olaz ^
              -cores 3

The ground points we have computed form somewhat of a lower envelope of the “fluffy” points of a Velodyne scanner. With lasheight we now draw all the points near the ground – namely those from 0.1 foot below to 0.4 foot above the ground – into a new classification code 6 that we term “thick ground”. The ‘-do_not_store_in_user_data’ switch prevents the default behavior of lasheight, which would otherwise override the beam ID information stored in the ‘user data’ field with an approximate height value.

lasheight -i 4_tiles_ground_low\*.laz ^
          -classify_between -0.1 0.4 6 ^
          -do_not_store_in_user_data ^
          -odir 4_tiles_ground_thick -olaz ^
          -cores 3

A few close-up shots of the resulting “thick ground” are shown in the picture gallery below.

We then use lasgrid to average the (orange) thick ground points onto a regular grid with a cell spacing of half a foot. By adding the ‘-use_tile_bb’ switch we make sure the tile buffers are not gridded.

lasgrid -i 4_tiles_ground_thick\*.laz ^
        -keep_class 6 ^
        -step 0.5 -average ^
        -use_tile_bb ^
        -odir 5_tiles_gridded_mean_ground -olaz ^
        -cores 3

Finally we use blast2dem to merge all the averaged ground point grids into one file, interpolate across open areas without ground points, and compute the hillshaded DTM shown above. All command lines used are summarized in this text file.

blast2dem -i 5_tiles_gridded_mean_ground\*.laz ^
          -merged ^
          -step 0.5 ^
          -hillshade ^
          -o dtm.png

We thank Seth Gulich of Bowman Consulting for sharing this LiDAR data set with us. It was flown with a DJI Matrice 600 drone carrying a “Snoopy Series A HD” LiDAR system from LiDARUSA.

Using Open LiDAR to Remove Low Noise from Photogrammetric UAV Point Clouds

We collected drone imagery of the restored “Kantsi” tavern during the lunch stop of the UAV Tartu summer school field trip (actually the organizer Marko Kohv did that, as I was busy introducing students to SUP boarding). With Agisoft PhotoScan we then processed the images into point clouds below the deck of the historical “Jõmmu” barge on the way home (actually Marko did that, because I was busy enjoying the view of the wetlands in the afternoon sun). The resulting data set with 7,855,699 points is shown below and can be downloaded here.

7,855,699 points produced with Agisoft PhotoScan

Generating points using photogrammetric techniques in scenes containing water bodies tends to be problematic, as dense blotches of noise points above and below the water surface are common, as you can see in the picture below. Especially the low points are troublesome as they adversely affect ground classification, which results in poor Digital Terrain Models (DTMs).

Clusters of low noise points nearly 2 meters below the actual surface in water areas.

In a previous article we have described a LAStools workflow that can remove excessive low noise. In this article here we use external information about the topography of the area to clean our photogrammetry points. How convenient that the Estonian Land Board has just released their entire LiDAR archives as open data.

Following these instructions here you can download the available open LiDAR for this area, which has the map sheet index 475681. Alternatively you can download the four currently available data sets here flown in spring 2010, in summer 2013, in spring 2014, and in summer 2017. In the following we will use the one flown in spring 2014.

We can view both data sets simultaneously in lasview. By adding ‘-faf’ to the command-line we can switch back and forth between the two data sets by pressing ‘0’ and ‘1’.

lasview -i Kantsi.laz ^
        -i 475681_2014_tava.laz ^
        -points 10000000 ^
        -faf

We first cut the 1 km by 1 km LiDAR tile down to a 250 m by 250 m tile that nicely surrounds our photogrammetric point set using the following las2las command line:

las2las -i 475681_2014_tava.laz ^
        -inside_tile 681485 6475375 250 ^
        -o LiDAR_Kantsi.laz

lasview -i Kantsi.laz ^
        -i LiDAR_Kantsi.laz ^
        -points 10000000 ^
        -faf

Scrutinizing the two data sets we quickly find that there is a misalignment between the dense imagery-derived and the comparatively sparse LiDAR point clouds. With lasview we investigate the difference between the two point clouds by hovering over a point from one point cloud and pressing <i> and then hovering over a somewhat corresponding point from the other point cloud and pressing <SHIFT>+<i>. We measure displacements of around 2 meters vertically and of around 3 to 3.5 meters in total.

Before we can use the LiDAR points to remove the low noise from the photogrammetric points we must align them properly. For simple translation errors this can be done with a new feature that was recently added to lasview. Make sure to download the latest version (190404 or newer) of LAStools to follow the steps shown in the image sequence below.

las2las -i Kantsi.laz ^
        -translate_xyz 0.89 -1.90 2.51 ^
        -o Kantsi_shifted.laz

lasview -i Kantsi_shifted.laz ^
        -i LiDAR_Kantsi.laz ^
        -points 10000000 ^
        -faf

The result looks good in the sense that both sides of the photogrammetric roof are reasonably well aligned with the LiDAR. But there is still a shift along the roof so we repeat the same thing once more as shown in the next image sequence:

We use a suitable displacement vector and apply it to the photogrammetry points, shifting them again:

las2las -i Kantsi_shifted.laz ^
        -translate_xyz -1.98 -0.95 0.01 ^
        -o Kantsi_shifted_again.laz

lasview -i Kantsi_shifted_again.laz ^
        -i LiDAR_Kantsi.laz ^
        -points 10000000 ^
        -faf

The result is still not perfect as there is also some rotational error, and you may find other software such as CloudCompare better suited to aligning the two point clouds, but for this exercise the alignment shall suffice. Below you see the match between the photogrammetry points and the LiDAR TIN before and after shifting the photogrammetry points with the two (interactively determined) displacement vectors.

The final steps of this exercise use las2dem and the already ground-classified LiDAR to compute a 1 meter DTM, which we then use as input to lasheight. We classify the photogrammetry points using their height above this set of ground points with 1 meter spacing: points that are 40 centimeters or more below the LiDAR DTM are classified as noise (7), and points that are between 40 centimeters below and 1 meter above the LiDAR DTM are classified into a temporary class (here we choose 8) that holds those points that could potentially be ground points. This will help, for example, with subsequent ground classification, as large parts of the photogrammetry points – namely those on top of buildings and in higher vegetation – can be ignored from the start by a ground classification algorithm such as lasground.

las2dem -i LiDAR_Kantsi.laz ^
        -keep_class 2 ^
        -kill 1000 ^
        -o LiDAR_Kantsi_dtm_1m.bil
lasheight -i Kantsi_shifted_again.laz ^
          -ground_points LiDAR_Kantsi_dtm_1m.bil ^
          -classify_below -0.4 7 ^
          -classify_between -0.4 1.0 8 ^
          -o Kantsi_cleaned.laz

Below are the results we achieved after “roughly” aligning the two point clouds with some new lasview tricks and then using the LiDAR elevations to classify the photogrammetry points into “low noise”, “potential ground”, and “all else”.

We thank the Estonian Land Board for providing open data with a permissive license. Special thanks also go to the organizers of the UAV Summer School in Tartu, Estonia and the European Regional Development Fund for funding this event. Especially fun was the fabulous excursion to the Emajõe-Suursoo Nature Reserve and through to Lake Peipus aboard, overboard and aboveboard the historical barge “Jõmmu”. If you look carefully you can also find the barge in the photogrammetry point cloud. The photogrammetry data used here was acquired during our lunch stop.

Fun aboard and overboard the historical barge “Jõmmu”.

LASmoons: David Bandrowski

David Bandrowski (recipient of three LASmoons)
Yurok Tribe
Native American Indian Tribe in Northern California, USA

Background:
Wild spring-run Chinook salmon populations on the South Fork Trinity River in Northern California are near the brink of extinction. The South Fork Trinity River is the most remote and the largest un-dammed river in the State of California, federally designated as a wild and scenic river, and is a keystone watershed within the Klamath River basin supporting one of the last remaining populations of wild spring-run Chinook salmon. Ecosystem restoration is urgently needed to improve watershed health in the face of climate change, land use, and water diversions. This drastic decline of the wild salmon species motivated the Yurok Tribe and its partners to take action and implement this project as a last opportunity to save this species before extinction. Spring-run Chinook are extremely important for the Yurok people culturally, spiritually, and for a subsistence food source.

sample of the available photogrammetry data

Goal:
Due to budgetary constraints, airborne LiDAR is not available; therefore the Yurok Tribe has been using aerial drones and Structure from Motion (SfM) photogrammetry to develop DTMs that can be used in determining available salmon habitat and to develop prioritized locations for restoration. The watershed has extremely heavy vegetation, and obtaining bare-earth surfaces for hydraulic modeling is difficult without the proper tools. The goal is to use LAStools to further restoration science and create efficient workflows for DTM development.

Data:
+ length of river mapped: 8 kilometers
+ number of points: 150,856,819
+ horizontal datum: North American Datum 83 – California State Plane – Zone 1 (usft)
+ vertical datum: North American Vertical Datum 88

LAStools processing:
1) data quality checking [lasinfo, lasview, lasgrid]
2) classify ground and non-ground points [lasground and lasground_new]
3) remove low and high outliers [lasheight, lasnoise]
4) create DTM tiles at appropriate resolution [las2dem]
5) create a normalized point cloud [lasheight]
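
To give a rough idea of what steps 2) and 5) might look like on the command line, here is a minimal sketch; the ‘-nature’ preset, the ‘-replace_z’ normalization, and the folder names are our own assumptions for heavily vegetated terrain, not David’s actual settings:

rem sketch only: ground classification with a preset suited to natural terrain (assumed choice)
lasground_new -i tiles_raw\*.laz ^
              -nature ^
              -odir tiles_ground -olaz ^
              -cores 4

rem sketch only: replace elevations with heights above ground to normalize the point cloud
lasheight -i tiles_ground\*.laz ^
          -replace_z ^
          -odir tiles_normalized -olaz ^
          -cores 4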

Digital Pothole Removal: Clean Road Surface from Noisy Pix4D Point Cloud

How to generate a clean Digital Terrain Model (DTM) from point clouds that were generated with the image matching techniques implemented in various photogrammetry software packages like those from Pix4D, AgiSoft, nframes, DroneDeploy and others has become an ever more frequent inquiry. There are many other blog posts on the topic that you can peruse as well [1,2,3,4,5,6,7,8]. In the following we go step by step through the process of removing low noise from a high-density point cloud that was generated with Pix4D software. A composite of the resulting DTM and DSM is shown below.

Final DSM and DTM created with LAStools for a photogrammetric point cloud of a road generated by Pix4D.

After downloading the data it is useful to familiarize oneself with the number of points, the density of points and their geo-location. This can be done with lasview, lasinfo, and lasgrid using the command lines shown below. There are around 19 million points in the file and their density averages around 2300 points per square meter. Because the RGB values have a 16 bit range (as evident in the lasinfo report) we need to add the option ‘-scale_rgb_down’ to the command line when producing the RGB raster with lasgrid.

lasview -i 0_photogrammetry\densified_point_cloud.laz

lasinfo -i 0_photogrammetry\densified_point_cloud.laz ^
        -cd ^
        -o 1_quality\densified_point_cloud.txt

lasgrid -i 0_photogrammetry\densified_point_cloud.laz ^
        -scale_rgb_down ^
        -step 0.10 ^
        -rgb ^
        -fill 1 ^
        -o 1_quality\densified_point_cloud.png

The first step is to use lastile and create smaller, buffered tiles for these 19 million photogrammetry points. We use a tile size of 100 meters, request a buffer of 10 meters around every tile, and flag the buffer points as withheld so they can easily be dropped later. We also make sure that all classification codes are reset to 0.

lastile -i 0_photogrammetry\points.laz ^
        -set_classification 0 ^
        -tile_size 100 -buffer 10 -flag_as_withheld ^
        -o 2_tiles_raw\seoul.laz -olaz

Next we run lassort as a pre-processing step that rearranges the points into a more coherent spatial order, which often accelerates subsequent processing steps.

lassort -i 2_tiles_raw\seoul_*.laz ^
        -odir 3_tiles_temp0 -olaz ^
        -cores 4

Next we use a sequence of four modules, namely lasthin, lasnoise, lasground, and lasheight, with fine-tuned parameters to remove the low noise points that are typical for point clouds generated from imagery by photogrammetry software. A typical example of such noise points is shown in the image below, generated with this call to lasview:

lasview -i 3_tiles_temp0\seoul_210400_542900.laz ^
        -inside 210406 542894 210421 542921 ^
        -points 20000000 ^
        -kamera 0 -95 90 0 -0.3 1.6 ^
        -point_size 4

Ground surface noise (exaggerated by pressing <]> in lasview which doubles the scale in z).

As always, the idea is to construct a surface that is close to the ground but always above the noise so that it can be used to declare all points beneath it as noise. Below is a processing pipeline whose parameters work well for this data and that you can fine tune for the point density and the noise profile of your own data.

First we use lasthin to give the point that is closest to the 70th percentile in elevation within every 20 cm by 20 cm cell the classification code 8. As statistics like percentiles are only stable for a sufficient number of points, we only do this for cells that contain 25 points or more. Given that we have an average of 2300 points per square meter, this should easily be the case for all relevant cells.

lasthin -i 3_tiles_temp0\seoul_*.laz ^
        -step 0.20 ^
        -percentile 70 25 ^
        -classify_as 8 ^
        -odir 3_tiles_temp1 -olaz ^
        -cores 4

Then we run lasnoise only on the points with classification code 8 (by ignoring those with classification code 0) and reclassify all “overly isolated” points with code 9. The check for isolation uses cells of size 20 cm by 20 cm by 5 cm and reclassifies the points in the center cell when the surrounding neighborhood of 27 cells has only 3 or fewer points in total. Changing the parameters ‘-step_xy 0.20 -step_z 0.05 -isolated 3’ will remove isolated points more or less aggressively.

lasnoise -i 3_tiles_temp1\seoul_*.laz ^
         -ignore_class 0 ^
         -step_xy 0.20 -step_z 0.05 -isolated 3 ^
         -classify_as 9 ^
         -odir 3_tiles_temp2 -olaz ^
         -cores 4

Next we use lasground to ground-classify only the surviving points (that still have classification code 8) by ignoring those with classification codes 0 or 9. This sets their classification code to either ground (2) or non-ground (1). The temporary surface defined by the resulting ground points will be used to classify low points as noise in the next step.

lasground -i 3_tiles_temp2\seoul_*.laz ^
          -ignore_class 0 9 ^
          -town -ultra_fine -bulge 0.1 ^
          -odir 3_tiles_temp3 -olaz ^
          -cores 4

Then we use lasheight to classify all points that are 2.5 cm or more below the triangulated surface of the temporary ground points as noise (7) and all others as unclassified (1).

lasheight -i 3_tiles_temp3\seoul_*.laz ^
          -classify_below -0.025 7 ^
          -classify_above -0.025 1 ^
          -odir 4_tiles_denoised -olaz ^
          -cores 4

The progress of each step is illustrated visually in the two image sequences shown below.

Now that all noise points are classified we start a standard processing pipeline, but always ignore the low noise points that are now classified with classification code 7.

The processing steps below create a 10 cm DTM raster. We first use lasthin to classify the lowest non-noise point per 10 cm by 10 cm cell. Considering only those lowest points we use lasground with options ‘-town’, ‘-extra_fine’, ‘-bulge 0.05’, and ‘-spike 0.05’. Using las2dem the resulting ground points are interpolated into a TIN and rasterized into a 10 cm DTM cutting out only the center 100 meter by 100 meter tile. We store the DTM raster as a gridded LAZ for maximal compression and finally merge these gridded LAZ files to create a hillshaded raster in PNG format with blast2dem.

lasthin -i 4_tiles_denoised\seoul_*.laz ^
        -ignore_class 7 ^
        -step 0.10 ^
        -lowest ^
        -classify_as 8 ^
        -odir 5_tiles_thinned_lowest -olaz ^
        -cores 4

lasground -i 5_tiles_thinned_lowest\seoul_*.laz ^
          -ignore_class 1 7 ^
          -town -extra_fine ^
          -bulge 0.05 -spike 0.05 ^
          -odir 6_tiles_ground -olaz ^
          -cores 4

las2dem -i 6_tiles_ground\seoul_*.laz ^
        -keep_class 2 ^
        -step 0.10 ^
        -use_tile_bb ^
        -odir 7_tiles_dtm -olaz ^
        -cores 4

blast2dem -i 7_tiles_dtm\seoul_*.laz -merged ^
          -hillshade ^
          -step 0.10 ^
          -o dtm_hillshaded.png

The processing steps below create a 10 cm DSM raster. We first use lasthin to classify the highest non-noise point per 10 cm by 10 cm cell. With las2dem the highest points are interpolated into a TIN and rasterized into a 10 cm DSM cutting out only the center 100 meter by 100 meter tile. Again we store the raster as gridded LAZ for maximal compression and merge these files to create a hillshaded raster in PNG format with blast2dem.

lasthin -i 4_tiles_denoised\seoul_*.laz ^
        -ignore_class 7 ^
        -step 0.10 ^
        -highest ^
        -classify_as 8 ^
        -odir 8_tiles_thinned_highest -olaz ^
        -cores 4

las2dem -i 8_tiles_thinned_highest\seoul_*.laz ^
        -keep_class 8 ^
        -step 0.10 ^
        -use_tile_bb ^
        -odir 9_tiles_dsm -olaz ^
        -cores 4

blast2dem -i 9_tiles_dsm\seoul_*.laz -merged ^
          -hillshade ^
          -step 0.10 ^
          -o dsm_hillshaded.png

The final result is below. The entire script is linked here. Simply download it, modify it as needed, and try it on this data or on your own data.

Scripting LAStools to Create a Clean DTM from Noisy Photogrammetric Point Cloud

A recent inquiry by Drone Deploy in the LAStools user forum gave us access to a nice photogrammetric point cloud for the village of Fillongley in the North Warwickshire district of England. They voted “Leave” with a whopping 66.9% according to the EU referendum results by the BBC. Before we say “Good riddance, Fillongley!” we EU-abuse this little village one last time and remove all their low noise points to create a nice Digital Terrain Model (DTM). The final result is shown below.

Side by side comparison of DTM and DSM generated with LAStools from photogrammetric point cloud by Drone Deploy.

After downloading the data it is useful to familiarize yourself with the number of points, the point density, and their geo-location, which can be done with lasview, lasinfo, and lasgrid using the command lines shown below. There are around 50 million points and their density averages close to 70 points per square meter.

lasview -i 0_photogrammetry\points.laz

lasinfo -i 0_photogrammetry\points.laz ^
        -cd ^
        -o 1_quality\fillongley.txt

lasgrid -i 0_photogrammetry\points.laz ^
        -step 0.50 ^
        -rgb ^
        -fill 1 ^
        -o 1_quality\fillongley.png

The first step is to use lastile and create smaller, buffered tiles for these 50 million photogrammetry points. We use a tile size of 200 meters, request a buffer of 25 meters around every tile, and flag the buffer points as withheld so they can easily be dropped later.

lastile -i 0_photogrammetry\points.laz ^
        -tile_size 200 -buffer 25 -flag_as_withheld ^
        -o 2_tiles_raw\fillongley.laz -olaz

Next we use a sequence of four modules, namely lasthin, lasnoise, lasground, and lasheight, with fine-tuned parameters to remove the low noise points that are typical for point clouds generated from imagery by photogrammetry software. A typical example of such noise points is shown in the image below.

lasview -i 2_tiles_raw\fillongley_596000_5815800.laz ^
        -inside 596050 5815775 596150 5815825 ^
        -kamera 0 -89 -1.75 0 0 1.5 ^
        -point_size 3

Clumps of low noise points typical for photogrammetry point clouds.

The idea to identify those clumps of noise is to construct a surface that is sufficiently close to the ground but always above the noise so that it can be used to classify all points beneath it as noise. However, preserving true ground features without latching onto low noise points often requires several iterations of fine-tuning the parameters. We did this interactively by repeatedly running the processing on only two representative tiles until a desired outcome was achieved.

First we use lasthin to give the point that is closest to the 20th percentile in elevation within every 90 cm by 90 cm cell the classification code 8 (but only for cells containing at least 20 points). Choosing larger step sizes or higher percentiles resulted in missing ground features. Choosing smaller step sizes or lower percentiles resulted in low noise becoming part of the final ground model.

lasthin -i 2_tiles_raw\fillongley_*.laz ^
        -step 0.90 ^
        -percentile 20 20 ^
        -classify_as 8 ^
        -odir 3_tiles_temp1 -olaz ^
        -cores 4

Then we run lasnoise only on the points with classification code 8 (by ignoring those with classification code 0) and reclassify all “overly isolated” points with code 12. The check for isolation uses cells of size 200 cm by 200 cm by 50 cm and reclassifies the points in the center cell when the surrounding neighborhood of 27 cells has only 3 or fewer points in total. Changing the parameters ‘-step_xy 2.00 -step_z 0.50 -isolated 3’ will remove noise more or less aggressively.

lasnoise -i 3_tiles_temp1\fillongley_*.laz ^
         -ignore_class 0 ^
         -step_xy 2.00 -step_z 0.50 -isolated 3 ^
         -classify_as 12 ^
         -odir 3_tiles_temp2 -olaz ^
         -cores 4

Next we use lasground to ground-classify only the surviving points (that still have classification code 8) by ignoring those with classification codes 0 or 12. This sets their classification code to either ground (2) or non-ground (1). The temporary surface defined by the resulting ground points will be used to classify low points as noise in the next step.

lasground -i 3_tiles_temp2\fillongley_*.laz ^
          -ignore_class 0 12 ^
          -town -ultra_fine ^
          -odir 3_tiles_temp3 -olaz ^
          -cores 4

Then we use lasheight to classify all points that are 20 cm or more below the triangulated surface of the temporary ground points as noise (7) and all others as unclassified (1).

lasheight -i 3_tiles_temp3\fillongley_*.laz ^
          -classify_below -0.20 7 ^
          -classify_above -0.20 1 ^
          -odir 4_tiles_denoised -olaz ^
          -cores 4

The progress of each step is illustrated visually in the two image sequences shown below.

Now that all noise points are classified we start a standard processing pipeline, but always ignore the noise points that are now classified with classification code 7.

The processing steps below create a 25 cm DTM raster. We first use lasthin to classify the lowest non-noise point per 25 cm by 25 cm cell. Considering only those lowest points we use lasground with options ‘-town’, ‘-extra_fine’, and ‘-bulge 0.1’. Using las2dem the resulting ground points are interpolated into a TIN and rasterized into a 25 cm DTM, cutting out only the center 200 meter by 200 meter tile. We store the DTM raster as gridded LAZ for maximal compression and finally merge these gridded LAZ files to create a hillshaded raster in PNG format with blast2dem.

lasthin -i 4_tiles_denoised\fillongley_*.laz ^
        -ignore_class 7 ^
        -step 0.25 ^
        -lowest ^
        -classify_as 8 ^
        -odir 5_tiles_thinned_lowest -olaz ^
        -cores 4

lasground -i 5_tiles_thinned_lowest\fillongley_*.laz ^
          -ignore_class 1 7 ^
          -town -extra_fine -bulge 0.1 ^
          -odir 6_tiles_ground -olaz ^
          -cores 4

las2dem -i 6_tiles_ground\fillongley_*.laz ^
        -keep_class 2 ^
        -step 0.25 ^
        -use_tile_bb ^
        -odir 7_tiles_dtm -olaz ^
        -cores 4

blast2dem -i 7_tiles_dtm\fillongley_*.laz -merged ^
          -hillshade ^
          -step 0.25 ^
          -o dtm_hillshaded.png

The processing steps below create a 25 cm DSM raster. We first use lasthin to classify the highest non-noise point per 25 cm by 25 cm cell. With las2dem the highest points are interpolated into a TIN and rasterized into a 25 cm DSM cutting out only the center 200 meter by 200 meter tile. Again we store the raster as gridded LAZ for maximal compression and merge these files to create a hillshaded raster in PNG format with blast2dem.

lasthin -i 4_tiles_denoised\fillongley_*.laz ^
        -ignore_class 7 ^
        -step 0.25 ^
        -highest ^
        -classify_as 8 ^
        -odir 8_tiles_thinned_highest -olaz ^
        -cores 4

las2dem -i 8_tiles_thinned_highest\fillongley_*.laz ^
        -keep_class 8 ^
        -step 0.25 ^
        -use_tile_bb ^
        -odir 9_tiles_dsm -olaz ^
        -cores 4

blast2dem -i 9_tiles_dsm\fillongley_*.laz -merged ^
          -hillshade ^
          -step 0.25 ^
          -o dsm_hillshaded.png

The final result is below. The entire script is linked here. Simply download it, modify it as needed, and try it on your data.
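
For those who would rather wire the commands above into their own batch file than modify the linked script, the skeleton below shows one way to parameterize the tiling and the raster step; the variable names and this particular wrapper are our own sketch, not the script linked above:

rem sketch of a batch wrapper (our own, not the linked script)
set TILE_SIZE=200
set BUFFER=25
set STEP=0.25

lastile -i 0_photogrammetry\points.laz ^
        -tile_size %TILE_SIZE% -buffer %BUFFER% -flag_as_withheld ^
        -o 2_tiles_raw\fillongley.laz -olaz

lasthin -i 2_tiles_raw\fillongley_*.laz ^
        -step 0.90 -percentile 20 20 -classify_as 8 ^
        -odir 3_tiles_temp1 -olaz ^
        -cores 4

rem ... the remaining lasnoise, lasground, lasheight, las2dem and blast2dem calls
rem follow exactly as above, with %STEP% wherever 0.25 appears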

 

LASmoons: Maria Kampouri

Maria Kampouri (recipient of three LASmoons)
Remote Sensing Laboratory, School of Rural & Surveying Engineering
National Technical University of Athens, GREECE

Background:
The Aralar Natural Park, famous for its stunning landscapes, is located in the southeast of the province of Gipuzkoa, sharing a border with the neighboring province of Navarre. Inside the park there are nature reserves of exceptional importance, such as beech woods, a large number of yew trees, very singular species of flora and fauna, and areas of exceptional geological interest. Griffon vultures, Egyptian vultures, golden eagles and even bearded vultures (also known as lammergeier) can be seen flying over this area. European minks and Pyrenean desmans can be found in the streams and rivers that descend from the mountain tops.

The concept of biodiversity is based on inter- and intra-species genetic variation and has been evolving over the past 25 years. The importance of mapping biodiversity in order to plan its conservation, as well as identifying patterns in endemism and biodiversity hot-spots, has been a pillar of EU and global environmental policy and legislation. The coupling of remote sensing and field data can increase the reliability, periodicity and reproducibility of ecosystem process and biodiversity monitoring, leading to an increasing interest in environmental monitoring using data for the same areas over time. Natural processes and complexity are best explored by observing ecosystems or landscapes through scale alteration, using spatial analysis tools such as LAStools.

DTM generated with restricted version of las2dem above point limits

Goal:
The aim of this study is to investigate the potential use of LiDAR data for the identification and determination of forest patches of particular interest, with respect to ecosystem dynamics and biodiversity and to produce a relevant biodiversity map, based on Simpson’s Diversity Index for Aralar Natural Park.

Data:
+ approximately 123 km^2 of LiDAR in 1km x 1km LAS tiles
+ Average point density: 2 pts/m^2
+ Spatial reference system: ETRS89 UTM zone 30N with elevations on the EGM08 geoid. These files were obtained from the LiDAR flight carried out in 2008 by the Provincial Council of Gipuzkoa and from the LiDAR flights of the Basque Government.

LAStools processing:
1) data quality checking [lasinfo, lasoverlap, lasgrid, lasreturn]
2) classify ground and non-ground points [lasground]
3) remove low and high outliers [lasheight, lasnoise]
4) identify buildings within the study area [lasclassify]
5) create DTM tiles with 0.5 step in ‘.bil’ format [las2dem]
6) create DSM tiles with 0.5 step in ‘.bil’ format [las2dem]
7) create a normalized point cloud [lasheight]
8) create a highest-return canopy height model (CHM) [lasthin, las2dem]
9) create a pit-free CHM with the spike-free algorithm [las2dem]
10) create various rasters with forest metrics [lascanopy]

The generated elevation and forest metrics rasters are then combined with satellite data to create a biodiversity map, using Simpson’s Diversity Index.
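
As an illustration of what step 10) could look like, a lascanopy call along the lines sketched below would compute canopy cover, canopy density, mean and standard deviation of heights, and two height percentiles on a 20 m grid from the height-normalized tiles of step 7); the choice of metrics, the 1.37 m height cutoff, and the folder names are our assumptions, not part of Maria’s plan:

rem sketch only: forest metrics from height-normalized points (assumed parameters)
lascanopy -i tiles_normalized\*.laz ^
          -step 20 -height_cutoff 1.37 ^
          -cov -dns -avg -std -p 50 95 ^
          -odir rasters_metrics -obil ^
          -cores 4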

Complete LiDAR Processing Pipeline: from raw Flightlines to final Products

This tutorial serves as an example for a complete end-to-end workflow that starts with raw LiDAR flightlines (as they may be delivered by a vendor) and ends with final classified LiDAR tiles and derived products such as raster DTM, DSM, and SHP files with contours, building footprint and vegetation layers. The three example flightlines we are using here were flown in Ayutthaya, Thailand with a RIEGL LMS-Q680i LiDAR scanner by Asian Aerospace Services, who are based at the Don Mueang airport in Bangkok from where they serve South-East Asia and beyond. You can download them here:

Quality Checking

The minimal quality checks consist of generating textual reports (lasinfo & lasvalidate), inspecting the data visually (lasview), making sure alignment and overlap between flightlines fulfill expectations (lasoverlap), and measuring pulse density per square meter (lasgrid). Additional checks for point replication (lasduplicate), completeness of all returns per pulse (lasreturn), and validation against external ground control points (lascontrol) may also be performed.

lasinfo -i Ayutthaya\strips_raw\*.laz ^
        -cd ^
        -histo z 5 ^
        -histo intensity 64 ^
        -odir Ayutthaya\quality -odix _info -otxt ^
        -cores 3

lasvalidate -i Ayutthaya\strips_raw\*.laz ^
            -o Ayutthaya\quality\validate.xml

The lasinfo report generated with the command line shown computes the average density for each flightline and also generates two histograms, one for the z coordinate with a bin size of 5 meters and one for the intensity with a bin size of 64. The resulting textual descriptions are output into the specified quality folder with the appendix ‘_info’ added to the original file name. Perusing these reports tells you that there are up to 7 returns per pulse, that the average pulse density per flightline is between 7.1 and 7.9 shots per square meter, that the point source IDs of the points are already populated correctly, that there are isolated points far above and far below the scanned area, and that the intensity values range from 0 to 1023 with the majority being below 400. The warnings in the lasinfo and the lasvalidate reports about the presence of return numbers 6 and 7 have to do with the history of the LAS format and can safely be ignored.
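
If external ground control points had been provided with this delivery, the additional check with lascontrol mentioned above could look roughly like the sketch below; we do not have control points for this data set, and both the file name and the parse string are purely illustrative, so consult the lascontrol README for the exact syntax:

rem sketch only: compare LiDAR elevations against surveyed control points (assumed options)
lascontrol -i Ayutthaya\strips_raw\*.laz -merged ^
           -cp ground_control.csv -parse ssxyz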

lasoverlap -i Ayutthaya\strips_raw\*.laz ^
           -files_are_flightlines ^
           -min_diff 0.1 -max_diff 0.3 ^
           -odir Ayutthaya\quality -o overlap.png

This results in two color illustrations. One image shows the flightline overlap with blue indicating one flightline, turquoise indicating two, and yellow indicating three flightlines. Note that wet areas (rivers, lakes, rice paddies, …) without LiDAR returns affect this visualization. The other image shows how well overlapping flightlines align. Their vertical difference is color coded, with white meaning less than 10 cm of error while saturated red and blue indicate areas with more than 30 cm of positive or negative difference.

One-pixel-wide red and blue lines along building edges and speckles of red and blue in vegetated areas are normal. We need to look out for large systematic errors where terrain features or flightline outlines become visible. If you click yourself through this photo album you will eventually see typical examples (make sure to read the comments too). One area slightly below the center looks suspicious. We load the PNG into the GUI to pick this area for closer inspection with lasview.

lasview -i Ayutthaya\strips_raw\*.laz -gui

Why these flightline differences exist and whether they are detrimental to your purpose are questions that you will have to explore further. For our purpose this isolated difference was noted but will not prevent us from proceeding further. Next we want to investigate the pulse density. We do this with lasgrid. We know that the average last return density per flightline is between 7.1 and 7.9 shots per square meter. We set up the false color map for lasgrid such that it is blue when the last return density drops to 4 shots (or less) per square meter and red when the last return density reaches 8 shots (or more).

lasgrid -i Ayutthaya\strips_raw\*.laz -merged ^
        -keep_last ^
        -step 2 -density ^
        -false -set_min_max 4 8 ^
        -odir Ayutthaya\quality -o density_4ppm_8ppm.png

The last return density per square meter mapped to a color: blue is 4 or less, red is 8 or more.

The last return density image clearly shows how the density increases to over 10 pulses per square meter in all areas of flightline overlap. However, as there are large parts covered by only one flightline, their density is the one that should be considered. We note great variations in density along the flightlines. Those have to do with aircraft movement and in particular with the pitch. When the nose of the plane goes up even slightly, the gigantic “fan of laser pulses” (that can be thought of as being rigidly attached at the bottom perpendicular to the aircraft flight direction) moves forward faster on the ground far below and therefore covers it with fewer shots per square meter. Conversely, when the nose of the plane goes down, the spacing between scan lines far below the plane is condensed so that the density increases. Inclement weather and the resulting airplane pitch turbulence can have a big impact on how regular the laser pulse spacing is on the ground. Read this article for more on LiDAR pulse density and spacing.

LiDAR Preparation

When you have airborne LiDAR in flightlines the first step is to tile the data into square tiles that are typically 1000 by 1000 or – for higher density surveys – 500 by 500 meters in size. This can be done with lastile. We also add a buffer of 30 meters to each tile. Why buffers are important for tile-based processing is explained here. We choose 30 meters as this is larger than any building we expect in this area and slightly larger than the ‘-step’ size we use later when classifying the points into ground and non-ground points with lasground.

lastile -i Ayutthaya\strips_raw\*.laz ^
        -tile_size 500 -buffer 30 -flag_as_withheld ^
        -odir Ayutthaya\tiles_raw -o ayu.laz

NOTE: Usually you will have to add ‘-files_are_flightlines’ or ‘-apply_file_source_ID’ to the lastile command shown above in order to preserve the information about which points are from which flightline. We do not have to do this here, as is evident from the lasinfo reports we generated earlier. Not only is the file source ID in the LAS header correctly set to 2, 3, or 4, reflecting the intended flightline numbering evident from the file names, but the point source ID of each point is also already set to the correct value 2, 3, or 4. For more info see this or this discussion from the LAStools user forum.
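
Had the flightline information not been stored already, the tiling call would, for example, look like the sketch below; it is identical to the lastile command above except for the added switch:

rem same call as above, with the flightline flag added
lastile -i Ayutthaya\strips_raw\*.laz ^
        -files_are_flightlines ^
        -tile_size 500 -buffer 30 -flag_as_withheld ^
        -odir Ayutthaya\tiles_raw -o ayu.laz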

Next we classify isolated points that are far from most other points with lasnoise into the (default) classification code 7. See the README file for the meaning of the parameters and play around with different settings to get a feel for how to make this process more or less aggressive.

lasnoise -i Ayutthaya\tiles_raw\ayu*.laz ^
         -step_xy 4 -step_z 2 -isolated 5 ^
         -odir Ayutthaya\tiles_denoised -olaz ^
         -cores 4

Especially for ground classification it is important that low noise points are excluded. You should inspect a number of tiles with lasview to make sure the low noise points are all pink now when you color them by classification.

lasview -i Ayutthaya\tiles_denoised\ayu*.laz -gui

While the algorithms in lasground are designed to withstand a few noise points below the ground, you will find that it will include them in the ground model if there are too many of them. Hence, it is important to tell lasground to ignore these noise points. For the other parameters used see the README file of lasground.

lasground -i Ayutthaya\tiles_denoised\ayu*.laz ^
          -ignore_class 7 ^
          -city -ultra_fine ^
          -compute_height ^
          -odir Ayutthaya\tiles_ground -olaz ^
          -cores 4

You should visually check the resulting ground classification of each tile with lasview by selecting smaller subsets (press ‘x’, draw a rectangle, press ‘x’ again, use arrow keys to walk) and then switch back and forth between a triangulation of the ground points (press ‘g’ and then press ‘t’) and a triangulation of last returns (press ‘l’ and then press ‘t’). See the README of lasview for more information on those hotkeys.

lasview -i Ayutthaya\tiles_ground\ayu*.laz -gui

This way I found at least one tile that should be reclassified with ‘-metro’ instead of ‘-city’ because it still contained one large building in the ground classification. Alternatively you can correct misclassifications manually using lasview as shown in the next few screen shots.

This is an optional step for advanced users who have a license. In case you managed to do such a manual edit and saved it as a LAY file using LASlayers (see the README file of lasview) you can overwrite the old file with:

laslayers -i Ayutthaya\tiles_ground\ayu_669500_1586500.laz -ilay ^
          -o Ayutthaya\tiles_ground\ayu_669500_1586500_edited.laz

move Ayutthaya\tiles_ground\ayu_669500_1586500_edited.laz ^
     Ayutthaya\tiles_ground\ayu_669500_1586500.laz

The next step classifies those points that are neither ground (2) nor noise (7) into building (or rather roof) points (class 6) and high vegetation points (class 5). For this we use lasclassify with the default parameters that only considers points that are at least 2 meters above the classified ground points (see the README for details on all available parameters).

lasclassify -i Ayutthaya\tiles_ground\ayu*.laz ^
            -ignore_class 7 ^
            -odir Ayutthaya\tiles_classified -olaz ^
            -cores 4

We check the classification of each tile with lasview by selecting smaller subsets (press ‘x’, draw a rectangle, press ‘x’ again) and by traversing with the arrow keys through the point cloud. You will find a number of misclassifications. Boats are classified as buildings, water towers or complex temple roofs as vegetation, … and so on. You could use lasview to manually correct any classifications that are really important.

lasview -i Ayutthaya\tiles_classified\ayu*.laz -gui

Before delivering the classified LiDAR tiles to a customer or another user it is imperative to remove the buffers that were carried through all computations to avoid artifacts along the tile boundary. This can also be done with lastile.

lastile -i Ayutthaya\tiles_classified\ayu*.laz ^
        -remove_buffer ^
        -odir Ayutthaya\tiles_final -olaz ^
        -cores 4

Together with the tiling you may want to deliver a tile overview file in KML format (or in SHP format) that you can easily generate with lasboundary using this command line:

lasboundary -i Ayutthaya\tiles_final\ayu*.laz ^
            -use_bb ^
            -overview -labels ^
            -o Ayutthaya\tiles_overview.kml

The small KML file generated by lasboundary provides a quick overview of where the tiles are located, their names, their bounding boxes, and the number of points they contain.

Derivative Production

The most common derivative product produced from LiDAR data is a Digital Terrain Model (DTM) in form of an elevation raster. This can be obtained by interpolating the ground points with a triangulation (i.e. a Delaunay TIN) and by sampling the TIN at the center of each raster cell. The pulse density of well over 4 shots per square meter definitely supports a resolution of 0.5 meter for the raster DTM. From the ground-classified tiles with buffer we compute the DTM using las2dem as follows:

las2dem -i Ayutthaya\tiles_ground\ayu*.laz ^
        -keep_class 2 ^
        -step 0.5 -use_tile_bb ^
        -odir Ayutthaya\tiles_dtm -obil ^
        -cores 4

It’s important to add ‘-use_tile_bb’ to the command line, which limits the raster generation to the original tile size of 500 by 500 meters in order not to rasterize the buffers that extend the tiles 30 meters in each direction. We use the BIL format so that we can inspect the resulting elevation rasters with lasview:

lasview -i Ayutthaya\tiles_dtm\ayu*.bil -gui

To create a hillshaded version of the DTM you can use your favorite raster processing package such as GDAL or GRASS but you could also use the BLAST extension of LAStools and create a large seamless image with blast2dem as follows:

blast2dem -i Ayutthaya\tiles_dtm\ayu*.bil -merged ^
          -step 0.5 -hillshade -epsg 32647 ^
          -o Ayutthaya\dtm_hillshade.png

Because blast2dem does not parse the PRJ files that accompany the BIL rasters we have to specify the EPSG code explicitly to also get a KML file that allows us to visualize the LiDAR in Google Earth.

A hillshading of the merged DTM rasters produced with blast2dem.

Next we generate a Digital Surface Model (DSM) that includes the highest objects that the laser has hit. We use the spike-free algorithm that is implemented in las2dem that creates a triangulation of the highest returns as follows:

las2dem -i Ayutthaya\tiles_denoised\ayu*.laz ^
        -drop_class 7 ^
        -step 0.5 -spike_free 1.2 -use_tile_bb ^
        -odir Ayutthaya\tiles_dsm -obil ^
        -cores 4

We used 1.2 as the freeze value for the spike-free algorithm because this is about three times the average last return spacing reported in the individual lasinfo reports generated during quality checking. Again we inspect the resulting rasters with lasview:

lasview -i Ayutthaya\tiles_dsm\ayu*.bil -gui

For comparison we also generate the DSM rasters using a simple first-return interpolation, again with las2dem as follows:

las2dem -i Ayutthaya\tiles_denoised\ayu*.laz ^
        -drop_class 7 -keep_first ^
        -step 0.5 -use_tile_bb ^
        -odir Ayutthaya\tiles_dsm -obil ^
        -cores 4

A few direct side-by-side comparisons between a spike-free DSM and a first-return DSM show the differences, which are especially noticeable along building edges and in large trees.

Another product that we can easily create are building footprints from the automatically classified roofs using lasboundary. Because the tool is quite scalable we can simply merge the final tiles on the fly. This also avoids including duplicate points from the tile buffers, whose classifications are often less accurate.

lasboundary -i Ayutthaya\tiles_final\ayu*.laz -merged ^
            -keep_class 6 ^
            -disjoint -concavity 1.1 ^
            -o Ayutthaya\buildings.shp

Similarly we can use lasboundary to create a vegetation layer from those points that were automatically classified as high vegetation.

lasboundary -i Ayutthaya\tiles_final\ayu*.laz -merged ^
             -keep_class 5 ^
             -disjoint -concavity 3 ^
             -o Ayutthaya\vegetation.shp

We can also produce 1.0 meter contour lines from the ground classified points. However, for nicer contours it is beneficial to first generate a subset of the ground points with lasthin using option ‘-contours 1.0’ as follows:

lasthin -i Ayutthaya\tiles_final\ayu*.laz ^
        -keep_class 2 ^
        -step 1.0 -contours 1.0 ^
        -odir Ayutthaya\tiles_temp -olaz ^
        -cores 4

We then merge all subsets of ground points from those temporary tiles on-the-fly into one (using the ‘-merged’ option) and let blast2iso from the BLAST extension of LAStools generate smoothed and simplified 1 meter contours as follows:

blast2iso -i Ayutthaya\tiles_temp\ayu*.laz -merged ^
          -iso_every 1.0 ^
          -smooth 2 -simplify_length 0.5 -simplify_area 0.5 -clean 5.0 ^
          -o Ayutthaya\contours_1m.shp

Finally we composite all of our derived LiDAR products into one map using QGIS and then fuse it with data from OpenStreetMap that we’ve downloaded from BBBike. Here you can download the OSM data that we use.

It is particularly interesting to compare the building footprints that were automatically derived from our LiDAR processing pipeline with those mapped by OpenStreetMap volunteers. We immediately see that there is a lot of volunteering work left to do and that the LiDAR-derived data can assist in directing those mapping efforts. A closer look reveals the (expected) quality difference between hand-mapped and auto-generated data.

The OSM buildings are simpler. These polygons are drawn and divided into logical units by a human. They are individually verified and correspond to actual buildings. However, they are less aligned with the Google Earth satellite image. The LiDAR-derived building footprints are complex because lasboundary has no logic to simplify the complicated polygonal chains that enclose the points that were automatically classified as roof into rectilinear shapes or to break directly adjacent roof points into separate logical units. However, most buildings are found (but also objects that are not buildings) and their geospatial alignment is as good as that of the LiDAR data.