Removing Noise from Single Photon LiDAR to Generate a Smooth DTM

A while back we had a first look at the Single Photon LiDAR from Leica’s SPL100 sensor (that eventually turned out just to be an SPL99 because one beamlet or one receiver in the 10 by 10 array was broken and did not produce any returns). Today we are taking a closer look at a strategy to remove the excessive noise in the raw Single Photon LiDAR data from a “proper” SPL100 sensor (where all of the 100 beamlets are firing) that was flown in 2017 in Navarra, Spain.


Profile through original points on top of generated DTM.

The data was provided as open data by the cartography section of Navarra’s Government and is available via a simple download FTP portal. We describe the LAStools processing steps that were used to eliminate the excessive noise and to generate a smooth DTM. In the following we are using the originally released version of the data, which we obtained shortly after the portal went online and which seems to be a bit more “raw” than the files currently available. One standard quality check with lasinfo was done with:

lasinfo -i 0_raw\*.laz ^
        -cd ^
        -histo intensity 1 ^
        -histo user_data 1 ^
        -histo point_source 1 ^
        -histo gps_time 10 ^
        -odir 1_quality -odix _info -otxt

Upon inspecting the lasinfo report we suggest a few changes in how to store this Single Photon LiDAR data for more efficient hosting via an online portal. We perform these changes here before starting the actual processing. First we use the las2las call shown below to fix an error in the global encoding bits, remove an irrelevant VLR, re-scale the coordinates from millimeters to centimeters, re-offset the coordinates to nice numbers, and – what is by far the most crucial change for better compression – remap the beamlet ID stored in the ‘user data’ field as described in an earlier article.

las2las -i 0_raw\*.laz ^
        -rescale 0.01 0.01 0.01 ^
        -auto_reoffset ^
        -set_global_encoding_gps_bit 1 ^
        -remove_vlr 1 ^
        -map_user_data beamlet_ID_map.txt ^
        -odir 2_fix_rescale_reoffset_remap -olaz ^
        -cores 3

Then we use two lassort calls, one to maximize compression and one to improve spatial coherence. One lassort call rearranges the points in increasing order first based on the GPS time stamps, then breaks ties based on the user data field (that stores the beamlet ID), and finally stores the returns of every beamlet ordered by return number. We also add spatial reference information in this step. The other lassort call rearranges the points into a spatially coherent layout. It uses a Z-order sort with the granularity of 50 meter by 50 meter buckets of points. Within each bucket the point order from the prior sort is kept.

lassort -i 2_fix_rescale_reoffset_remap\*.laz ^
        -epsg 25830 ^
        -gps_time ^
        -user_data ^
        -return_number ^
        -odir 2_maximum_compression -olaz ^
        -cores 3

lassort -i 2_maximum_compression\*.laz ^
        -bucket_size 50 ^
        -odir 2_spatial_coherence -olaz ^
        -cores 3
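
The Z-order part of the second lassort call can be pictured as follows: each point falls into a 50 meter by 50 meter bucket and the buckets are visited along a Z-order (Morton) curve, which keeps nearby buckets close together in the file. Below is a minimal Python sketch of that idea – purely illustrative and not the actual lassort implementation; all names and coordinates are made up.

# conceptual sketch of a Z-order (Morton) bucket sort, similar in spirit
# to 'lassort -bucket_size 50' (not the LAStools implementation)

def interleave_bits(a, b, bits=16):
    # build the Morton code by interleaving the bits of the two bucket indices
    code = 0
    for i in range(bits):
        code |= ((a >> i) & 1) << (2 * i)
        code |= ((b >> i) & 1) << (2 * i + 1)
    return code

def morton_key(x, y, min_x, min_y, bucket_size=50.0):
    # map a point to its 50 m by 50 m bucket and return that bucket's Morton code
    col = int((x - min_x) // bucket_size)
    row = int((y - min_y) // bucket_size)
    return interleave_bits(col, row)

# points as (x, y) tuples; Python's sort is stable, so the prior
# (GPS time based) order is kept within each bucket
points = [(612345.1, 4741002.3), (612401.7, 4741080.9), (612350.2, 4741003.8)]
min_x = min(p[0] for p in points)
min_y = min(p[1] for p in points)
points.sort(key=lambda p: morton_key(p[0], p[1], min_x, min_y))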

The nine resulting optimized tiles are around 200 MB each and can be downloaded as one file here or as individual tiles here.

Now we start the usual processing workflow by tiling the data with lastile into smaller 500 meter by 500 meter tiles with a 25 meter buffer. We also set the pre-existing point classification in the data to zero as we will compute our own later.

lastile -i 2_spatial_coherence\*.laz ^
        -set_classification 0 ^
        -tile_size 500 -buffer 25 -flag_as_withheld ^
        -odir 3_buffered -o yecora.laz

We notice that a large amount of the noise has intensity values below 1000. We are still a bit puzzled where those intensity values come from and what exactly they mean in a Single Photon LiDAR system. But it works. We run las2las with a “filtered transform” to set the classification of all points whose intensity value is below 1000 to classification code 7 (aka “noise”).

las2las -i 3_buffered\*.laz ^
        -keep_intensity_below 1000 ^
        -filtered_transform ^
        -set_classification 7 ^
        -odir 4_intensity_denoised -olaz ^
        -cores 3

We then ignore this “easy-to-identify” noise and go after the remaining one with lasnoise by ignoring classification code 7 and setting the newly identified noise to classification code 9 – not because it’s “water” (the usual meaning of class 9) but because these points are drawn with a distinct blue color when checking the result with lasview.

lasnoise -i 4_intensity_denoised\*.laz ^
         -ignore_class 7 ^
         -step_xy 1.0 -step_z 0.2 ^
         -isolated 5 ^
         -classify_as 9 ^
         -odir 4_isolation_denoised -olaz ^
         -cores 3

Of the surviving non-noise points we then use lasthin to reclassify the point closest to the 20th elevation percentile per 50 cm by 50 cm area with classification code 8 (for all areas that have more than 5 non-noise points per 50 cm by 50 cm area). We repeat the same for every 1 meter by 1 meter area.

lasthin -i 4_isolation_denoised\*.laz ^
        -ignore_class 7 9 ^
        -step 0.5 -percentile 20 5 ^
        -classify_as 8 ^
        -odir 5_thinned_p20_050cm -olaz ^
        -cores 3

lasthin -i 5_thinned_p20_050cm\*.laz ^
        -ignore_class 7 9 ^
        -step 1.0 -percentile 20 5 ^
        -classify_as 8 ^
        -odir 5_thinned_p20_100cm -olaz ^
        -cores 3

We then perform a more aggressive second noise removal step with lasnoise using only those points with classification code 8, namely those non-noise points that were closest to the 20th elevation percentile in either a 50 cm by 50 cm cell or a 1 meter by 1 meter cell. This can be done by ignoring classification codes 0, 7, and 9. We mark those noise points as 6 so they appear orange in the point cloud with lasview.

lasnoise -i 5_thinned_p20_100cm\*.laz ^
         -ignore_class 0 7 9 ^
         -step_xy 2.0 -step_z 0.2 ^
         -isolated 1 ^
         -classify_as 6 ^
         -odir 5_thinned_p20_100cm_denoised -olaz ^
         -cores 3

The 20th elevation percentile points that survive the last noise removal are then classified into ground (2) and non-ground (1) points with lasground_new by ignoring all other points, namely those with classification codes 0, 6, 7, and 9.

lasground_new -i 5_thinned_p20_100cm_denoised\*.laz ^
              -ignore_class 0 6 7 9 ^
              -town ^
              -odir 5_tiles_ground_050cm -olaz ^
              -cores 3

The images below illustrate the steps we took. They also show that not all of the data was used and might give you ideas where to tweak our workflow for even better results.

Next we rasterize the ground points into 1 meter Digital Terrain Model (DTM) rasters with las2dem and store the result (without buffers) in the RasterLAZ format.

las2dem -i 5_tiles_ground_050cm\*.laz ^
        -keep_class 2 ^
        -step 1.0 ^
        -use_tile_bb ^
        -odir 6_tiles_dtm_100cm -olaz ^
        -cores 3

Finally we merge all RasterLAZ tiles into one and compute the final hillshaded DTM with blast2dem.

blast2dem -i 6_tiles_dtm_100cm\*.laz -merged ^
          -step 1.0 ^
          -hillshade ^
          -o yecora_dtm_100cm.png

The hillshaded DTM that is the result of the entire sequence of processing steps described above is shown below.

DTM from ground classification created with LAStools

For comparison we generate the same DTM using the originally provided classification. According to the README file the original ground points are classified with code 22 in areas of flight line overlap and with the usual code 2 elsewhere. Hence we must use both classification codes to construct the DTM. We do this analogously to the earlier processing steps with the three LAStools commands lastile, las2dem, and blast2dem below.

lastile -i 2_spatial_coherence\*.laz ^
        -tile_size 500 -buffer 25 -flag_as_withheld ^
        -odir 3_tiles_buffered_orig -o yecora.laz

las2dem -i 3_tiles_buffered_orig\*.laz ^
        -keep_class 2 22 ^
        -step 1.0 ^
        -use_tile_bb ^
        -odir 6_tiles_dtm_100cm_orig -olaz ^
        -cores 3

blast2dem -i 6_tiles_dtm_100cm_orig\*.laz -merged ^
          -step 1.0 ^
          -hillshade ^
          -o yecora_dtm_100cm_orig.png

Below is the hillshaded DTM generated from the ground classification that was provided with the LiDAR when it was originally released as open data.

DTM from ground classification of originally released data.

In the meantime Navarra’s SPL data have been updated with a newer version in the open data portal. The new version of the data contains a much better ground classification that might have been improved manually, as the new files now have the string ‘cam’ instead of ‘ca’ in the file name, which probably means ‘classified automatically and manually’ instead of the original ‘classified automatically’. We decided not to switch to the new data release as it seemed less “raw” than the original release. For example, there are suddenly points with GPS times and return counts and return numbers of zero in the file that seem synthetic. But we also computed the hillshaded DTM for the new release, which is shown below.

DTM from ground classification of newly released data.

We thank the cartography section of Navarra’s Government for providing their LiDAR as open data. This not only allows re-purposing expensive data paid for by public taxes but also generates additional value, encourages citizen science, and provides educational opportunity and insights such as this blog article.

Another German State Goes Open LiDAR: Saxony

Finally some really good news out of Saxony. 😊 After North Rhine-Westphalia and Thuringia released the first significant amounts of open geospatial data in Germany in a one-two punch in January 2017, we now have a third German state opening their entire tax-payer-funded geospatial data holdings to the tax-paying public via a simple and very easy-to-use online download portal. Welcome to the open data party, Saxony!!!

Currently available via the online portal are the LiDAR-derived raster Digital Terrain Model (DTM) at 1 meter resolution (DGM 1m) for everything flown since 2015 and at 2 meter resolution (DGM 2m) or 20 meter resolution (DGM 20m) for the entire state. The horizontal coordinates use UTM zone 33 with ETRS89 (aka EPSG code 25833) and the vertical coordinate uses the “Deutsches Haupthöhennetz 2016” or “DHHN2016” (aka EPSG code 7837). Also available are orthophotos at 20 cm (!!!) resolution (DOP 20cm).


Overview of current LiDAR holdings. Areas flown 2015 or later have LAS files and 1 meter rasters. Others have LiDAR as ASCII files and lower resolution rasters.

Offline – by ordering through either this online form or that online form – you can also get the 5 meter DTM and the 10 meter DTM, the raw LiDAR point clouds, LiDAR intensity rasters, hill-shaded DTM rasters, as well as the 1 meter and the 2 meter Digital Surface Model (DSM) for a small administrative fee that ranges between 25 EUR and 500 EUR depending on the effort involved.

Our immediate thought is to get a copy of the entire raw LiDAR point clouds (available as LAS 1.2 files for all data acquired since 2015 and as ASCII text for earlier acquisitions) and find some portal willing to host this data online. We are already in contact with the land survey of Saxony to discuss this option and/or alternate plans.

Let’s have a look at the data. First we download four 2 km by 2 km tiles of the 1 meter DTM raster for an area surrounding the so-called “Greifensteine” using the interactive map of the download portal. The tiles are provided as simple XYZ text. Here is a look at the contents of one of these tiles:

more Greifensteine\333525612_dgm1.xyz
352000 5613999 636.26
352001 5613999 636.27
352002 5613999 636.28
352003 5613999 636.27
352004 5613999 636.24
[...]

Note that the elevations are not sampled in the center of every 1 meter by 1 meter cell but exactly on the full meter coordinate pair, which seems especially common in German-speaking countries. Using txt2las we convert these XYZ rasters to LAZ format and add geo-referencing information for more efficient subsequent processing.

txt2las -i greifensteine\333*_dgm1.xyz ^
        -set_scale 1 1 0.01 ^
        -epsg 25833 ^
        -olaz

Below you see that going from XYZ to LAZ reduces the amount of data from 366 MB to 10.4 MB, meaning that the data on disk becomes over 35 times smaller. The ability of LASzip to compress elevation rasters was first noted during the search for missing airliner MH370 and resulted in our new LAZ-based compressor for height grids called DEMzip. The resulting LAZ files now also include geo-referencing information.

96,000,000 333525610_dgm1.xyz
96,000,000 333525612_dgm1.xyz
96,000,000 333545610_dgm1.xyz
96,000,000 333545612_dgm1.xyz
384,000,000 bytes

2,684,820 333525610_dgm1.laz
2,590,516 333525612_dgm1.laz
2,853,851 333545610_dgm1.laz
2,795,430 333545612_dgm1.laz
10,924,617 bytes
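
A quick back-of-the-envelope check of the compression ratio from the byte counts above – a tiny Python snippet just to verify the “over 35 times smaller” claim:

xyz_bytes = 384000000    # the four XYZ rasters
laz_bytes = 10924617     # the same four rasters stored as LAZ
print(xyz_bytes / laz_bytes)   # prints roughly 35.2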

Using blast2dem we then create a hill-shaded version of the 1 meter DTM in order to overlay a visual representation of the DTM onto Google Earth.

blast2dem -i greifensteine\333*_dgm1.laz ^
          -merged ^
          -step 1 ^
          -hillshade ^
          -o greifensteine.png

Below is the result, which nicely shows how the penetrating laser of the LiDAR allows us to strip away the forest to see interesting geological features in the bare-earth terrain.

In a second exercise we use the available RGB orthophoto images to color one of the DTM tiles and explore it using lasview. For this we download the image for the top left of the four tiles that covers the area containing the “Greifensteine” from the interactive download portal for orthophotos. As the resolution of the TIF image is 20 cm and that of the DTM is only 1 meter, we first down-sample the TIF using gdalwarp of GDAL.

gdalwarp -tr 1 1 ^
         -r cubic ^
         greifensteine\dop20c_33352_5612.tif ^
         greifensteine\dop1m_33352_5612.tif

If you are not yet using GDAL today is a good day to start. It nicely complements the point cloud processing functionality of LAStools for raster inputs. Next we use lascolor to give each elevation pixel of the DTM stored in LAZ format its corresponding color from the orthophoto.

lascolor -i greifensteine\333525612_dgm1.laz ^
         -image greifensteine\dop1m_33352_5612.tif ^
         -odix _rgb -olaz

Now we can view the colored DTM in LAZ format interactively with lasview or any other LiDAR viewing software and turn on the RGB colors from the orthophoto as needed to understand the scene.

lasview -i greifensteine\333525612_dgm1_rgb.laz

We thank the “Staatsbetrieb Geobasisinformation und Vermessung Sachsen (GeoSN)” for giving us easy access to the 1 meter DTM and the 20 cm orthophoto that we have used in this article through their new open geodata portal as open data under the user-friendly license “Datenlizenz Deutschland – Namensnennung – Version 2.0”.

Smooth DTM from Drone LiDAR off Velodyne HDL 32A mounted on DJI M600 UAV

Recently we attempted to do a small LiDAR survey by drone for a pet project of our CEO in our “code and surf camp” here in Samara, Costa Rica. But surveying is difficult when you are a novice and we ran into a trajectory issue. The dramatic “wobbles” were entirely our fault, but fortunately our mistakes also led to something useful: We found some LAS export bugs. Our laser scanner was a Velodyne HDL-32E integrated with a NovAtel INS into the Snoopy Series A HD made by LiDARUSA. The system was carried by a DJI Matrice 600 (M600) drone. We processed the trajectory with NovAtel Inertial Explorer (here we made the “wobbles” error) and finally exported the LAS and LAZ files with ScanLook PC (version 1.0.182) from LiDARUSA.

While we were investigating our “wobbles” (which clearly were our mistake) we also found five different LAS export bugs in ScanLook PC that seem to have started sometime after version 1.0.171 and will likely end with version 1.0.193. Below is an illustration of a correct export from version 1.0.129 and a buggy export from version 1.0.182. In both instances you see the returns from one revolution of the Velodyne HDL-32E scanner head ordered by their GPS time stamps and colored to distinguish the 32 separate beams. In the buggy version, groups of around seven non-adjacent returns are given the same time stamp. This bug will only affect you if correct GPS time stamps are important for your subsequent LiDAR processing or if your client explicitly asked for ASPRS specification compliant LAS files. We plan to publish another blog post detailing how to find this GPS time stamping bug (and the other four bugs we found).

During the many interactions we had working through “wobbles” and export bugs, we obtained a nice set of six flight lines from Seth Gulich of Bowman Consulting – a US American company based in Stuart, Florida – who flew an identical “Snoopy Series A HD” system also on a DJI Matrice 600 drone at approximately 100 feet above ground level above a model airplane airport in Palm Beach, Florida. You can download the data set here. In the following we will check the flight line alignment of this data set and then process it into a smooth DTM. All command lines used are summarized in this text file.

First we generate a lasinfo report that includes a number of histograms for the on-the-fly merged flight lines and then use the z coordinate histogram from that report to set reasonable min/max values for the elevation color ramp of lasview:

lasinfo -i 0_strips_raw\Velodyne*.laz -merged ^
        -cd ^
        -histo z 1 ^
        -histo user_data 1 ^
        -histo point_source 1 ^
        -o 1_quality\Velodyne_merged_info.txt

lasview -i 0_strips_raw\Velodyne*.laz ^
        -points 10000000 ^
        -set_min_max 25 75

The lasinfo report shows no information about the coordinate reference system. We found out experimentally that the horizontal coordinates seem to be EPSG code 2236 and that the vertical units are most likely US survey feet. The warnings you will see in the lasinfo report have to do with the fact that the double-precision bounding box stored in the LAS header was populated with numbers that have many more decimal digits than the coordinates in the file, which only have millifoot resolution as all three scale factors are 0.001 (meaning coordinates have three decimal digits). The information about which of the 32 lasers collected which point is stored in both the ‘user data’ and the ‘point source ID’ field, which is evident from the histograms in the lasinfo report. We need to be careful not to override both fields in later processing.
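
As a quick reminder of why the scale factors matter: LAS stores each coordinate as a 32-bit integer in the point record and reconstructs the actual coordinate from the scale factor and offset in the header. A minimal Python sketch of that reconstruction – the offset and integer value below are made up for illustration:

# LAS point records store coordinates as scaled integers; the actual
# coordinate is rebuilt from the header's scale factor and offset
x_scale  = 0.001       # three decimal digits, i.e. millifoot resolution here
x_offset = 900000.0    # hypothetical offset from the LAS header
x_record = 39500123    # hypothetical integer stored in the point record

x = x_record * x_scale + x_offset
print(x)   # 939500.123 -> the resolution is exactly one x_scale step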

Next we use lasoverlap to check how well the LiDAR points from the flight out and the flight back align vertically. This tool computes the difference of the lowest points for each square foot covered by multiple flight lines. In both runs below, differences of less than a quarter of a foot are mapped to white; differences of more than one foot (in the first run) or more than half a foot (in the second run) are mapped to saturated red or blue, depending on whether the difference is positive or negative:

lasoverlap -i 0_strips_raw\Velodyne*.laz ^
           -faf ^
           -min_diff 0.25 -max_diff 1.00 -step 1 ^
           -odir 1_quality -o overlap_025_100.png

lasoverlap -i 0_strips_raw\Velodyne*.laz ^
           -faf ^
           -min_diff 0.25 -max_diff 0.50 -step 1 ^
           -odir 1_quality -o overlap_025_050.png

We use a new feature of the LAStools GUI (as of version 180429) to more closely inspect large red or blue areas. With lasmerge we clip out regions that look suspect for closer examination with lasview. First we spatially index the flight lines to make this process faster. With the ‘-gui’ switch we start the tool in GUI mode with the flight lines already loaded. Using the new PNG overlay roll-out on the left we add the ‘overlap_025_050_diff.png’ image from the quality folder created in the last step and clip out three areas.

lasindex -i 0_strips_raw\Velodyne*.laz ^
         -tile_size 10 -maximum -100 ^
         -cores 3

lasmerge -i 0_strips_raw\Velodyne*.laz -gui

You can also clip out these three areas using the command lines below:

lasmerge -i 0_strips_raw\Velodyne*.laz ^
         -faf ^
         -inside_tile 939500 889860 100 ^
         -o 1_quality\939500_889860.laz

lasmerge -i 0_strips_raw\Velodyne*.laz ^
         -faf ^
         -inside_tile 940400 889620 100 ^
         -o 1_quality\940400_889620.laz

lasmerge -i 0_strips_raw\Velodyne*.laz ^
         -faf ^
         -inside_tile 940500 890180 100 ^
         -o 1_quality\940500_890180.laz

The reader may inspect the areas 939500_889860.laz, 940400_889620.laz, and 940500_890180.laz with lasview using profile views via hot key ‘x’ and switching back and forth between the points from different flight lines via hot keys ‘0’, ‘1’, ‘2’, ‘3’, … for individual and ‘a’ for all flight lines, as we have done in previous tutorials [1,2,3]. Using drop-lines or rise-lines via the pop-up menu gives you a sense of scale. Removing points that are horizontally too far from the trajectory with lastrack could be one strategy to eliminate some of these outliers. But as our surfaces are expected to be “fluffy” (because we have a Velodyne LiDAR system), we accept these flight line differences and continue processing.

Here is the complete LAStools processing pipeline for creating an average ground model from the set of six flight lines that results in the hillshaded DTM shown below. The workflow is similar to those we have developed in earlier blog posts for Velodyne Puck based systems like the Hovermap and the Yellowscan and in the other Snoopy tutorial. All command lines used are summarized in this text file.

Hillshaded DTM with half foot resolution generated via average ground computation with LAStools.

In the first step we lastile the six flight lines into 250 feet by 250 feet tiles with a 25 foot buffer while preserving flight line information. The flight line information will be stored in the “point source ID” field of each point and will therefore override the beam ID that is currently stored there. But the beam ID is also stored in the “user data” field, as the lasinfo report told us. We set all classifications to zero and add information about the horizontal coordinate reference system EPSG code 2236 and the vertical units (US survey feet).

lastile -i 0_strips_raw\*.laz ^
        -faf ^
        -set_classification 0 ^
        -epsg 2236 -elevation_survey_feet ^
        -tile_size 250 -buffer 25 -flag_as_withheld ^
        -odir 2_tiles_raw -o pb.laz

On three cores in parallel we then lassort the points in the tiles into a space-filling curve order which will accelerate later operations.

lassort -i 2_tiles_raw\*.laz ^
        -odir 2_tiles_sorted -olaz ^
        -cores 3

Next we use lasthin to classify the point whose elevation is closest to the 5th elevation percentile among all points falling into its cell with classification code 8. We run lasthin multiple times and each time increase the cell size from 1, 2, 4, 8 to 16 feet. We do this because we have requested the 5th elevation percentile to only be computed when there are at least 20 points in the cell. Percentiles are statistical measures and need a reasonable sample size to be stable. Because drone flights are very dense in the center and more sparse at the edges, this increase in cell size ensures that we have a good selection of points classified with classification code 8 across the entire survey area.

lasthin -i 2_tiles_sorted\*.laz ^
        -step 1 -percentile 5 20 -classify_as 8 ^
        -odir 3_tiles_thinned_p05_step01 -olaz ^
        -cores 3

lasthin -i 3_tiles_thinned_p05_step01\*.laz ^
        -step 2 -percentile 5 20 -classify_as 8 ^
        -odir 3_tiles_thinned_p05_step02 -olaz ^
        -cores 3

lasthin -i 3_tiles_thinned_p05_step02\*.laz ^
        -step 4 -percentile 5 20 -classify_as 8 ^
        -odir 3_tiles_thinned_p05_step04 -olaz ^
        -cores 3

lasthin -i 3_tiles_thinned_p05_step04\*.laz ^
        -step 8 -percentile 5 20 -classify_as 8 ^
        -odir 3_tiles_thinned_p05_step08 -olaz ^
        -cores 3

lasthin -i 3_tiles_thinned_p05_step08\*.laz ^
        -step 16 -percentile 5 20 -classify_as 8 ^
        -odir 3_tiles_thinned_p05_step16 -olaz ^
        -cores 3

Then we let lasground_new run on only the points classified with classification code 8 (i.e. by ignoring the points still classified with code 0) which classifies them into ground (code 2) and non-ground (code 1).

lasground_new -i 3_tiles_thinned_p05_step16\*.laz ^
              -ignore_class 0 ^
              -town ^
              -odir 4_tiles_ground_low -olaz ^
              -cores 3

The ground points we have computed form somewhat of a lower envelope of the “fluffy” points of a Velodyne scanner. With lasheight we now draw all the points near the ground – namely those from 0.1 foot below to 0.4 foot above the ground – into a new classification code 6 that we term “thick ground”. The ‘-do_not_store_in_user_data’ switch prevents the default behavior of lasheight, which would override the beam ID information that is stored in the ‘user data’ field with an approximate height value.

lasheight -i 4_tiles_ground_low\*.laz ^
          -classify_between -0.1 0.4 6 ^
          -do_not_store_in_user_data ^
          -odir 4_tiles_ground_thick -olaz ^
          -cores 3

A few close-up shots of the resulting “thick ground” are shown in the picture gallery below.

We then use lasgrid to average the (orange) thick ground points onto a regular grid with a cell spacing of half a foot. By adding the ‘-use_tile_bb’ switch we do not grid the tile buffers.

lasgrid -i 4_tiles_ground_thick\*.laz ^
        -keep_class 6 ^
        -step 0.5 -average ^
        -use_tile_bb ^
        -odir 5_tiles_gridded_mean_ground -olaz ^
        -cores 3

Finally we use blast2dem to merge all the averaged ground point grids into one file, interpolate across open areas without ground points, and compute the hillshaded DTM shown above. All command lines used are summarized in this text file.

blast2dem -i 5_tiles_gridded_mean_ground\*.laz ^
          -merged ^
          -step 0.5 ^
          -hillshade ^
          -o dtm.png

We thank Seth Gulich of Bowman Consulting for sharing this LiDAR data set with us. It was flown with a DJI Matrice 600 drone carrying a “Snoopy A series HD” LiDAR system from LidarUSA.

Using Open LiDAR to Remove Low Noise from Photogrammetric UAV Point Clouds

We collected drone imagery of the restored “Kance” tavern during the lunch stop of the UAV Tartu summer school field trip (actually the organizer Marko Kohv did that, as I was busy introducing students to SUP boarding). With Agisoft PhotoScan we then processed the images into point clouds below the deck of the historical “Jõmmu” barge on the way home (actually Marko did that, because I was busy enjoying the view of the wetlands in the afternoon sun). The resulting data set with 7,855,699 points is shown below and can be downloaded here.

7,855,699 points produced with Agisoft Photoscan

Generating points using photogrammetric techniques in scenes containing water bodies tends to be problematic, as dense blotches of noise points above and below the water surface are common, as you can see in the picture below. Especially the low points are troublesome as they adversely affect ground classification, which results in poor Digital Terrain Models (DTMs).

Clusters of low noise points nearly 2 meters below the actual surface in water areas.

In a previous article we have described a LAStools workflow that can remove excessive low noise. In this article we use external information about the topography of the area to clean our photogrammetry points. How convenient that the Estonian Land Board has just released their entire LiDAR archives as open data.

Following these instructions here you can download the available open LiDAR for this area, which has the map sheet index 475681. Alternatively you can download the four currently available data sets here flown in spring 2010, in summer 2013, in spring 2014, and in summer 2017. In the following we will use the one flown in spring 2014.

We can view both data sets simultaneously in lasview. By adding ‘-faf’ to the command-line we can switch back and forth between the two data sets by pressing ‘0’ and ‘1’.

lasview -i Kantsi.laz ^
        -i 475681_2014_tava.laz ^
        -points 10000000 ^
        -faf

We cut the 1 km by 1 km LiDAR tile down to a 250 m by 250 m tile that nicely surrounds our photogrammetric point set using the following las2las command line:

las2las -i 475681_2014_tava.laz ^
        -inside_tile 681485 6475375 250 ^
        -o LiDAR_Kantsi.laz

lasview -i Kantsi.laz ^
        -i LiDAR_Kantsi.laz ^
        -points 10000000 ^
        -faf

Scrutinizing the two data sets we quickly find that there is a misalignment between the dense imagery-derived and the comparatively sparse LiDAR point clouds. With lasview we investigate the difference between the two point clouds by hovering over a point from one point cloud and pressing <i> and then hovering over a somewhat corresponding point from the other point cloud and pressing <SHIFT>+<i>. We measure displacements of around 2 meters vertically and of around 3 to 3.5 meters in total.
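
The total offset follows directly from such per-axis measurements. A tiny Python check using the displacement vector that we end up applying with las2las further below (0.89, -1.90, 2.51):

from math import sqrt

# per-axis offsets of the first displacement vector applied below
dx, dy, dz = 0.89, -1.90, 2.51
print(sqrt(dx*dx + dy*dy + dz*dz))   # about 3.27 meters of total displacement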

Before we can use the LiDAR points to remove the low noise from the photogrammetric points we must align them properly. For simple translation errors this can be done with a new feature that was recently added to lasview. Make sure to download the latest version (190404 or newer) of LAStools to follow the steps shown in the image sequence below.

las2las -i Kantsi.laz ^
        -translate_xyz 0.89 -1.90 2.51 ^
        -o Kantsi_shifted.laz

lasview -i Kantsi_shifted.laz ^
        -i LiDAR_Kantsi.laz ^
        -points 10000000 ^
        -faf

The result looks good in the sense that both sides of the photogrammetric roof are reasonably well aligned with the LiDAR. But there is still a shift along the roof so we repeat the same thing once more as shown in the next image sequence:

We use a suitable displacement vector and apply it to the photogrammetry points, shifting them again:

las2las -i Kantsi_shifted.laz ^
        -translate_xyz -1.98 -0.95 0.01 ^
        -o Kantsi_shifted_again.laz

lasview -i Kantsi_shifted_again.laz ^
        -i LiDAR_Kantsi.laz ^
        -points 10000000 ^
        -faf

The result is still not perfect as there is also some rotational error, and you may find other software such as CloudCompare better suited to align the two point clouds, but for this exercise the alignment shall suffice. Below you see the match between the photogrammetry points and the LiDAR TIN before and after shifting the photogrammetry points with the two (interactively determined) displacement vectors.

The final steps of this exercise use las2dem and the already ground-classified LiDAR to compute a 1 meter DTM, which we then use as input to lasheight. We classify the photogrammetry points using their height above this set of ground points with 1 meter spacing: points that are 40 centimeters or more below the LiDAR DTM are classified as noise (7), and points that are between 40 centimeters below and 1 meter above the LiDAR DTM are classified into a temporary class (here we choose 8) that holds those points that could potentially be ground points. This will help, for example, with subsequent ground classification, as large parts of the photogrammetry points – namely those on top of buildings and in higher vegetation – can be ignored from the start by a ground classification algorithm such as lasground.

las2dem -i LiDAR_Kantsi.laz ^
        -keep_class 2 ^
        -kill 1000 ^
        -o LiDAR_Kantsi_dtm_1m.bil

lasheight -i Kantsi_shifted_again.laz ^
          -ground_points LiDAR_Kantsi_dtm_1m.bil ^
          -classify_below -0.4 7 ^
          -classify_between -0.4 1.0 8 ^
          -o Kantsi_cleaned.laz

Below are the results we have achieved after “roughly” aligning the two point clouds with some new lasview tricks and then using the LiDAR elevations to classify the photogrammetry points into “low noise”, “potential ground”, and “all else”.

We thank the Estonian Land Board for providing open data with a permissive license. Special thanks also go to the organizers of the UAV Summer School in Tartu, Estonia and the European Regional Development Fund for funding this event. Especially fun was the fabulous excursion to the Emajõe-Suursoo Nature Reserve and through to Lake Peipus aboard, overboard and aboveboard the historical barge “Jõmmu”. If you look carefully you can also find the barge in the photogrammetry point cloud. The photogrammetry data used here was acquired during our lunch stop.

Fun aboard and overboard the historical barge “Jõmmu”.

No Sugarcoating: Sweet LiDAR from RiCOPTER carrying VUX-1UAV over Sugarcane

Recently we saw an interesting LiDAR data set talked about on social media by Chad Netto from Chustz Surveying in New Roads, Louisiana and asked for a copy. It is a LiDAR scan of a sugarcane plantation in Assumption Parish, Louisiana carried out with the VUX-1UAV by RIEGL mounted onto a RiCOPTER and guided by an Applanix IMU and a Trimble base station. That is probably one of the sweetest (but also one of the most expensive) UAV LiDAR systems you can buy today. I received this LiDAR file and this trajectory file. In the following we take a detailed look at this data set.

First we run lasinfo to get an idea of the contents of the data set. We create various histograms as those can often help understand an unfamiliar data set:

lasinfo -i sugarcane\181026_163424.laz ^
        -cd ^
        -histo gps_time 5 ^
        -histo intensity 64 ^
        -histo point_source 1 ^
        -histo z 5 ^
        -odix _info -otxt

You can download the resulting report here. For the 84,751,955 points we notice that

  1. both horizontal and vertical coordinates are stored in US survey feet
  2. with scale factors of 0.00025 this means a resolution of 76 micrometers (see the quick conversion below)
  3. there is no explicit flight line information (all point source IDs are zero)
  4. gaps in the GPS time stamp histogram suggest multiple flight lines
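
The quick conversion behind item 2, with one US survey foot being exactly 1200/3937 meters (a small Python check):

us_survey_foot = 1200.0 / 3937.0          # meters per US survey foot
resolution = 0.00025 * us_survey_foot     # the scale factor in US survey feet
print(resolution * 1e6)                   # about 76.2 micrometers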

First we use las2las to lower the insanely high resolution from 0.00025 US survey feet to something more reasonable for an airborne UAV scan, namely to 0.01 or one hundredth of a US survey foot, i.e. “centi-US-survey-feet”:

las2las -i sugarcane\181026_163424.laz ^
        -rescale 0.01 0.01 0.01 ^
        -odix _cft -olaz

I have already done this for you. The file that is online is already in “centi-US-survey-feet” because doing so reduced the file size from the original 678 MB file that we got from Netto to the 518 MB file that is online, meaning you had 160 MB less data to download.

Next we use lassplit to recover the original flight lines as follows:

lassplit -i sugarcane\181026_163424.laz ^
         -recover_flightlines ^
         -odir sugarcane\0_recovered_strips ^
         -o assumption.laz

This results in 5 strips. We then use lassort to bring the strips back into their original acquisition order by sorting first based on the GPS time stamp (which brings all returns of one pulse together) and second on the return number (which sorts them in ascending order). We do this on 3 cores in parallel with this command:

lassort -i sugarcane\0_recovered_strips\*.laz ^
        -gps_time ^
        -return_number ^
        -odir sugarcane\1_sorted_strips -olaz ^
        -cores 3

We also create a spatial index for each of these strips using lasindex so that any area-of-interest query that we do later will be significantly accelerated. See the README file for the meaning of the parameters:

lasindex -i sugarcane\1_sorted_strips\*.laz ^
         -tile_size 10 -maximum -100 ^
         -cores 3

Then we check for flight line alignment using lasoverlap by comparing – per 2 feet by 2 feet area – the lowest elevation value of points from different strips wherever there is overlap. Cells with an absolute vertical difference of less than a quarter of a foot are mapped to white. Cells with vertical differences of more (or less) than a quarter foot are mapped to an increasingly red (or blue) color that is saturated red (or blue) when one full foot is reached.

lasoverlap -i sugarcane\1_sorted_strips\*.laz ^
           -files_are_flightlines ^
           -step 2.0 ^
           -min_diff 0.25 -max_diff 1.0 ^
           -o sugarcane\2_quality\overlap.png

The resulting image looks dramatic at first glance. But we have to remember that this is sugarcane. The penetration of the laser can vary greatly depending on the direction from which the beam hits the densely standing stalks. Large differences between flight lines can be expected where sugarcane stands tall. We need to focus our visual quality checks on the few open areas, namely the access roads and harvested areas.

Color-mapped highest vertical difference in lowest point per 2 feet by 2 feet area between overlapping flight lines.

We use las2las via its native GUI to cut out several suspicious-looking open areas with overly red or overly blue shading. By loading the resulting image into the GUI these areas-of-interest are easy to target and cut out.

las2las -i sugarcane\1_sorted_strips\*.laz -gui

Overlaying the difference image in the GUI of las2las to cut out suspicious areas for closer inspection.

We cut out four square 100 by 100 meter tiles in open areas that show a suspiciously strong pattern of red or blue colors for closer inspection. The command lines for these four square areas are given below and you can download them here:

  1. assumption_3364350_534950_100.laz
  2. assumption_3365600_535750_100.laz
  3. assumption_3364900_535500_100.laz
  4. assumption_3365500_535600_100.laz

las2las -i sugarcane\1_sorted_strips\*.laz ^
        -merged -faf ^
        -inside_tile 3364350 534950 100 ^
        -o sugarcane\assumption_3364350_534950_100.laz

las2las -i sugarcane\1_sorted_strips\*.laz ^
        -merged -faf ^
        -inside_tile 3365600 535750 100 ^
        -o sugarcane\assumption_3365600_535750_100.laz

las2las -i sugarcane\1_sorted_strips\*.laz ^
        -merged -faf ^
        -inside_tile 3364900 535500 100 ^
        -o sugarcane\assumption_3364900_535500_100.laz

las2las -i sugarcane\1_sorted_strips\*.laz ^
        -merged -faf ^
        -inside_tile 3365500 535600 100 ^
        -o sugarcane\assumption_3365500_535600_100.laz

In the image sequence below we scrutinize these differences which will lead us to notice two things:

  1. There are vertical misalignments of around one foot. These big differences can especially be observed between flight lines that cover an area with a very high point density and those that cover the very same area with a very low point density.
  2. There are horizontal misalignments of around one foot. Again these differences seem somewhat correlated with the density with which these flight lines cover a particular area.

For the most part the misaligned points come from a flight line that has only sparse coverage in that area. In flat terrain the return density per area goes down the farther we are from the drone, as those areas are only reached with higher and higher scan angles. Hence an immediate idea that comes to mind is to limit the scan angle to those closer to nadir and lower the range from the -81 to 84 degrees reported in the lasinfo report to something like -75 to 75, -70 to 70, or -65 to 65 degrees. We can check how this will improve the alignment with these lasoverlap command lines:

lasoverlap -i sugarcane\1_sorted_strips\*.laz ^
           -files_are_flightlines ^
           -keep_scan_angle -75 75 ^
           -step 2.0 ^
           -min_diff 0.25 -max_diff 1.0 ^
           -o sugarcane\2_quality\overlap75.png

lasoverlap -i sugarcane\1_sorted_strips\*.laz ^
           -files_are_flightlines ^
           -keep_scan_angle -70 70 ^
           -step 2.0 ^
           -min_diff 0.25 -max_diff 1.0 ^
           -o sugarcane\2_quality\overlap70.png

lasoverlap -i sugarcane\1_sorted_strips\*.laz ^
           -files_are_flightlines ^
           -keep_scan_angle -65 65 ^
           -step 2.0 ^
           -min_diff 0.25 -max_diff 1.0 ^
           -o sugarcane\2_quality\overlap65.png

lasoverlap -i sugarcane\1_sorted_strips\*.laz ^
           -files_are_flightlines ^
           -keep_scan_angle -60 60 ^
           -step 2.0 ^
           -min_diff 0.25 -max_diff 1.0 ^
           -o sugarcane\2_quality\overlap60.png

This simple technique significantly improves the difference image. Based on these images we would suggest using only returns with a scan angle between -70 and 70 degrees for any subsequent analysis. This seems to remove most of the discoloring in open areas without losing too many points. Note that only using returns with a scan angle between -60 and 60 degrees means that some flight lines are no longer overlapping each other.

Note also that by limiting the scan angle we suddenly get white areas even in incredibly dense vegetation. The more horizontal a laser shot is, the more likely it will only hit higher-up sugarcane plants and the less likely it will penetrate all the way to the ground. The white areas coincide with where the laser pulses are close to nadir, which is in the flight line overlap areas that lie directly below the drone’s trajectory.

Can we improve the alignment further? Not with LAStools, so I turned to Andre Jalobeanu, a specialist on that particular issue, whom I have known for many years. Andre has developed BayesStripAlign – a software by his company BayesMap that is quite complementary to LAStools and does exactly what the name suggests: it aligns strips. I gave Andre the five flight lines and he aligned them for me. Below are the new difference images:

We cut out the very same four square areas from the realigned strips for closer inspection and you may investigate them on your own. You can download them here.

  1. assumption_3364350_534950_100_realigned.laz
  2. assumption_3365600_535750_100_realigned.laz
  3. assumption_3364900_535500_100_realigned.laz
  4. assumption_3365500_535600_100_realigned.laz

In the image sequence below we are just looking at the last of the four square areas and you can see that most of the misalignment we saw earlier between the flight lines was removed.

We would like to thank Chad Netto from Chustz Surveying for making this data set available to us and Andre Jalobeanu from BayesMap for aligning the flight lines for us.

City of Guadalajara creates first Open LiDAR Portal of Latin America

Small to medium sized LiDAR data sets can easily be published online for exploration and download with laspublish of LAStools, which is an easy-to-use wrapper around the powerful Potree open source software for which rapidlasso GmbH has been a major sponsor. During a workshop on LiDAR processing at CICESE in Ensenada, Mexico we learned that Guadalajara – the city with five “a” in its name – has recently published its LiDAR holdings online for download using an interactive 3D portal based on Potree.

There is a lot more data available in Mexico but only Guadalajara seems to have an interactive download portal with open LiDAR at the moment. Have a look at the map below to get an idea of the LiDAR holdings in the archives of the Instituto Nacional de Estadística y Geografía (INEGI). You can request this data either by filling out this form or by sending an email to atencion.usuarios@inegi.org.mx. You will need to explain the use of the information, but apparently INEGI has a fast response time. I was given the KML files you see below and told that each map sheet at scale 1:50,000 is divided into 6 regions (a-f) and each region is subdivided into 4 parts. Contact me if you want the KML files or if you can provide further clarification on this indexing scheme and/or the data license.

LiDAR available at the Instituto Nacional de Estadística y Geografía (INEGI)

But back to Guadalajara’s open LiDAR. The tile names become visible when you zoom in closer on the map with the tiling overlay, as seen below. An individual tile can easily be downloaded by first clicking it so that it becomes highlighted and then pressing the “D” button in the lower left corner. We download the two tiles called ‘F08C04.laz’ and ‘F08C05.laz’ and use lasinfo to determine that their average density is 9.0 and 8.9 last returns per square meter. This means that on average 9 laser pulses were fired at each square meter in those two tiles.

lasinfo -i F08C04.laz -cd
lasinfo -i F08C05.laz -cd

Selecting a tile on the map and pressing the “D” button will download the highlighted tile.

The minimal quality check that we recommend doing for any newly obtained LiDAR data is to verify proper alignment of the flightlines using lasoverlap. For tiles with properly populated ‘point source ID’ fields this can be done using the command line shown below.

lasoverlap -i F08C04.laz F08C05.laz ^
           -min_diff 0.1 -max_diff 0.3 ^
           -odir quality -opng ^
           -cores 2

We notice some slight misalignments in the difference image (see other tutorials such as this one for how to interpret the resulting color images). We suggest you follow the steps done there and take a closer look with lasview at some of the larger strip-like areas that exhibit systematic discoloration (compared to other areas) into overly bluish or reddish tones. Overlaying one of the resulting *_diff.png files in the GUI of LAStools makes it easy to pick a suspicious area.

We use the “pick” functionality to view only the building of interest.

Also unusual are the large red and blue areas where some of the taller buildings are. Usually those are just one pixel wide, which has to do with the laser of one flightline not being able to see the lower area seen by the laser of the other flightline because the line-of-sight is blocked by the structure. We take a closer look at one of these unusual building colorizations by picking the building shown above and viewing it with the different visualization options that are shown in the images below.

No. Those are not the “James Bond movie” kind of lasers that burn holes into the building to get ground returns through several floors. The building facade is covered with glass so that the lasers do not scatter photons when they hit the side of the building. Instead they reflect by the usual rule “incidence angle equals reflection angle” of perfectly specular surfaces and eventually hit the ground next to the building. Some of the photons travel back the same way to the receiver on the plane where they get registered as returns. The LiDAR system has no way to know that the photons did not travel the usual straight path. It only measures the time until the photons return and generates a return at the range corresponding to this time along the direction vector that this laser shot was fired at. If the specular reflection of the photons hits a truck or a tree situated next to the building, then we should find that truck or that tree – mirrored by the glossy surface of the building – on the inside of the building. If you look carefully at the “slice” through the building below you may find an example … (-:

Some objects located outside the building are mirrored into the building due to its glossy facade.
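
To make the mirroring argument concrete, here is a small numeric Python sketch of the geometry – all coordinates are made up for illustration: the scanner places the return at the measured range along the original shot direction, which for a specular bounce off a vertical glass facade is exactly the mirror image of the true hit point.

import numpy as np

# made-up geometry: the glass facade lies in the plane x = 0,
# the building interior is at x > 0, the street at x < 0
sensor = np.array([-10.0, 0.0, 20.0])   # scanner position (outside, above)
glass  = np.array([  0.0, 0.0, 10.0])   # where the shot hits the facade
normal = np.array([ -1.0, 0.0,  0.0])   # facade normal pointing outward

d = (glass - sensor) / np.linalg.norm(glass - sensor)   # shot direction
r = d - 2.0 * np.dot(d, normal) * normal                # specular reflection

# follow the reflected ray down to the ground (z = 0) where it hits a truck
t_ground = -glass[2] / r[2]
truck = glass + t_ground * r             # true hit point, outside the building

# the scanner only knows range and shot direction, so it places the return here
measured_range = np.linalg.norm(glass - sensor) + np.linalg.norm(truck - glass)
apparent = sensor + measured_range * d

print(truck)      # [-10.  0.  0.] -> the real truck next to the building
print(apparent)   # [ 10.  0.  0.] -> its mirror image "inside" the building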

Kudos to the City of Guadalajara for becoming – to my knowledge – the first city in Latin America to open its entire LiDAR holdings and also make them available for download in the form of a nice and functional interactive 3D portal.

Complete LiDAR Processing Pipeline: from raw Flightlines to final Products

This tutorial serves as an example for a complete end-to-end workflow that starts with raw LiDAR flightlines (as they may be delivered by a vendor) to final classified LiDAR tiles and derived products such as raster DTM, DSM, and SHP files with contours, building footprint and vegetation layers. The three example flightlines we are using here were flown in Ayutthaya, Thailand with a RIEGL LMS Q680i LiDAR scanner by Asian Aerospace Services who are based at the Don Mueang airport in Bangkok from where they are serving South-East-Asia and beyond. You can download them here:

Quality Checking

The minimal quality checks consist of generating textual reports (lasinfo & lasvalidate), inspecting the data visually (lasview), making sure alignment and overlap between flightlines fulfill expectations (lasoverlap), and measuring pulse density per square meter (lasgrid). Additional checks for point replication (lasduplicate), completeness of all returns per pulse (lasreturn), and validation against external ground control points (lascontrol) may also be performed.

lasinfo -i Ayutthaya\strips_raw\*.laz ^
        -cd ^
        -histo z 5 ^
        -histo intensity 64 ^
        -odir Ayutthaya\quality -odix _info -otxt ^
        -cores 3

lasvalidate -i Ayutthaya\strips_raw\*.laz ^
            -o Ayutthaya\quality\validate.xml

The lasinfo report generated with the command line shown computes the average density for each flightline and also generates two histograms, one for the z coordinate with a bin size of 5 meters and one for the intensity with a bin size of 64. The resulting textual descriptions are output into the specified quality folder with an appendix ‘_info’ added to the original file name. Perusing these reports tells you that there are up to 7 returns per pulse, that the average pulse density per flightline is between 7.1 and 7.9 shots per square meter, that the point source IDs of the points are already populated correctly, that there are isolated points far above and far below the scanned area, and that the intensity values range from 0 to 1023 with the majority being below 400. The warnings in the lasinfo and the lasvalidate reports about the presence of return numbers 6 and 7 have to do with the history of the LAS format and can safely be ignored.

lasoverlap -i Ayutthaya\strips_raw\*.laz ^
           -files_are_flightlines ^
           -min_diff 0.1 -max_diff 0.3 ^
           -odir Ayutthaya\quality -o overlap.png

This results in two color illustrations. One image shows the flightline overlap with blue indicating one flightline, turquoise indicating two, and yellow indicating three flightlines. Note that wet areas (rivers, lakes, rice paddies, …) without LiDAR returns affect this visualization. The other image shows how well overlapping flightlines align. Their vertical difference is color coded, with white meaning less than 10 cm of error, while saturated red and blue indicate areas with more than 30 cm of positive or negative difference.

One-pixel-wide red and blue lines along building edges and speckles of red and blue in vegetated areas are normal. We need to look out for large systematic errors where terrain features or flightline outlines become visible. If you click yourself through this photo album you will eventually see typical examples (make sure to read the comments too). One area slightly below the center looks suspicious. We load the PNG into the GUI to pick this area for closer inspection with lasview.

lasview -i Ayutthaya\strips_raw\*.laz -gui

Why these flightline differences exist and whether they are detrimental to your purpose are questions that you will have to explore further. For our purpose this isolated difference was noted but will not prevent us from proceeding. Next we want to investigate the pulse density. We do this with lasgrid. We know that the average last return density per flightline is between 7.1 and 7.9 shots per square meter. We set up the false color map for lasgrid such that it is blue when the last return density drops to 4 shots (or less) per square meter and such that it is red when the last return density reaches 8 shots (or more).

lasgrid -i Ayutthaya\strips_raw\*.laz -merged ^
        -keep_last ^
        -step 2 -density ^
        -false -set_min_max 4 8 ^
        -odir Ayutthaya\quality -o density_4ppm_8ppm.png

The last return density per square meter mapped to a color: blue is 4 or less, red is 8 or more.

The last return density image clearly shows how the density increases to over 10 pulses per square meter in all areas of flightline overlap. However, as there are large parts covered by only one flightline, their density is the one that should be considered. We note great variations in density along the flightlines. Those have to do with aircraft movement and in particular with the pitch. When the nose of the plane goes up even slightly, the gigantic “fan of laser pulses” (that can be thought of as being rigidly attached at the bottom perpendicular to the aircraft flight direction) moves forward faster on the ground far below and therefore covers it with fewer shots per square meter. Conversely, when the nose of the plane goes down, the spacing between scan lines far below the plane is condensed so that the density increases. Inclement weather and the resulting airplane pitch turbulence can have a big impact on how regular the laser pulse spacing is on the ground. Read this article for more on LiDAR pulse density and spacing.

LiDAR Preparation

When you have airborne LiDAR in flightlines the first step is to tile the data into square tiles that are typically 1000 by 1000 or – for higher density surveys – 500 by 500 meters in size. This can be done with lastile. We also add a buffer of 30 meters to each tile. Why buffers are important for tile-based processing is explained here. We choose 30 meters as this is larger than any building we expect in this area and slightly larger than the ‘-step’ size we use later when classifying the points into ground and non-ground points with lasground.

lastile -i Ayutthaya\strips_raw\*.laz ^
        -tile_size 500 -buffer 30 -flag_as_withheld ^
        -odir Ayutthaya\tiles_raw -o ayu.laz

NOTE: Usually you will have to add ‘-files_are_flightlines’ or ‘-apply_file_source_ID’ to the lastile command shown above in order to preserve the information which point is from which flightline. We do not have to do this here, as evident from the lasinfo reports we generated earlier. Not only is the file source ID in the LAS header correctly set to 2, 3, or 4, reflecting the intended flightline numbering evident from the file names, but the point source ID of each point is also already set to the correct value 2, 3, or 4. For more info see this or this discussion from the LAStools user forum.

Next we classify isolated points that are far from most other points with lasnoise into the (default) classification code 7. See the README file for the meaning of the parameters and play around with different settings to get a feel for how to make this process more or less aggressive.

lasnoise -i Ayutthaya\tiles_raw\ayu*.laz ^
         -step_xy 4 -step_z 2 -isolated 5 ^
         -odir Ayutthaya\tiles_denoised -olaz ^
         -cores 4

Especially for ground classification it is important that low noise points are excluded. You should inspect a number of tiles with lasview to make sure the low noise points are all pink now if you color them by classification.

lasview -i Ayutthaya\tiles_denoised\ayu*.laz -gui

While the algorithms in lasground are designed to withstand a few noise points below the ground, you will find that it will include them into the ground model if there are too many of them. Hence, it is important to tell lasground to ignore these noise points. For the other parameters used see the README file of lasground.

lasground -i Ayutthaya\tiles_denoised\ayu*.laz ^
          -ignore_class 7 ^
          -city -ultra_fine ^
          -compute_height ^
          -odir Ayutthaya\tiles_ground -olaz ^
          -cores 4

You should visually check the resulting ground classification of each tile with lasview by selecting smaller subsets (press ‘x’, draw a rectangle, press ‘x’ again, use arrow keys to walk) and then switch back and forth between a triangulation of the ground points (press ‘g’ and then press ‘t’) and a triangulation of last returns (press ‘l’ and then press ‘t’). See the README of lasview for more information on those hotkeys.

lasview -i Ayutthaya\tiles_ground\ayu*.laz -gui

This way I found at least one tile that should be reclassified with ‘-metro’ instead of ‘-city’ because it still contained one large building in the ground classification. Alternatively you can correct misclassifications manually using lasview as shown in the next few screenshots.

This is an optional step for advanced users who have a license. In case you managed to do such a manual edit and saved it as a LAY file using LASlayers (see the README file of lasview) you can overwrite the old file with:

laslayers -i Ayutthaya\tiles_ground\ayu_669500_1586500.laz -ilay ^
          -o Ayutthaya\tiles_ground\ayu_669500_1586500_edited.laz

move Ayutthaya\tiles_ground\ayu_669500_1586500_edited.laz ^
     Ayutthaya\tiles_ground\ayu_669500_1586500.laz

The next step classifies those points that are neither ground (2) nor noise (7) into building (or rather roof) points (class 6) and high vegetation points (class 5). For this we use lasclassify with its default parameters, which only considers points that are at least 2 meters above the classified ground points (see the README for details on all available parameters).

lasclassify -i Ayutthaya\tiles_ground\ayu*.laz ^
            -ignore_class 7 ^
            -odir Ayutthaya\tiles_classified -olaz ^
            -cores 4

We check the classification of each tile with lasview by selecting smaller subsets (press ‘x’, draw a rectangle, press ‘x’ again) and by traversing through the point cloud with the arrow keys. You will find a number of misclassifications. Boats are classified as buildings, water towers or complex temple roofs as vegetation, and so on. You could use lasview to manually correct any classifications that are really important.

lasview -i Ayutthaya\tiles_classified\ayu*.laz -gui

Before delivering the classified LiDAR tiles to a customer or another user, it is imperative to remove the buffers that were carried through all computations to avoid artifacts along the tile boundaries. This can also be done with lastile.

lastile -i Ayutthaya\tiles_classified\ayu*.laz ^
        -remove_buffer ^
        -odir Ayutthaya\tiles_final -olaz ^
        -cores 4

Together with the tiling you may want to deliver a tile overview file in KML format (or in SHP format) that you can easily generate with lasboundary using this command line:

lasboundary -i Ayutthaya\tiles_final\ayu*.laz ^
            -use_bb ^
            -overview -labels ^
            -o Ayutthaya\tiles_overview.kml

The small KML file generated by lasboundary provides a quick overview of where the tiles are located, their names, their bounding boxes, and the number of points they contain.
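If you prefer the overview as a Shapefile instead, our understanding is that the same call only needs a ‘.shp’ output name, since lasboundary derives the output format from the file extension:

:: sketch: identical call, only the output extension differs
lasboundary -i Ayutthaya\tiles_final\ayu*.laz ^
            -use_bb ^
            -overview -labels ^
            -o Ayutthaya\tiles_overview.shp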

Derivative production

The most common derivative product produced from LiDAR data is a Digital Terrain Model (DTM) in the form of an elevation raster. This can be obtained by interpolating the ground points with a triangulation (i.e. a Delaunay TIN) and sampling the TIN at the center of each raster cell. The pulse density of well over 4 shots per square meter definitely supports a resolution of 0.5 meters for the raster DTM. From the ground-classified tiles with buffer we compute the DTM using las2dem as follows:

las2dem -i Ayutthaya\tiles_ground\ayu*.laz ^
        -keep_class 2 ^
        -step 0.5 -use_tile_bb ^
        -odir Ayutthaya\tiles_dtm -obil ^
        -cores 4

It is important to add ‘-use_tile_bb’ to the command line, which limits the raster generation to the original tile size of 500 by 500 meters so that the buffers extending each tile by 30 meters in every direction are not rasterized. We used the BIL format so that we can inspect the resulting elevation rasters with lasview:

lasview -i Ayutthaya\tiles_dtm\ayu*.bil -gui

To create a hillshaded version of the DTM you can use your favorite raster processing package such as GDAL or GRASS, but you could also use the BLAST extension of LAStools and create a large seamless image with blast2dem as follows:

blast2dem -i Ayutthaya\tiles_dtm\ayu*.bil -merged ^
          -step 0.5 -hillshade -epsg 32647 ^
          -o Ayutthaya\dtm_hillshade.png

Because blast2dem does not parse the PRJ files that accompany the BIL rasters, we have to specify the EPSG code explicitly to also get a KML file that allows us to visualize the LiDAR in Google Earth.

A hillshading of the merged DTM rasters produced with blast2dem.
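As mentioned above, the hillshading step could also be done with GDAL instead of BLAST. A rough sketch, assuming GDAL is installed, the BIL tiles carry valid georeferencing, and with the VRT and output file names being our own choices:

:: merge the per-tile DTM rasters into one virtual raster
:: (on Windows you may need '-input_file_list' instead of the wildcard)
gdalbuildvrt Ayutthaya\dtm_merged.vrt Ayutthaya\tiles_dtm\ayu*.bil

:: compute a hillshade from the merged DTM
gdaldem hillshade Ayutthaya\dtm_merged.vrt Ayutthaya\dtm_hillshade_gdal.tif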

Next we generate a Digital Surface Model (DSM) that includes the highest objects that the laser has hit. We use the spike-free algorithm implemented in las2dem, which creates a triangulation of the highest returns as follows:

las2dem -i Ayutthaya\tiles_denoised\ayu*.laz ^
        -drop_class 7 ^
        -step 0.5 -spike_free 1.2 -use_tile_bb ^
        -odir Ayutthaya\tiles_dsm -obil ^
        -cores 4

We used 1.2 as the freeze value for the spike-free algorithm because this is about three times the average last return spacing reported in the individual lasinfo reports generated during quality checking. Again we inspect the resulting rasters with lasview:

lasview -i Ayutthaya\tiles_dsm\ayu*.bil -gui

For comparison we also generate DSM rasters using a simple first-return interpolation, again with las2dem, writing them to a separate directory (named ‘tiles_dsm_fr’ here) so that the spike-free rasters from the previous step are not overwritten:

las2dem -i Ayutthaya\tiles_denoised\ayu*.laz ^
        -drop_class 7 -keep_first ^
        -step 0.5 -use_tile_bb ^
        -odir Ayutthaya\tiles_dsm_fr -obil ^
        -cores 4

A few direct side-by-side comparisons between the spike-free DSM and the first-return DSM show differences that are especially noticeable along building edges and in large trees.
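One way to make those differences explicit is to subtract the two rasters per tile, for example with GDAL's gdal_calc.py (a sketch only; the tile and output names are illustrative and assume the per-tile rasters share extent and resolution):

:: sketch: spike-free DSM minus first-return DSM for one example tile
gdal_calc.py -A Ayutthaya\tiles_dsm\ayu_669500_1586500.bil ^
             -B Ayutthaya\tiles_dsm_fr\ayu_669500_1586500.bil ^
             --calc="A-B" --outfile=Ayutthaya\dsm_diff_669500_1586500.tif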

Another product that we can easily create is building footprints from the automatically classified roofs using lasboundary. Because the tool is quite scalable, we can simply merge the final tiles on the fly. This also avoids including duplicate points from the tile buffers, whose classifications are often less accurate.

lasboundary -i Ayutthaya\tiles_final\ayu*.laz -merged ^
            -keep_class 6 ^
            -disjoint -concavity 1.1 ^
            -o Ayutthaya\buildings.shp

Similarly we can use lasboundary to create a vegetation layer from those points that were automatically classified as high vegetation.

lasboundary -i Ayutthaya\tiles_final\ayu*.laz -merged ^
            -keep_class 5 ^
            -disjoint -concavity 3 ^
            -o Ayutthaya\vegetation.shp

We can also produce 1.0 meter contour lines from the ground-classified points. However, for nicer contours it is beneficial to first generate a subset of the ground points with lasthin using the option ‘-contours 1.0’ as follows:

lasthin -i Ayutthaya\tiles_final\ayu*.laz ^
        -keep_class 2 ^
        -step 1.0 -contours 1.0 ^
        -odir Ayutthaya\tiles_temp -olaz ^
        -cores 4

We then merge all subsets of ground points from those temporary tiles into one input on the fly (using the ‘-merged’ option) and let blast2iso from the BLAST extension of LAStools generate smoothed and simplified 1 meter contours as follows:

blast2iso -i Ayutthaya\tiles_temp\ayu*.laz -merged ^
          -iso_every 1.0 ^
          -smooth 2 -simplify_length 0.5 -simplify_area 0.5 -clean 5.0 ^
          -o Ayutthaya\contours_1m.shp

Finally we composite all of our derived LiDAR products into one map using QGIS and then fuse it with data from OpenStreetMap that we’ve downloaded from BBBike. Here you can download the OSM data that we use.

It is particularly interesting to compare the building footprints that were automatically derived by our LiDAR processing pipeline with those mapped by OpenStreetMap volunteers. We immediately see that there is a lot of volunteer work left to do, and that the LiDAR-derived data can assist in directing those mapping efforts. A closer look reveals the (expected) quality difference between hand-mapped and auto-generated data.

The OSM buildings are simpler. These polygons are drawn and divided into logical units by a human. They are individually verified and correspond to actual buildings. However, they are less well aligned with the Google Earth satellite image. The LiDAR-derived building footprints are complex because lasboundary has no logic to simplify the complicated polygonal chains that enclose the points automatically classified as roof into rectilinear shapes, or to break directly adjacent roofs into separate logical units. However, most buildings are found (along with some objects that are not buildings) and their geospatial alignment is as good as that of the LiDAR data.