Converting Rasters from Inefficient ASCII XYZ to More Compact LAZ or TIF Formats

The German state of Brandenburg has recently started to provide much of its basic geospatial data as open data, such as digital orthophotos in TIF and JPG formats, vertical and horizontal control points in gzipped XML format, LOD1 and LOD2 building models in zipped GML format, topographic maps from 1:10000 to 1:100000 in zipped TIF and PDF formats, cadastral data in zipped XML and TIF formats, as well as LiDAR-derived 1m DTM rasters and image-derived 1m DSM rasters, both in zipped XYZ ASCII format. All this data is provided under the user-friendly license called “Datenlizenz Deutschland Namensnennung 2.0“. In this article we show how to convert the 1m DTM rasters and the 1m DSM rasters from verbose XYZ ASCII to more compact LAZ or TIF rasters.

Four 2000 by 2000 meter tiles of the Brandenburg 1m DTM. 

One particularity about most official German and Austrian rasters (anywhere else?) is that they sample the elevations in the corners rather than in the center of each raster cell. Here, a one square kilometer raster tile of 1 meter resolution will have 1001 columns by 1001 rows instead of the more familiar 1000 by 1000 layout. While this corner-based representation does have some benefits, we convert these rasters into the more common area-based representation using new functionality recently added to lasgrid.

After downloading a sample DTM tile such as dgm_33250-5886.zip we find three files in the zip archive: two files with metadata and license information and the actual data file, which is a 2 km by 2 km corner-based raster tile called “dgm_33250-5886.xyz” with 2001 columns by 2001 rows. Here is how its 4,004,001 lines look:

more DGM_33250-5886.xyz
250000.0 5886000.0 15.284
250001.0 5886000.0 15.277
250002.0 5886000.0 15.273
250003.0 5886000.0 15.275
250004.0 5886000.0 15.289
250005.0 5886000.0 15.314
[...]
251994.0 5888000.0 13.565
251995.0 5888000.0 13.567
251996.0 5888000.0 13.565
251997.0 5888000.0 13.565
251998.0 5888000.0 13.564
251999.0 5888000.0 13.564
252000.0 5888000.0 13.565

The first step is to convert these XYZ rasters to LAZ format. We do this with txt2las as shown below. In case the vertical datum is the “Deutsches Haupthoehennetz 2016” we should also add ‘-vertical_dhhn2016’, but we are not sure about this at the moment:

txt2las -i dgm\*.xyz ^
        -set_scale 1.0 1.0 0.001 ^
        -epsg 25833 ^
        -odir temp -olaz ^
        -cores 4

For 84 files this reduces the size by a factor of 31 or compresses it down to 3.2 percent of the original, namely from 8.45 GB for raw XYZ to 277 MB for LAZ. So far we have really just converted a list of x, y and z coordinates from verbose ASCII to more compact LAZ. We can easily go back to ASCII with las2txt whenever needed:

las2txt -i temp\*.laz ^
        -odir ascii -otxt ^
        -cores 4

Next we use lasgrid to convert from a corner-based raster to an area-based raster using the new option ‘-subsquare 0.25’ which replaces each input point by four points that are displaced by all four combinations of +/- 0.25 in x and y. We then average the exactly four points that fall into each relevant raster cell with option ‘-average’ and clip the output to the meaningful 2000 columns by 2000 rows with ‘-use_tile_size 2000’. You need to get the most recent version of LAStools to have these options.

lasgrid -i temp\*.laz ^
        -subsquare 0.25 ^
        -step 1 -average ^
        -use_tile_size 2000 ^
        -odir dgm -olaz ^
        -cores 4

Instead of RasterLAZ you can also choose the TIF, BIL, IMG, or ASC format here. The final results are standard 1 meter elevation products with 2000 columns by 2000 rows, with each averaged elevation sample being associated with the center of its raster cell. The lasinfo report for a sample tile is shown at the end of this article.
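
If you want to verify what this corner-to-area conversion computes, the operation boils down to averaging the four corner samples of every cell. Below is a minimal NumPy sketch (our own illustration, not LAStools code) that performs the equivalent computation directly on a corner-based elevation grid:

import numpy as np

def corners_to_cells(corner_grid):
    # A corner-based raster with (n+1) x (n+1) samples on the cell corners
    # becomes an area-based raster with n x n samples on the cell centers
    # by averaging the four corner elevations of every cell. This mimics
    # replacing each corner sample by four displaced copies and averaging
    # the exactly four points that fall into each cell, as lasgrid does above.
    g = np.asarray(corner_grid, dtype=np.float64)
    return 0.25 * (g[:-1, :-1] + g[:-1, 1:] + g[1:, :-1] + g[1:, 1:])

# toy example: a 2001 by 2001 corner-based tile becomes 2000 by 2000 cells
corners = np.random.rand(2001, 2001)
cells = corners_to_cells(corners)
print(corners.shape, "->", cells.shape)   # (2001, 2001) -> (2000, 2000)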

You may proceed to optimize the RasterLAZ for area-of-interest queries by reordering the raster into a space-filling curve with lassort or lasoptimize and by computing a spatial index. You may also classify the RasterLAZ elevation samples, for example, into building, high, medium, and low vegetation, ground, and other common classifications with lasclip or lascolor. You may also add RGB or intensity values to the RasterLAZ elevation samples using the orthophotos that are also available as open data with lascolor. These are some of the benefits of RasterLAZ beyond efficient storage and access.

We would like to acknowledge the LGB (Landesvermessung und Geobasisinformation Brandenburg) for providing state-wide coverage of their geospatial data holdings as easily downloadable open data with the user-friendly “Datenlizenz Deutschland Namensnennung 2.0“ license. But we would also like to ask them to please add the raw LiDAR point clouds to the open data portal. The storage savings in going from ASCII XYZ to LAZ for the DTM and DSM rasters should free enough space to host the LiDAR … (-;

lasinfo (200112) report for 'dgm_33\DGM_33250-5886.laz'
reporting all LAS header entries:
  file signature:             'LASF'
  file source ID:             0
  global_encoding:            0
  project ID GUID data 1-4:   00000000-0000-0000-0000-000000000000
  version major.minor:        1.2
  system identifier:          'raster compressed as LAZ points'
  generating software:        'LAStools (c) by rapidlasso GmbH'
  file creation day/year:     13/20
  header size:                227
  offset to point data:       455
  number var. length records: 2
  point data format:          0
  point data record length:   20
  number of point records:    4000000
  number of points by return: 4000000 0 0 0 0
  scale factor x y z:         0.5 0.5 0.001
  offset x y z:               200000 5800000 0
  min x y z:                  250000.5 5886000.5 13.419
  max x y z:                  251999.5 5887999.5 33.848
variable length header record 1 of 2:
  reserved             0
  user ID              'Raster LAZ'
  record ID            7113
  length after header  80
  description          'by LAStools of rapidlasso GmbH'
    ncols   2000
    nrows   2000
    llx   250000
    lly   5886000
    stepx    1
    stepy    1
    sigmaxy <not set>
variable length header record 2 of 2:
  reserved             0
  user ID              'LASF_Projection'
  record ID            34735
  length after header  40
  description          'by LAStools of rapidlasso GmbH'
    GeoKeyDirectoryTag version 1.1.0 number of keys 4
      key 1024 tiff_tag_location 0 count 1 value_offset 1 - GTModelTypeGeoKey: ModelTypeProjected
      key 3072 tiff_tag_location 0 count 1 value_offset 25833 - ProjectedCSTypeGeoKey: ETRS89 / UTM 33N
      key 3076 tiff_tag_location 0 count 1 value_offset 9001 - ProjLinearUnitsGeoKey: Linear_Meter
      key 4099 tiff_tag_location 0 count 1 value_offset 9001 - VerticalUnitsGeoKey: Linear_Meter
LASzip compression (version 3.4r3 c2 50000): POINT10 2
reporting minimum and maximum for all LAS point record entries ...
  X              100001     103999
  Y              172001     175999
  Z               13419      33848
  intensity           0          0
  return_number       1          1
  number_of_returns   1          1
  edge_of_flight_line 0          0
  scan_direction_flag 0          0
  classification      0          0
  scan_angle_rank     0          0
  user_data           0          0
  point_source_ID     0          0
number of first returns:        4000000
number of intermediate returns: 0
number of last returns:         4000000
number of single returns:       4000000
overview over number of returns of given pulse: 4000000 0 0 0 0 0 0
histogram of classification of points:
         4000000  never classified (0)

LASmoons: Volga Lipwoni

Volga Lipwoni (recipient of three LASmoons)
Department of Geography, School of Earth and Environment
University of Canterbury, NEW ZEALAND

Background:
Structure from motion (SfM) photogrammetry has emerged as an effective tool to accurately extract three-dimensional (3D) structures from a series of overlapping two-dimensional (2D) unmanned aerial vehicle (UAV) images. The bid to move away from the current labour-intensive and time-consuming forestry inventory practices has generated a lot of interest in understanding the use of SfM photogrammetry to derive forest metrics (Iglhaut et al., 2019). There is a range of commercial, free, and open source SfM photogrammetric software packages that can be used to process UAV images into 3D point clouds. Selection of the most appropriate package has become an important issue for most projects (Turner, Lucieer, & Wallace, 2013). A comparison of software performance in terms of accuracy, processing times, and related costs would help foresters in deciding the best tool for the job.

Typical point cloud derived with SfM software from UAV imagery.

Goal:
The study will generate 3D point clouds of images of a young forest trial and LAStools will be used to derive canopy height models (CHM) for computing tree heights. Tree heights from LiDAR data will serve as a baseline for accuracy assessment of heights derived from the point clouds.

Data:
+ 422 UAV images processed into 3D point clouds using ten (10) different commercial and open source SfM software packages

LAStools processing:
1) tile large point cloud into tiles with buffer [lastile]
2) remove noise points [lasthin, lasnoise]
3) classify points into ground and non-ground [lasground]
4) create Digital Terrain Models and Digital Surface Models [lasthin, las2dem]
5) produce Canopy Height Models for computing tree heights [lasheight, las2dem]

References:
Iglhaut, J., Cabo, C., Puliti, S., Piermattei, L., O’Connor, J., & Rosette, J. (2019). Structure from motion photogrammetry in forestry: A review. Current Forestry Reports, 5(3), 155-168. doi:10.1007/s40725-019-00094-3
Turner, D., Lucieer, A., & Wallace, L. (2013). Direct georeferencing of ultrahigh-resolution UAV imagery. IEEE Transactions on Geoscience and Remote Sensing, 52(5), 2738-2745. doi:10.1109/TGRS.2013.2265295

Removing Noise from Single Photon LiDAR to Generate a Smooth DTM

A while back we had a first look at the Single Photon LiDAR from Leica’s SPL100 sensor (that eventually turned out just to be an SPL99 because one beamlet or one receiver in the 10 by 10 array was broken and did not produce any returns). Today we are taking a closer look at a strategy to remove the excessive noise in the raw Single Photon LiDAR data from a “proper” SPL100 sensor (where all of the 100 beamlets are firing) that was flown in 2017 in Navarra, Spain.

Profile through original points on top of generated DTM.

The data was provided as open data by the cartography section of Navarra’s Government and is available via a simple download FTP portal. We describe the LAStools processing steps that were used to eliminate the excessive noise and to generate a smooth DTM. In the following we are using the originally released version of the data, which we obtained shortly after the portal went online and which seems to be a bit more “raw” than the files currently available. A standard quality check with lasinfo was done with:

lasinfo -i 0_raw\*.laz ^
        -cd ^
        -histo intensity 1 ^
        -histo user_data 1 ^
        -histo point_source 1 ^
        -histo gps_time 10 ^
        -odir 1_quality -odix _info -otxt

Upon inspecting the lasinfo report we suggest a few changes in how to store this Single Photon LiDAR data for more efficient hosting via an online portal. We perform these changes here before starting the actual processing. First we use the las2las call shown below to fix an error in the global encoding bits, remove an irrelevant VLR, re-scale the coordinates from millimeters to centimeters, re-offset the coordinates to nice numbers, and – what is by far the most crucial change for better compression – remap the beamlet ID stored in the ‘user data’ field as described in an earlier article.

las2las -i 0_raw\*.laz ^
        -rescale 0.01 0.01 0.01 ^
        -auto_reoffset ^
        -set_global_encoding_gps_bit 1 ^
        -remove_vlr 1 ^
        -map_user_data beamlet_ID_map.txt ^
        -odir 2_fix_rescale_reoffset_remap -olaz ^
        -cores 3

Then we use two lassort calls, one to maximize compression and one to improve spatial coherence. One lassort call rearranges the points in increasing order first based on the GPS time stamps, then breaks ties based on the user data field (that stores the beamlet ID), and finally stores the returns of every beamlet ordered by return number. We also add spatial reference information in this step. The other lassort call rearranges the points into a spatially coherent layout. It uses a Z-order sort with the granularity of 50 meter by 50 meter buckets of points. Within each bucket the point order from the prior sort is kept.

lassort -i 2_fix_rescale_reoffset_remap\*.laz ^
        -epsg 25830 ^
        -gps_time ^
        -user_data ^
        -return_number ^
        -odir 2_maximum_compression -olaz ^
        -cores 3

lassort -i 2_maximum_compression\*.laz ^
        -bucket_size 50 ^
        -odir 2_spatial_coherence -olaz ^
        -cores 3
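
To illustrate what a bucket-wise Z-order sort does, here is a small Python sketch that computes Morton codes for 50 meter by 50 meter buckets and orders the points accordingly while keeping the prior order within each bucket. It only demonstrates the concept and makes no claim about how lassort implements its ordering internally:

import numpy as np

def interleave_bits(a, b, bits=16):
    # interleave the lower 'bits' bits of two integer arrays into Morton codes
    code = np.zeros_like(a, dtype=np.int64)
    for i in range(bits):
        code |= ((a >> i) & 1) << (2 * i)
        code |= ((b >> i) & 1) << (2 * i + 1)
    return code

def zorder_bucket_order(x, y, bucket_size=50.0):
    # index buckets of bucket_size by bucket_size meters, order the buckets
    # along a Z-order curve, and keep the existing point order per bucket
    bx = np.floor((x - x.min()) / bucket_size).astype(np.int64)
    by = np.floor((y - y.min()) / bucket_size).astype(np.int64)
    return np.argsort(interleave_bits(bx, by), kind="stable")

# usage on synthetic points
x = np.random.uniform(0, 1000, 100000)
y = np.random.uniform(0, 1000, 100000)
order = zorder_bucket_order(x, y, bucket_size=50.0)
x, y = x[order], y[order]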

The resulting nine optimized tiles are around 200 MB each and can be downloaded as one file here or as individual tiles here.

Now we start the usual processing workflow by tiling the data with lastile into smaller 500 meter by 500 meter tiles with a 25 meter buffer. We also set the pre-existing point classification in the data to zero as we will compute our own later.

lastile -i 2_spatial_coherence\*.laz ^
        -set_classification 0 ^
        -tile_size 500 -buffer 25 -flag_as_withheld ^
        -odir 3_buffered -o yecora.laz

We notice that a large amount of the noise has intensity values below 1000. We are still a bit puzzled where those intensity values come from and what exactly they mean in a Single Photon LiDAR system. But it works. We run las2las with a “filtered transform” to set the classification of all points whose intensity value is below 1000 to classification code 7 (aka “noise”).

las2las -i 3_buffered\*.laz ^
        -keep_intensity_below 1000 ^
        -filtered_transform ^
        -set_classification 7 ^
        -odir 4_intensity_denoised -olaz ^
        -cores 3
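
The same “filtered transform” can be expressed in a few lines of Python. The sketch below uses the laspy package (an assumption on our side – laspy 2.x with a LAZ backend installed – not how las2las works internally) and a hypothetical tile name, purely to illustrate the logic:

import numpy as np
import laspy  # assumption: pip install "laspy[lazrs]" for LAZ support

las = laspy.read("3_buffered/tile.laz")   # hypothetical input tile

# filter: all points with intensity below 1000 ...
mask = las.intensity < 1000

# ... transform: give them classification code 7 ("noise")
classification = np.array(las.classification)
classification[mask] = 7
las.classification = classification

las.write("4_intensity_denoised/tile.laz")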

We then ignore this “easy-to-identify” noise and go after the remaining noise with lasnoise by ignoring classification code 7 and setting the newly identified noise to classification code 9 – not because it is “water” (the usual meaning of class 9) but because these points are drawn with a distinct blue color when checking the result with lasview.

lasnoise -i 4_intensity_denoised\*.laz ^
         -ignore_class 7 ^
         -step_xy 1.0 -step_z 0.2 ^
         -isolated 5 ^
         -classify_as 9 ^
         -odir 4_isolation_denoised -olaz ^
         -cores 3
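
Conceptually, this kind of isolation-based denoising counts how many points fall into a small 3D neighborhood of grid cells and flags points whose neighborhood is nearly empty. The NumPy sketch below shows that idea with a 3 by 3 by 3 cell neighborhood; the exact neighborhood and counting scheme of lasnoise may differ, so treat it as an illustration only:

import numpy as np
from collections import Counter

def isolation_mask(x, y, z, step_xy=1.0, step_z=0.2, isolated=5):
    # flag points whose 3x3x3 neighborhood of cells (each step_xy wide in x/y
    # and step_z tall in z) contains 'isolated' or fewer points in total
    ix = np.floor(x / step_xy).astype(np.int64).tolist()
    iy = np.floor(y / step_xy).astype(np.int64).tolist()
    iz = np.floor(z / step_z).astype(np.int64).tolist()
    keys = list(zip(ix, iy, iz))
    counts = Counter(keys)
    flagged = np.zeros(len(keys), dtype=bool)
    for n, (cx, cy, cz) in enumerate(keys):
        neighborhood = sum(counts.get((cx + dx, cy + dy, cz + dz), 0)
                           for dx in (-1, 0, 1)
                           for dy in (-1, 0, 1)
                           for dz in (-1, 0, 1))
        flagged[n] = neighborhood <= isolated
    return flagged

# points flagged by isolation_mask(...) would then receive classification code 9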

Of the surviving non-noise points we then use lasthin to reclassify the point closest to the 20th elevation percentile per 50 cm by 50 cm area with classification code 8 (for all areas that have more than 5 non-noise points per 50 cm by 50 cm area). We repeat the same for every 1 meter by 1 meter area.

lasthin -i 4_isolation_denoised\*.laz ^
        -ignore_class 7 9 ^
        -step 0.5 -percentile 20 5 ^
        -classify_as 8 ^
        -odir 5_thinned_p20_050cm -olaz ^
        -cores 3

lasthin -i 5_thinned_p20_050cm\*.laz ^
        -ignore_class 7 9 ^
        -step 1.0 -percentile 20 5 ^
        -classify_as 8 ^
        -odir 5_thinned_p20_100cm -olaz ^
        -cores 3
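
The percentile selection used above can be sketched in a few lines of NumPy: per grid cell, if enough points are present, mark the point whose elevation is closest to the requested percentile. This is only our illustration of the logic behind ‘-step 0.5 -percentile 20 5’, not the lasthin source code:

import numpy as np
from collections import defaultdict

def mark_percentile_points(x, y, z, step=0.5, percentile=20, minimum=5):
    # returns a mask marking, per step x step cell, the point whose elevation is
    # closest to the given percentile, for cells with more than 'minimum' points
    cells = defaultdict(list)
    cx = np.floor(x / step).astype(np.int64)
    cy = np.floor(y / step).astype(np.int64)
    for i, key in enumerate(zip(cx.tolist(), cy.tolist())):
        cells[key].append(i)
    mask = np.zeros(len(z), dtype=bool)
    for indices in cells.values():
        if len(indices) > minimum:
            zs = z[indices]
            target = np.percentile(zs, percentile)
            mask[indices[int(np.argmin(np.abs(zs - target)))]] = True
    return mask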

We then perform a more aggressive second noise removal step with lasnoise using only those points with classification code 8, namely those non-noise points that were closest to the 20th elevation percentile in either a 50 cm by 50 cm cell or a 1 meter by 1 meter cell. This can be done by ignoring classification codes 0, 7, and 9. We mark those noise points as 6 so they appear orange in the point cloud with lasview.

lasnoise -i 5_thinned_p20_100cm\*.laz ^
         -ignore_class 0 7 9 ^
         -step_xy 2.0 -step_z 0.2 ^
         -isolated 1 ^
         -classify_as 6 ^
         -odir 5_thinned_p20_100cm_denoised -olaz ^
         -cores 3

The 20th elevation percentile points that survive the last noise removal are then classified into ground (2) and non-ground (1) points with lasground_new by ignoring all other points, namely those with classification codes 0, 6, 7, and 9.

lasground_new -i 5_thinned_p20_100cm_denoised\*.laz ^
              -ignore_class 0 6 7 9 ^
              -town ^
              -odir 5_tiles_ground_050cm -olaz ^
              -cores 3

The images below illustrate the steps we took. They also show that not all data was used and might give you ideas where to tweak our workflow for even better results.

Next we rasterize the ground points into 1 meter Digital Terrain Model (DTM) rasters with las2dem and store the result (without buffers) in the RasterLAZ format.

las2dem -i 5_tiles_ground_050cm\*.laz ^
        -keep_class 2 ^
        -step 1.0 ^
        -use_tile_bb ^
        -odir 6_tiles_dtm_100cm -olaz ^
        -cores 3

Finally we merge all RasterLAZ tiles into one and compute the final hillshaded DTM with blast2dem.

blast2dem -i 6_tiles_dtm_100cm\*.laz -merged ^
          -step 1.0 ^
          -hillshade ^
          -o yecora_dtm_100cm.png
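
If you want to reproduce the hillshading outside of LAStools, a standard slope-and-aspect hillshade can be computed from any DTM raster with a few lines of NumPy. The sketch below uses the usual conventions (sun azimuth 315 degrees, altitude 45 degrees) and is not meant to match the blast2dem output bit for bit:

import numpy as np

def hillshade(dtm, cell_size=1.0, azimuth_deg=315.0, altitude_deg=45.0):
    # returns a hillshade image (0..255) for a 2D elevation array using the
    # common slope/aspect formulation; orientation conventions may differ
    # slightly from what blast2dem uses internally
    az = np.radians(360.0 - azimuth_deg + 90.0)
    alt = np.radians(altitude_deg)
    dzdy, dzdx = np.gradient(dtm, cell_size)
    slope = np.pi / 2.0 - np.arctan(np.hypot(dzdx, dzdy))
    aspect = np.arctan2(dzdy, -dzdx)
    shaded = (np.sin(alt) * np.sin(slope) +
              np.cos(alt) * np.cos(slope) * np.cos(az - aspect))
    return (255.0 * np.clip(shaded, 0.0, 1.0)).astype(np.uint8)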

The hillshaded DTM that is the result of the entire sequence of processing steps described above is shown below.

DTM from ground classification created with LAStools

For comparison we generate the same DTM using the originally provided classification. According to the README file the original ground points are classified with code 22 in areas of flight line overlap and with the usual code 2 elsewhere. Hence we must use both classification codes to construct the DTM. We do this analogously to the earlier processing steps with the three LAStools commands lastile, las2dem, and blast2dem below.

lastile -i 2_spatial_coherence\*.laz ^
        -tile_size 500 -buffer 25 -flag_as_withheld ^
        -odir 3_tiles_buffered_orig -o yecora.laz

las2dem -i 3_tiles_buffered_orig\*.laz ^
        -keep_class 2 22 ^
        -step 1.0 ^
        -use_tile_bb ^
        -odir 6_tiles_dtm_100cm_orig -olaz ^
        -cores 3

blast2dem -i 6_tiles_dtm_100cm_orig\*.laz -merged ^
          -step 1.0 ^
          -hillshade ^
          -o yecora_dtm_100cm_orig.png

Below is the hillshaded DTM generated from the ground classification that was provided with the LiDAR when it was originally released as open data.

DTM from ground classification of originally released data.

In the meantime Navarra’s SPL data have been updated with a newer version in the open data portal. The new version of the data contains a much better ground classification that might have been improved manually, as the new files now have the string ‘cam’ instead of ‘ca’ in the file name, which probably means ‘classified automatically and manually’ instead of the original ‘classified automatically’. We decided not to switch to the new data release as it seemed less “raw” than the original release. For example, there are suddenly points with GPS times, return counts, and return numbers of zero in the files that seem synthetic. But we also computed the hillshaded DTM for the new release, which is shown below.

DTM from ground classification of newly released data.

We thank the cartography section of Navarra’s Government for providing their LiDAR as open data. This not only allows re-purposing expensive data paid for by public taxes but also generates additional value, encourages citizen science, and provides educational opportunity and insights such as this blog article.

Another German State Goes Open LiDAR: Saxony

Finally some really good news out of Saxony. 😊 After North Rhine-Westphalia and Thuringia released the first significant amounts of open geospatial data in Germany in a one-two punch in January 2017, we now have a third German state opening their entire tax-payer-funded geospatial data holdings to the tax-paying public via a simple and very easy-to-use online download portal. Welcome to the open data party, Saxony!!!

Currently available via the online portal are the LiDAR-derived raster Digital Terrain Model (DTM) at 1 meter resolution (DGM 1m) for everything flown since 2015 and at 2 meter resolution (DGM 2m) or 20 meter resolution (DGM 20m) for the entire state. The horizontal coordinates use UTM zone 33 with ETRS89 (aka EPSG code 25833) and the vertical coordinate uses the “Deutsches Haupthöhennetz 2016” or “DHHN2016” (aka EPSG code 7837). Also available are orthophotos at 20 cm (!!!) resolution (DOP 20cm).

Overview of current LiDAR holdings. Areas flown 2015 or later have LAS files and 1 meter rasters. Others have LiDAR as ASCII files and lower resolution rasters.

Offline – by ordering through either this online form or that online form – you can also get the 5 meter DTM and the 10 meter DTM, the raw LiDAR point clouds, LiDAR intensity rasters, hill-shaded DTM rasters, as well as the 1 meter and the 2 meter Digital Surface Model (DSM) for a small administrative fee that ranges between 25 EUR and 500 EUR depending on the effort involved.

Our immediate thought is to get a copy of the entire raw LiDAR point clouds (available as LAS 1.2 files for all data acquired since 2015 and as ASCII text for earlier acquisitions) and find some portal willing to host this data online. We are already in contact with the land survey of Saxony to discuss this option and/or alternate plans.

Let’s have a look at the data. First we download four 2 km by 2 km tiles of the 1 meter DTM raster, which are provided as simple XYZ text, for an area surrounding the so-called “Greifensteine” using the interactive map of the download portal. Here is a look at the contents of one of these tiles:

more Greifensteine\333525612_dgm1.xyz
352000 5613999 636.26
352001 5613999 636.27
352002 5613999 636.28
352003 5613999 636.27
352004 5613999 636.24
[...]

Note that the elevations are not sampled in the center of every 1 meter by 1 meter cell but exactly on the full meter coordinate pair, which seems especially common in German-speaking countries. Using txt2las we convert these XYZ rasters to LAZ format and add geo-referencing information for more efficient subsequent processing.

txt2las -i greifensteine\333*_dgm1.xyz ^
        -set_scale 1 1 0.01 ^
        -epsg 25833 ^
        -olaz

Below you see that going from XYZ to LAZ reduces the amount of data from 366 MB to 10.4 MB, meaning that the data on disk becomes over 35 times smaller. The ability of LASzip to compress elevation rasters was first noted during the search for the missing airliner MH370 and resulted in our new LAZ-based compressor for height grids called DEMzip. The resulting LAZ files now also include geo-referencing information.

96,000,000 333525610_dgm1.xyz
96,000,000 333525612_dgm1.xyz
96,000,000 333545610_dgm1.xyz
96,000,000 333545612_dgm1.xyz
384,000,000 bytes

2,684,820 333525610_dgm1.laz
2,590,516 333525612_dgm1.laz
2,853,851 333545610_dgm1.laz
2,795,430 333545612_dgm1.laz
10,924,617 bytes

Using blast2dem we then create a hill-shaded version of the 1 meter DTM in order to overlay a visual representation of the DTM onto Google Earth.

blast2dem -i greifensteine\333*_dgm1.laz ^
          -merged ^
          -step 1 ^
          -hillshade ^
          -o greifensteine.png

Below is the result, which nicely shows how the penetrating laser of the LiDAR allows us to strip away the forest to see interesting geological features in the bare-earth terrain.

In a second exercise we use the available RGB orthophoto images to color one of the DTM tiles and explore it using lasview. For this we download the image for the top left of the four tiles that covers the area containing the “Greifensteine” from the interactive download portal for orthophotos. As the resolution of the TIF image is 20 cm and that of the DTM is only 1 meter, we first down-sample the TIF using gdalwarp of GDAL.

gdalwarp -tr 1 1 ^
         -r cubic ^
         greifensteine\dop20c_33352_5612.tif ^
         greifensteine\dop1m_33352_5612.tif

If you are not yet using GDAL, today is a good day to start. It nicely complements the point cloud processing functionality of LAStools for raster inputs. Next we use lascolor to give each elevation pixel of the DTM stored in LAZ format its corresponding color from the orthophoto.

lascolor -i greifensteine\333525612_dgm1.laz ^
         -image greifensteine\dop1m_33352_5612.tif ^
         -odix _rgb -olaz
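
If you prefer to do this coloring step in Python, the logic amounts to sampling the down-sampled orthophoto at every raster point’s x/y position and storing the result in the RGB fields. The sketch below uses the laspy and rasterio packages – both our own choice of tools, not what lascolor uses internally – and is slow but illustrates the idea:

import numpy as np
import laspy     # assumption: pip install "laspy[lazrs]"
import rasterio  # assumption: pip install rasterio

# convert to a point format that has RGB fields (the DTM was stored as format 0)
las = laspy.convert(laspy.read("greifensteine/333525612_dgm1.laz"),
                    point_format_id=2)

with rasterio.open("greifensteine/dop1m_33352_5612.tif") as ortho:
    # sample the image bands at every point's x/y coordinate
    coords = zip(np.asarray(las.x), np.asarray(las.y))
    samples = np.array(list(ortho.sample(coords)), dtype=np.float64)

# LAS stores 16-bit colors, so scale the 8-bit image values up
las.red, las.green, las.blue = (samples[:, :3] * 257).astype(np.uint16).T
las.write("greifensteine/333525612_dgm1_rgb.laz")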

Now we can view the colored DTM in LAZ format interactively with lasview or any other LiDAR viewing software and turn on the RGB colors from the orthophoto as needed to understand the scene.

lasview -i greifensteine\333525612_dgm1_rgb.laz

We thank the “Staatsbetrieb Geobasisinformation und Vermessung Sachsen (GeoSN)” for giving us easy access to the 1 meter DTM and the 20 cm orthophoto that we have used in this article through their new open geodata portal as open data under the user-friendly license “Datenlizenz Deutschland – Namensnennung – Version 2.0”.

Removing Low Noise in LiDAR Points with Median Ground Surface

Recently a user of LAStools asked a question in our user forum about how to classify LiDAR data that contains lots of low noise. A sample screen shot of the user’s failed attempt to correctly classify the noise using lasnoise and the ground with lasground is shown below: red points are noise, brown points are ground, and grey points are unclassified. In this article we show how to remove this low noise using a temporary ground surface that we construct from a subset of points at a certain elevation percentile. You can follow along by downloading the data and the sequence of command lines used.

Example of misclassified low noise points: ground points (brown) below ground.

Download the LiDAR data set that was apparently flown with a RIEGL “crossfire” Q1560. You can also download the command line sequence here. We first run lasinfo with option ‘-compute_density’ (or ‘-cd’ for short) to get a rough idea about the last return density, which is quite high with an average of over 31 last returns per square meter. We then use lasthin to classify one last return per square meter with the temporary classification code 8, namely the one whose elevation is closest to the 20th percentile per 1 meter by 1 meter grid cell. We then repeat this command line for the 30th, 40th, and 50th percentiles, modifying the command line accordingly. You must use this version of lasthin that will be part of a future LAStools release, as the options ‘-ignore_first_of_many’ and ‘-ignore_intermediate’ were just added this weekend.

lasthin -i crossfire.laz ^
        -ignore_first_of_many -ignore_intermediate ^
        -step 1 ^
        -percentile 20 15 ^
        -classify_as 8 ^
        -odix _p20 -olaz

Below you see the resulting subset of points marked with the temporary classification code 8 for the four different percentiles 20th, 30th, 40th, and 50th triangulated into a surface and hill-shaded.

Next we reclassify only those points marked with the temporary classification code 8 into ground (2) and unclassified (1) points using lasground by ignoring all points that still have the original classification code 0.

lasground -i crossfire_p20.laz ^
          -ignore_class 0 ^
          -wilderness ^
          -odix g -olaz

Below you see the resulting ground points computed from the subsets of points at four different percentiles 20th, 30th, 40th, and 50th triangulated into a surface and hill-shaded.

Both the ground classifications of the 40th and the 50th percentile look reasonable. Only a few down spikes remain in the 40th percentile surface and a few additional bumps appear in the 50th percentile surface. Next we use lasheight with those two reasonable-looking ground surfaces to classify all points that are more than 20 centimeters below the triangulated ground surface into the noise classification code 7.

lasheight -i crossfire_p40g.laz ^
          -classify_below -0.2 7 ^
          -do_not_store_in_user_data ^
          -odix h -olaz
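
The operation that lasheight performs here – triangulate the temporary ground points into a TIN and compare every point’s elevation against the interpolated surface – can be prototyped with SciPy as follows. This is our own sketch assuming simple linear TIN interpolation, not the actual lasheight implementation:

import numpy as np
from scipy.interpolate import LinearNDInterpolator

def classify_below_tin(x, y, z, classification,
                       ground_class=2, threshold=-0.2, noise_class=7):
    # triangulate the ground points, interpolate the TIN at every point's x/y,
    # and mark points more than |threshold| below the surface as noise
    ground = classification == ground_class
    tin = LinearNDInterpolator(np.column_stack((x[ground], y[ground])), z[ground])
    height_above_ground = z - tin(x, y)
    below = np.nan_to_num(height_above_ground, nan=0.0) < threshold
    result = classification.copy()
    result[below] = noise_class
    return result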

Now that the low noise points have been removed (or rather classified as noise), we start the actual ground classification process. In this example we want to create a 50 cm DTM, hence it is more than sufficient to find one ground point per 25 cm cell. Therefore we first move the lowest non-noise last return per 25 cm cell to the temporary classification code 8.

Side note: One might also consider modifying the following workflow to run the ground classification on more than just the last returns by omitting ‘-ignore_first_of_many’ and ‘-ignore_intermediate’ from the lasthin call and by adding ‘-all_returns’ to the lasground call. Why? Because for all laser shots that resulted in a low noise point, this noise point will usually be the last return, so that the true ground hit could be the second-to-last return.

lasthin -i crossfire_p40gh.laz ^
        -ignore_first_of_many -ignore_intermediate ^
        -ignore_class 7 ^
        -step 0.25 ^
        -lowest ^
        -classify_as 8 ^
        -odix _low25 -olaz

The final ground classification is obtained by running lasground only on the points with temporary classification code 8 by ignoring all others, namely the noise points (7) and the unclassified points (0 and 1).

lasground -i crossfire_p40gh_low25.laz ^
          -ignore_class 0 1 7 ^
          -wilderness ^
          -odix g -olaz

We then use las2dem to create the 50 cm DTM from the points classified as ground. We store this DTM raster in the LAZ format, which has proven to be the most efficient format for storing elevation or height rasters. We have started calling this format RasterLAZ. It is supported by all LAStools and the new DEMzip tool. One advantage is that we can feed RasterLAZ directly back into LAStools, for example as done below, for a second call to las2dem that computes a hill-shaded DTM.

las2dem -i crossfire_p40gh_low25g.laz ^
        -keep_class 2 ^
        -step 0.5 ^
        -ocut 9 -odix _dtm50 -olaz

las2dem -i crossfire_p40_dtm50.laz ^
        -step 0.5 ^
        -hillshade ^
        -odix _hill -opng

Below are the resulting hill-shaded DTMs computed for the 40th and the 50th elevation percentiles – as well as for the 45th elevation percentile that we have added for comparison.

Below we finally take a closer look at an example 1 meter profile line through the LiDAR classified by the 45th percentile workflow. There is a small stretch of ground points that was incorrectly classified as noise points (find the mouse cursor) so it might be worthwhile to change parameters slightly to make the noise classification less aggressive.

Side note follow-up: The return coloring shows there are indeed some ‘intermediate’ as well as some ‘first of many’ returns just where we expect the bare terrain to be. However, there are not so many that the results can be expected to change drastically by including them in the ground finding process.

Removing Excessive Low Noise from Dense-Matching Point Clouds

Point clouds produced with dense-matching by photogrammetry software such as SURE, Pix4D, or Photoscan can include a fair amount of the kind of “low noise” seen below. Low noise causes trouble when attempting to construct a Digital Terrain Model (DTM) from the points, as common algorithms for classifying points into ground and non-ground points – such as lasground – tend to “latch onto” those low points, thereby producing a poor representation of the terrain. This blog post describes one possible LAStools workflow for eliminating excessive low noise. It was developed after a question in the LAStools user forum by LASmoons holder Muriel Lavy, who was able to share her noisy data with us. See this, this, this, this, this, and this blog post for further reading on this topic.

Here you can download the dense matching point cloud that we are using in the following workflow.

We leave the usual inspection of the content with lasinfo, lasview, and lasvalidate that we always recommend on newly obtained data as an exercise to the reader. Note that a check for proper alignment of flightlines with lasoverlap, which we consider mandatory for LiDAR data, is not applicable for dense-matching points.

With lastile we turn the original file with 87,261,083 points into many smaller 500 by 500 meter tiles for efficient multi-core processing. Each tile is given a 25 meter buffer to avoid edge artifacts. The buffer points are marked as withheld for easier on-the-fly removal. We add a (terser) description of the WGS84 UTM zone 32N to each tile via the corresponding EPSG code 32632:

lastile -i muriel\20161127_Pancalieri_UTM.laz ^
        -tile_size 500 -buffer 25 -flag_as_withheld ^
        -epsg 32632 ^
        -odir muriel\tiles_raw -o panca.laz

Because dense-matching points often have a poor point order in the files they get delivered in, we use lassort to rearrange them into a space-filling curve order, as this will speed up most of the following processing steps:

lassort -i muriel\tiles_raw\panca*.laz ^
        -odir muriel\tiles_sorted -olaz ^
        -cores 7

We then run lasthin to reclassify the highest point of every 2.5 by 2.5 meter grid cell with classification code 8. As the spacing of the dense-matched points is around 40 cm in both x and y, around 40 points will fall into each such grid cell, from which the highest is then classified as 8:

lasthin -i muriel\tiles_sorted\panca*.laz ^
        -step 2.5 ^
        -highest -classify_as 8 ^
        -odir muriel\tiles_thinned -olaz ^
        -cores 7

Considering only those points classified as 8 in the last step, we then run lasnoise to find points that are highly isolated in wide and flat neighborhoods and reclassify them as 7. See the README file of lasnoise for a detailed explanation of the different parameters:

lasnoise -i muriel\tiles_thinned\panca*.laz ^
         -ignore_class 0 ^
         -step_xy 5 -step_z 0.1 -isolated 4 ^
         -classify_as 7 ^
         -odir muriel\tiles_isolated -olaz ^
         -cores 7

Now we run a temporary ground classification on only (!!!) those points that are still classified as 8 using the default parameters of lasground. Hence we only use the points that were the highest points on the 2.5 by 2.5 meter grid and that were not classified as noise in the previous step. See the README file of lasground for a detailed explanation of the different parameters:

lasground -i muriel\tiles_isolated\panca*.laz ^
          -city -ultra_fine -ignore_class 0 7 ^
          -odir muriel\tiles_temp_ground -olaz ^
          -cores 7

The result of this temporary ground filtering is then merely used to mark all points that are 0.5 meter below the triangulated TIN of these temporary ground points with classification code 12 using lasheight. See the README file of lasheight for a detailed explanation of the different parameters:

lasheight -i muriel\tiles_temp_ground\panca*.laz ^
          -do_not_store_in_user_data ^
          -classify_below -0.5 12 ^
          -odir muriel\tiles_temp_denoised -olaz ^
          -cores 7

In the resulting tiles the low noise points (but also many points above the ground) are now marked, and in a final step we produce properly classified denoised tiles by re-mapping the temporary classification codes to conventions that are more consistent with the ASPRS LAS specification using las2las:

las2las -i muriel\tiles_temp_denoised\panca*.laz ^
        -change_classification_from_to 1 0 ^
        -change_classification_from_to 2 0 ^
        -change_classification_from_to 7 0 ^
        -change_classification_from_to 12 7 ^
        -odir muriel\tiles_denoised -olaz ^
        -cores 7
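
The remapping itself is a simple lookup. Expressed in Python – here with the laspy package as an assumption, reading one of the tiles mentioned below – it boils down to the following, which mirrors the four ‘-change_classification_from_to’ pairs above:

import numpy as np
import laspy  # assumption: pip install "laspy[lazrs]"

REMAP = {1: 0, 2: 0, 7: 0, 12: 7}   # temporary codes -> final codes

las = laspy.read("tiles_temp_denoised/panca_388500_4963000.laz")
original = np.array(las.classification)
classification = original.copy()
for source, target in REMAP.items():
    # read from the original codes so the order of the remapping does not matter
    classification[original == source] = target
las.classification = classification
las.write("tiles_denoised/panca_388500_4963000.laz")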

Let us visually check what each of the above steps has produced by zooming in on a 300 meter by 100 meter strip of points with the bounding box (388500,4963125) to (388800,4963225) in tile ‘panca_388500_4963000.laz’:

The final classification of all points that are not already classified as noise (7) into ground (2) or non-ground (1) was done with a final run of lasground. See the README file of lasground for a detailed explanation of the different parameters:

lasground -i muriel\tiles_denoised\panca*.laz ^
          -ignore_class 7 ^
          -city -ultra_fine ^
          -odir muriel\tiles_ground -olaz ^
          -cores 7

Then we create seamless hill-shaded DTM tiles by triangulating all the points classified as ground into a temporary TIN (including those in the 25 meter buffer) and then rasterizing only the inner 500 meter by 500 meter of each tile with option ‘-use_tile_bb’ of las2dem. For more details on the importance of buffers in tile-based processing see this blog post here.

las2dem -i muriel\tiles_ground\panca*.laz ^
        -keep_class 2 ^
        -step 1 -hillshade ^
        -use_tile_bb ^
        -odir muriel\tiles_dtm -opng ^
        -cores 7

And here is the original DSM side-by-side with the resulting DTM after low noise removal. One densely forested area near the center of the data was not entirely removed due to the lack of ground points in this area. Integrating external ground points or manual editing with lasview are two possible ways to rectify these few remaining errors …

Integrating External Ground Points in Forests to Improve DTM from Dense-Matching Photogrammetry

The biggest problem of generating a Digital Terrain Model (DTM) from the photogrammetric point clouds that are produced from aerial imagery with dense-matching software such as SURE, Pix4D, or Photoscan is dense vegetation: when plants completely cover the terrain, not a single point is generated on the ground. This is different for LiDAR point clouds as the laser can even penetrate dense multi-level tropical forests. The complete lack of ground points in larger vegetated areas such as closed forests or dense plantations means that the many processing workflows for vegetation analysis that have been developed for LiDAR cannot be used for photogrammetric point clouds … unless … well, unless we get those missing ground points some other way. In the following we see how to integrate external ground points to generate a reasonable DTM under a dense forest with LAStools. See this, this, this, this, and this article for further reading.

Here you can download the dense matching point cloud, the manually collected ground points, and the forest stand delineating polygon that we are using in the following example workflow.

We leave the usual inspection of the content with lasinfo and lasview that we always recommend on newly obtained data as an exercise to the reader. Using las2dem and lasgrid we created the Google Earth overlays shown above to visualize the extent of the dense matched point cloud and the distribution of the manually collected ground points:

las2dem -i DenseMatching.laz ^
        -thin_with_grid 1.0 ^
        -extra_pass ^
        -step 2.0 ^
        -hillshade ^
        -odix _hill_2m -opng

lasgrid -i ManualGround.laz ^
        -set_RGB 255 0 0 ^
        -step 10 -rgb ^
        -odix _grid_10m -opng

Attempts to ground-classify the dense matching point cloud directly are futile as there are no ground points under the canopy in the heavily forested area. Therefore 558 ground points were manually surveyed in the forest of interest that are around 50 to 120 meters apart from one another. We show how to integrate these points into the dense matching point cloud such that we can successfully extract bare-earth information from the data.

In the first step we “densify” the manually collected ground points by interpolating them with triangles onto a raster of 2 meter resolution that we store as LAZ points with las2dem. You could consider other interpolation schemes to “densify” the ground points, here we use simple linear interpolation to prove the concept. Due to the varying distance between the manually surveyed ground points we allow interpolating triangles with edge lengths of up to 125 meters. These triangles then also cover narrow open areas next to the forest, so we clip the interpolated ground points against the forest stand delineating polygon with lasclip to classify those points that are really in the forest as “key points” (class 8) and all others as “noise” (class 7).

las2dem -i ManualGround.laz ^
        -step 2 ^
        -kill 125 ^
        -odix _2m -olaz

lasclip -i ManualGround_2m.laz ^
        -set_classification 7 ^
        -poly forest.shp ^
        -classify_as 8 -interior ^
        -odix _forest -olaz
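
The densify-and-clip idea can also be prototyped in Python: triangulate the surveyed ground points, drop grid samples that are only supported by triangles with overly long edges, and keep only the samples inside the forest polygon. The sketch below uses SciPy and Shapely under those assumptions (forest_polygon would be the polygon from forest.shp loaded with your GIS library of choice); it is not the las2dem/lasclip implementation:

import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator
from shapely.geometry import Point

def densify_ground(x, y, z, forest_polygon, step=2.0, kill=125.0):
    # interpolate the surveyed ground points onto a 'step' meter grid,
    # discard samples in triangles with an edge longer than 'kill' meters,
    # and keep only samples inside the forest stand polygon
    pts = np.column_stack((x, y))
    tri = Delaunay(pts)
    tin = LinearNDInterpolator(tri, z)
    gx, gy = np.meshgrid(np.arange(x.min(), x.max(), step),
                         np.arange(y.min(), y.max(), step))
    gx, gy = gx.ravel(), gy.ravel()
    gz = tin(gx, gy)
    keep = ~np.isnan(gz)
    simplex = tri.find_simplex(np.column_stack((gx, gy)))
    for i in np.where(keep)[0]:
        corners = pts[tri.simplices[simplex[i]]]
        edges = np.linalg.norm(np.roll(corners, -1, axis=0) - corners, axis=1)
        if edges.max() > kill:
            keep[i] = False
    inside = np.array([forest_polygon.contains(Point(px, py))
                       for px, py in zip(gx[keep], gy[keep])], dtype=bool)
    return gx[keep][inside], gy[keep][inside], gz[keep][inside]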

Below we show the resulting densified ground points colored by elevation that survive the clipping against the forest stand delineating polygon and were classified as “key points” (class 8). The interpolated ground points in narrow open areas next to the forest that fall outside this polygon were classified as “noise” (class 7) and are shown in violet. They will be dropped in the next step.

We then merge the dense matching points with the densified manual ground points (while dropping all the violet points marked as noise) as input to lasthin and reclassify the lowest point per 1 meter by 1 meter cell with a temporary code (here we use class 9, which usually refers to “water”). Only the subset of lowest points that receives the temporary classification code 9 will be used for ground classification later.

lasthin -i DenseMatching.laz ^
        -i ManualGround_2m_forest.laz ^
        -drop_class 7 ^
        -merged ^
        -lowest -step 1 -classify_as 9 ^
        -o DenseMatchingAndDensifiedGround.laz

We use the GUI of lasview to pick several interesting areas for visual inspection. The selected points load much faster when the LAZ file is spatially indexed and therefore we first run lasindex. For better orientation we also load the forest stand delineating polygon as an overlay into the GUI.

lasindex -i DenseMatchingAndDensifiedGround.laz 

lasview -i DenseMatchingAndDensifiedGround.laz -gui

We pick the area shown below that contains the target forest with manually collected and densified ground points and a forested area with only dense matching points. The difference could not be more drastic as the visualizations show.

Now we run ground classification with lasground using option ‘-town’, considering only the points with the temporary code 9 by ignoring all other classifications (0 and 8) in the file. We leave the temporary classification code 9 unchanged for all the points that were not classified with “ground” code 2 so we can visualize later which ones those are.

lasground -i DenseMatchingAndDensifiedGround.laz ^
          -ignore_class 0 8 ^
          -town ^
          -non_ground_unchanged ^
          -o GroundClassified.laz

We again use the GUI of lasview to pick several interesting areas after running lasindex and again load the forest stand delineating polygon as an overlay into the GUI.

lasindex -i GroundClassified.laz 

lasview -i GroundClassified.laz -gui

We pick the area shown below that contains all three scenarios: the target forest with manually collected and densified ground points, an open area with only dense matching points, and a forested area with only dense matching points. The result is as expected: in the target forest the manually collected ground points are used as ground and in the open area the dense-matching points are used as ground. But there is no useful ground in the other forested area.

Now we can compute the heights of the points above ground for our target forest with lasheight and either replace the z elevations in the file or store them separately as “extra bytes”. Then we can compute, for example, a Canopy Height Model (CHM) that color codes the height of the vegetation above the ground with lasgrid. Of course this will only be correct in the target forest where we have “good” ground but not in the other forested areas. We also compute a hillshaded DTM to be able to visually inspect the topography of the generated terrain model.

lasheight -i GroundClassified.laz ^
          -store_as_extra_bytes ^
          -o GroundClassifiedWithHeights.laz

lasgrid -i GroundClassifiedWithHeights.laz ^
        -step 2 ^
        -highest -attribute 0 ^
        -false -set_min_max 0 25 ^
        -o chm.png

las2dem -i GroundClassified.laz ^
        -keep_class 2 -extra_pass ^
        -step 2 ^
        -hillshade ^
        -o dtm.png

Here you can download the resulting color-coded CHM and the resulting hill-shaded DTM as Google Earth KMZ overlays. Clearly the resulting CHM is only meaningful in the target forest where we used the manually collected ground points to create a reasonable DTM. In the other forested areas the ground is only correct near the forest edges and gets worse with increasing distance from open areas. The resulting DTM exhibits some interesting-looking bumps in the middle of areas with manually collected ground points. Those are a result of using the dense-matching points as ground whenever their elevation is lower than that of the manually collected points (which is decided in the lasthin step). Whether those bumps represent true elevations or are artifacts of erroneously low elevations from dense-matching remains to be investigated.

For forests on complex and steep terrain the number of ground points that needs to be manually collected may make such an approach infeasible in practice. However, maybe you have another source of elevation, such as a low-resolution DTM of 10 or 25 meter provided by your local government. Or maybe even a high resolution DTM of 1 or 2 meter from a LiDAR survey you did several years ago. While the forest may have grown a lot in the past years, the ground under the forest will probably not have changed much …