LASmoons: Nicolas Barth

Nicolas Barth (recipient of three LASmoons)
Department of Earth & Planetary Sciences
University of California, Riverside
UNITED STATES

Background:
The 850 km-long Alpine Fault (AF) is one of the world’s great laterally-slipping active faults (like California’s San Andreas Fault) and currently accommodates about 80% of the motion between the Australian and Pacific tectonic plates in the South Island of New Zealand (NZ). Well-dated sedimentary layers preserved in swamps and lakes adjacent to the AF currently provide one of the world’s most spatially and temporally complete records of large ground-rupturing earthquakes (Howarth et al., 2018). Importantly, these records reveal that major earthquakes occur with greater regularity on the AF than on any other known fault, with a Magnitude (Mw) 7 to 8 earthquake released on average every 249 ± 58 years, and that the most recent earthquake was around Mw 8 in 1717 AD, prior to European arrival. This computes to a conditional probability of 69% that the AF will rupture in the next 50 years. For a country that has recently experienced several notable earthquakes (e.g. 2010 Mw 7.1 Canterbury, 2016 Mw 7.8 Kaikoura) and has an economy heavily reliant on tourism, the next AF earthquake is the one NZ is trying to prepare for (note that a Mw 8 earthquake releases about thirty times the energy of a Mw 7).

The more data we can gather as scientists to constrain (1) the magnitude of the next AF earthquake, (2) the amount of lateral and vertical slip (offset roads, powerlines, etc.), (3) the coseismic effects (ground shaking, landslides, liquefaction), and (4) the duration it takes the landscape to recover (muddy rivers, increased sediment supply, prolonged landsliding), the more we can anticipate expected hazards and foster societal resilience.

Despite its name, the AF is almost completely obscured beneath a dense temperate rain-forest canopy, which has hindered fine-scale geomorphic studies. Relatively low-quality airborne LiDAR (a 2 m-resolution bare-earth model) was first collected in 2010 for a 32 km length of the central AF. Despite this being the best-studied portion of the AF, 82% of the fault traces identified in the LiDAR were previously unmapped (Barth et al., 2012). The LiDAR reveals the width and style of ground deformation. Interpretation of the bare-earth landscape, in combination with on-the-ground sampling, allows single-earthquake displacements, uplift rates, recurrence of landslides, and post-earthquake sedimentation rates to be quantified. A new 2019 airborne LiDAR dataset collected along a 230 km length of the southern AF has great potential to improve our understanding of this relatively “well-behaved” fault system, what to expect from its next earthquake, and to give us insight into considerably more complex fault systems like the San Andreas.

(A) Aerial view of the South Island of New Zealand highlighting the boundary between the Pacific and Australian plates (white) and the Alpine Fault in particular (red). (B) View showing the extent of the 2019 airborne LiDAR survey to be processed by this LASmoons proposal. (C) Aerial imagery over Franz Josef, site of a 2010 airborne LiDAR survey. (D) 2010 Franz Josef LiDAR DTM hillshade (GNS Science). LiDAR has revolutionized our ability to map fault offsets and other earthquake ground deformation beneath this dense temperate rainforest.

Goal:
The LAStools software will be used to check the quality of the data (reclassifying ground points and removing any low ground-classified outliers if needed) and to create a seamless digital terrain model (DTM) from the 1695 tiled LAS files provided. The DTM will be used to create derivative products including contours, slope maps, aspect maps, single-direction B&W hillshades, multi-directional hillshades, and slope-colored hillshades to interpret the fault- and landslide-related landscape features hidden beneath the dense temperate rain-forest. The results will be used as seed data to seek national-level science funding to field-verify interpretations and collect samples to determine the ages of features (geochronology). The ultimate goal is to improve our understanding of the Alpine Fault prior to its next major earthquake and to communicate those findings effectively through open-access peer-reviewed journal articles and meetings with NZ regional councils.

Data:
+ airborne LiDAR survey collected in 2019 using a RIEGL LMS-Q780 sensor by AAM New Zealand
+ provided data are 1695 LAS files organized into 500 m x 500 m tiles and classified into ground and non-ground points (75 pts/m2 or ~0.8 ground-classified pts/m2; 320 GB total)

LAStools processing:
1) check the quality of the ALS data [lasinfo, lasoverlap, lasgrid]
2) [if needed] remove any low and high ground-classed outliers [lasnoise]
3) [if needed] reclassify ground and non-ground points [lasground]
4) create Digital Terrain Model (DTM) from ground points [blast2dem]
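
A minimal command-line sketch of these four steps could look as follows (directory names, output names, and parameters are hypothetical and would need to be tuned to the actual tiles; steps 2 and 3 with lasnoise and lasground would only be inserted if the quality checks reveal problems with the vendor classification):

lasinfo -i tiles_raw\*.laz -merged ^
        -cd -histo z 1 ^
        -o 1_quality\af_2019_info.txt

lasgrid -i tiles_raw\*.laz ^
        -keep_class 2 ^
        -step 1 -density ^
        -false -set_min_max 0 2 ^
        -odir 1_quality -odix _ground_density -opng ^
        -cores 4

blast2dem -i tiles_raw\*.laz -merged ^
          -keep_class 2 ^
          -step 1 ^
          -o af_2019_dtm_1m.bil

blast2dem -i tiles_raw\*.laz -merged ^
          -keep_class 2 ^
          -step 1 -hillshade ^
          -o af_2019_dtm_1m_hillshade.png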

References:
Howarth, J.D., Cochran, U.A., Langridge, R.M., Clark, K.J., Fitzsimons, S.J., Berryman, K.R., Villamor, P., Strong, D.T. (2018) Past large earthquakes on the Alpine Fault: paleoseismological progress and future directions. New Zealand Journal of Geology and Geophysics, v. 61, 309-328, doi: 10.1080/00288306.2018.1465658
Barth, N.C., Toy, V.G., Langridge, R.M., Norris, R.J. (2012) Scale dependence of oblique plate-boundary partitioning: new insights from LiDAR, central Alpine Fault, New Zealand. Lithosphere 4(5), 435-448, doi: 10.1130/L201.1

LASmoons: Olumese Efeovbokhan

Olumese Efeovbokhan (recipient of three LASmoons)
Geosciences, School of Geography
University of Nottingham, UK

Background:
One of the vital requirements to successfully drive and justify favorable flood risk management policies is the availability of reliable data for hydrological modelling. Unfortunately, this poses a big challenge in data-sparse regions and has resulted in uncoordinated and ineffective flood risk management policies with some areas left at the mercy of the floods they are exposed to. This research is focused on the ability to successfully generate data required for hydrological modelling using affordable and easy-to-replicate methods. The research will utilize unmanned aerial vehicles (UAVs) for the generation of bare earth models (DTMs) from photogrammetry points, which will be subsequently used for flood vulnerability mapping.

Photogrammetry point cloud of Tafawa Balewa Square in Lagos Island, Nigeria

Goal:
Generate a bare earth model using a combination of Agisoft PhotoScan and LAStools and then validate its suitability for hydrological modelling. Should the generated model prove suitable, we will use it to conduct flood sensitivity analysis and inundation modelling in other data-sparse regions using high-resolution bare earth models generated the same way.

Data:
+ high-resolution photogrammetry point cloud for a portion of the study area
  – imagery obtained with a senseFly eBee drone flight
  – photogrammetry point cloud generated with PhotoScan by Agisoft
+ classified LiDAR point cloud with a resolution of 1 pulse per square meter obtained for the study area from the Lagos State Government

LAStools processing:
1) tile large photogrammetry point cloud into tiles with buffer [lastile]
2) mark set of points whose z coordinate is a certain percentile of that of their neighbors [lasthin]
3) remove isolated low points from the set of marked points [lasnoise]
4) classify marked points into ground and non-ground [lasground]
5) pull in points close above and below the ground [lasheight]
6) create Digital Terrain Model (DTM) from ground points [las2dem]
7) merge and hillshade individual raster DTMs [blast2dem]
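
The article “Clean DTM from Agisoft Photogrammetric Points of Urban Scene” further below walks through essentially this pipeline on the very same Tafawa Balewa Square point cloud. A condensed sketch with hypothetical file and directory names (parameters would need tuning to the data) could look like this:

lastile -i tbs_photogrammetry.laz ^
        -tile_size 200 -buffer 50 -flag_as_withheld ^
        -odir 2_tiles_raw -o tbs.laz

lasthin -i 2_tiles_raw\tbs*.laz ^
        -step 0.5 -percentile 50 20 ^
        -classify_as 8 ^
        -odir 3_tiles_median_50cm -olaz -cores 3

lasnoise -i 3_tiles_median_50cm\tbs*.laz ^
         -step 2 -isolated 3 ^
         -odir 4_tiles_denoised -olaz -cores 3

lasground -i 4_tiles_denoised\tbs*.laz ^
          -ignore_class 0 7 ^
          -metro ^
          -odir 5_tiles_ground -olaz -cores 3

lasheight -i 5_tiles_ground\tbs*.laz ^
          -classify_between -0.05 0.15 2 ^
          -odir 6_tiles_ground_thick -olaz -cores 3

las2dem -i 6_tiles_ground_thick\tbs*.laz ^
        -keep_class 2 ^
        -step 0.5 -kill 100 ^
        -use_tile_bb ^
        -odir 7_tiles_dtm_50cm -olaz -cores 3

blast2dem -i 7_tiles_dtm_50cm\tbs*.laz -merged ^
          -step 0.5 -hillshade ^
          -o tbs_dtm_hillshade.png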

How Many Decimal Digits for Storing Longitude & Latitude?

Recently I came across this tweet containing the image below and it made me laugh … albeit not in the way the tweet intended. The tweet was joking that “Anyone is able to open a GeoJSON file” and included the Microsoft Word screen shot seen below as a response to someone else tweeting that “Handing in a project as @GeoJSON. Let’s see if I get the usual “I can’t open this file” even though […]”. What was funny to me was seeing longitude and latitude coordinates stored with 15 digits right of the decimal point. There are many memes about “German efficiency” but few about “German accuracy” 😁. Clearly it is time for another blog post about storage resolution and positional accuracy. The last one came on the heels of England’s national open elevation release with its insanely fine vertical resolution.

Longitude and latitude coordinates are stored with 15 decimal digits right of the decimal point.

By default LAStools will use 7 decimal digits to store longitude and latitude coordinates to a LAS or LAZ file. But what do 15 decimal digits mean for longitude and latitude coordinates? How “accurate” are the corresponding coordinates when converted to projected coordinates? I took the second coordinate pair [ 10.049567988755534, 53.462766759577057 ] shown in the screen shot above and converted it from longitude and latitude to the easting and northing values of the WGS 84 / UTM 32N projection that has EPSG code 32632. Before the conversion the numbers were quantized to 5 through 15 decimal digits and the absolute difference in projected position to the coordinate pair using all 15 digits was recorded.

Number of decimal digits for the longitude and latitude coordinates and absolute difference in projected position.

The table above shows – at least for this particular longitude and latitude coordinate pair located in Germany – that 7 digits are sufficient to store coordinates with centimeter [cm] accuracy and that 8 digits are enough to store coordinates with millimeter [mm] accuracy. Any additional digit right of the decimal point will only be necessary when we need micrometer [um] or nanometer [nm] accuracy, which is very unlikely to be the case in most geospatial applications.
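
A quick back-of-the-envelope calculation explains why: the Earth’s circumference of roughly 40,000 km divided by 360 degrees means that one degree of latitude spans about 111 km, so the n-th decimal digit of a latitude value resolves roughly 111 km / 10^n – about 1.1 cm for the 7th digit and about 1.1 mm for the 8th digit. For longitude the spacing additionally shrinks by the cosine of the latitude (roughly 0.6 at 53 degrees north), so the same number of digits resolves even finer steps there.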

This means we could remove the 7 or 8 rightmost digits of each number from the screenshot that was tweeted and make this GeoJSON file even smaller, faster, and easier to store, transmit, open, and read. After this post was tweeted there was a follow-up tweet suggesting a look at this site for a more detailed analysis of what accuracy each digit in a longitude and latitude coordinate can store.

Clean DTM from Agisoft Photogrammetric Points of Urban Scene

We occasionally get permission to distribute a nice data set and blog about how to best process it with LAStools because this gets around having to pay our “outrageous” consulting fees. (-: This time we received a nice photogrammetric point cloud of the Tafawa Balewa Square in Lagos Island, Lagos, Nigeria. This area is part of the central business district of Lagos and is characterized by high-rise buildings. The Tafawa Balewa Square was constructed in 1972 over the site of a defunct track for horse racing and is bounded by Awolowo road, Cable street, Force road, Catholic Mission street and the 26-story Independence House. We want to create a nice Digital Terrain Model from the dense-matching point cloud that was generated with PhotoScan by Agisoft and – as always with photogrammetry – we have to take special care of low noise points. The final result is shown below. All processing commands used are here.

After downloading the data it is useful to familiarize yourself with the file, which can be done with lasview, lasinfo, and lasgrid using the command lines shown below. According to the lasinfo report there are around 47 million points with RGB colors in the file and their average density is around 100 points per square meter.

lasview -i 0_raw\TafawaBalewa.laz

lasinfo -i 0_raw\TafawaBalewa.laz ^
        -cd -histo intensity 256 ^
        -histo z 1 ^
        -odir 1_quality -odix _info -otxt

lasgrid -i 0_raw\TafawaBalewa.laz ^
        -step 1 ^
        -density ^
        -false -set_min_max 50 150 ^
        -odir 1_quality -odix _d_50_150 -opng

The average point density value of 100 from the lasinfo report suggests that 50 as the minimum and 150 as the maximum are good false color ramp values for a map showing how the point density per square meter is distributed.

Color-coded point density: blue equals 50 or less and red means 150 or more points per square meter.

We use lastile to create a buffered tiling for the 47 million points. We use a tile size of 200 meters and request a large buffer of 50 meters around every tile because there are large buildings in the survey area. We also flag buffer points as withheld so they can easily be dropped later.

lastile -i 0_raw\TafawaBalewa.laz ^
        -tile_size 200 -buffer 50 -flag_as_withheld ^
        -odir 2_tiles_raw -o tafawa.laz

If you inspect the resulting tiles – such as ‘tafawa_544000_712600.laz’ as shown here – with lasview you will see the kind of low noise that is shown below and that can derail a ground classification algorithm. While our lasground software is able to deal with a certain amount of low noise, if there are too many such points it will likely latch onto them. Therefore we will first generate a subset of points that contains as few of these low noise points as possible.

Typical low noise in dense-matching photogrammetry points in urban scene.

Next we use a sequence of three LAStools modules, namely lasthin, lasground, and lasheight, to classify this photogrammetric point cloud into ground and non-ground points. All processing commands used are here. First we use lasthin to give classification code 8 to the point closest to the 50th percentile in elevation within every 50 centimeter by 50 centimeter cell (but only for cells containing at least 20 points).

lasthin -i 2_tiles_raw\tafawa*.laz ^
        -step 0.5 ^
        -percentile 50 20 ^
        -classify_as 8 ^
        -odir 3_tiles_median_50cm -olaz ^
        -cores 3

Next we use lasground to ground-classify only the points that have classification code 8 (i.e. by ignoring those with classification code 0) and set their classification code to ground (2) or non-ground (1). Because of the large buildings in this urban scene we use ‘-metro’ which uses a large step size of 50 meters for the pre-processing. This also sets the internally used bulge parameter to 5.0, which you can see if you run the tool in verbose ‘-v’ mode. In three different trial runs we determined that forcing the bulge parameter to be 0.5 instead gave better results. The bulge and the spike parameters can be useful to vary in order to improve ground classification results (also see the README file).

lasground -i 3_tiles_median_50cm\tafawa*.laz ^
          -ignore_class 0 ^
          -metro -bulge 0.5 ^
          -odir 4_tiles_ground_50cm -olaz ^
          -cores 3

The resulting ground points are a subset with a resolution of 50 centimeters that is good enough to create a DTM with meter resolution, which we do with the las2dem command line shown below. We really like storing DTM elevation rasters in the LAZ point format because it is a more compressed way of storing elevation rasters compared to ASC, BIL, TIF, or IMG. It also makes the raster output a natural input to subsequent LAStools processing steps.

las2dem -i 4_tiles_ground_50cm\tafawa*.laz ^
        -keep_class 2 ^
        -step 1 -kill 100 ^
        -use_tile_bb ^
        -odir 5_tiles_dtm_1m -olaz ^
        -cores 3

Finally we use blast2dem to create a seamless hill-shaded version of our 1 meter DTM from on-the-fly merged elevation rasters. This is the DTM pictured at the beginning of this article.

blast2dem -i 5_tiles_dtm_1m\tafawa*.laz -merged ^
          -step 1 ^
          -hillshade ^
          -o dtm_1m.png

The corresponding DSM pictured at the beginning of this article was generated with the two command lines below by first keeping only the point closest to the 95th percentile elevation of every 50 cm by 50 cm cell with lasthin (which removes spurious high noise points) and then by triangulating the surviving points with blast2dem into a seamless TIN that is also hill-shaded and rasterized with 1 meter resolution. Running the 64 bit version of lasthin (note the ‘-cpu64’ switch) allows us to work on the entire data set (rather than its tiled version) at once, where the standard 32 bit version may run out of memory.

lasthin -i 0_raw\TafawaBalewa.laz ^
        -cpu64 ^
        -step 0.5 ^
        -percentile 95 20 ^
        -o 0_raw\TafawaBalewa_p95_50.laz

blast2dem -i 0_raw\TafawaBalewa_p95_50.laz ^
          -step 1 ^
          -hillshade ^
          -o dsm_1m.png

In order to generate the final DTM at higher resolution we use lasheight to pull into the ground class all points that lie within 5 cm vertically below or 15 cm vertically above the triangulated surface of the ground points computed in the previous step. You could experiment with other values here to be less or more conservative about pulling detail into the ground class.

lasheight -i 4_tiles_ground_50cm\tafawa*.laz ^
          -classify_between -0.05 0.15 2 ^
          -odir 6_tiles_ground -olaz ^
          -cores 3

We repeat the same las2dem processing step as before to create the raster DTM tiles, but this time with a resolution of 25 cm.

las2dem -i 6_tiles_ground\tafawa*.laz ^
        -keep_class 2 ^
        -step 0.25 -kill 100 ^
        -use_tile_bb ^
        -odir 7_tiles_dtm_25cm -olaz ^
        -cores 3

And we again use blast2dem to create a seamless hill-shaded version of the DTM from on-the-fly merged elevation rasters, but this time with a resolution of 25 cm. This is the DTM shown below. All processing commands used are here.

blast2dem -i 7_tiles_dtm_25cm\tafawa*.laz -merged ^
          -step 0.25 ^
          -hillshade ^
          -o dtm_25cm.png

Hill-shade of final DTM with resolution of 25 cm.

Smooth DTM from Drone LiDAR off Velodyne HDL-32E mounted on DJI M600 UAV

Recently we attempted to do a small LiDAR survey by drone for a pet project of our CEO in our “code and surf camp” here in Samara, Costa Rica. But surveying is difficult when you are a novice and we ran into a trajectory issue. The dramatic “wobbles” were entirely our fault, but fortunately our mistakes also led to something useful: We found some LAS export bugs. Our laser scanner was a Velodyne HDL-32E integrated with a NovAtel INS into the Snoopy Series A HD made by LiDARUSA. The system was carried by a DJI Matrice 600 (M600) drone. We processed the trajectory with NovAtel Inertial Explorer (here we made the “wobbles” error) and finally exported the LAS and LAZ files with ScanLook PC (version 1.0.182) from LiDARUSA.

While we were investigating our “wobbles” (which clearly were our mistake) we also found five different LAS export bugs in ScanLook PC that seem to have started sometime after version 1.0.171 and will likely end with version 1.0.193. Below is an illustration of a correct export from version 1.0.129 and a buggy export from version 1.0.182. In both instances you see the returns from one revolution of the Velodyne HDL-32E scanner head ordered by their GPS time stamps and colored to distinguish the 32 separate beams. In the buggy version, groups of around seven non-adjacent returns are given the same time stamp. This bug will only affect you if correct GPS time stamps are important for your subsequent LiDAR processing or if your client explicitly asked for ASPRS specification compliant LAS files. We plan to publish another blog post detailing how to find this GPS time stamping bug (and the other four bugs we found).

During the many interactions we had working through “wobbles” and export bugs, we obtained a nice set of six flight lines from Seth Gulich of Bowman Consulting – a US company based in Stuart, Florida – who flew an identical “Snoopy Series A HD” system, also on a DJI Matrice 600 drone, at approximately 100 feet above ground level over a model airplane airport in Palm Beach, Florida. You can download the data set here. In the following we will check the flight line alignment of this data set and then process it into a smooth DTM. All command lines used are summarized in this text file.

First we generate a lasinfo report that includes a number of histograms for on-the-fly merged flight lines with lasinfo and then use the z coordinate histogram from the lasinfo report to set reasonable min/max values for the elevation color ramp of lasview:

lasinfo -i 0_strips_raw\Velodyne*.laz -merged ^
        -cd ^
        -histo z 1 ^
        -histo user_data 1 ^
        -histo point_source 1 ^
        -o 1_quality\Velodyne_merged_info.txt

lasview -i 0_strips_raw\Velodyne*.laz ^
        -points 10000000 ^
        -set_min_max 25 75

The lasinfo report shows no information about the coordinate reference system. We found out experimentally that the horizontal coordinates seem to use EPSG code 2236 and that the vertical units are most likely US survey feet. The warnings you will see in the lasinfo report have to do with the fact that the double-precision bounding box stored in the LAS header was populated with numbers that have many more decimal digits than the coordinates in the file, which only have millifeet resolution as all three scale factors are 0.001 (meaning coordinates have three decimal digits). The information about which of the 32 lasers collected which point is stored in both the ‘user data’ and the ‘point source ID’ field, which is evident from the histograms in the lasinfo report. We need to be careful not to override both fields in later processing.

Next we use lasoverlap to check how well the LiDAR points from the flight out and the flight back align vertically. This tool computes the difference of the lowest points for each square foot covered by multiple flight lines. Differences of less than a quarter of a foot are mapped to white in both runs. Differences of more than one foot (in the first run) or more than half a foot (in the second run) are mapped to saturated red or blue depending on whether the difference is positive or negative:

lasoverlap -i 0_strips_raw\Velodyne*.laz ^
           -faf ^
           -min_diff 0.25 -max_diff 1.00 -step 1 ^
           -odir 1_quality -o overlap_025_100.png

lasoverlap -i 0_strips_raw\Velodyne*.laz ^
           -faf ^
           -min_diff 0.25 -max_diff 0.50 -step 1 ^
           -odir 1_quality -o overlap_025_050.png

We use a new feature of the LAStools GUI (as of version 180429) to more closely inspect large red or blue areas. With lasmerge we clip out regions that look suspect for closer examination with lasview. First we spatially index the flight lines to make this process faster. With the ‘-gui’ switch we start the tool in GUI mode with the flight lines already loaded. Using the new PNG overlay roll-out on the left we add the ‘overlap_025_050_diff.png’ image from the quality folder created in the last step and clip out three areas.

lasindex -i 0_strips_raw\Velodyne*.laz ^
         -tile_size 10 -maximum -100 ^
         -cores 3

lasmerge -i 0_strips_raw\Velodyne*.laz -gui

You can also clip out these three areas using the command lines below:

lasmerge -i 0_strips_raw\Velodyne*.laz ^
         -faf ^
         -inside_tile 939500 889860 100 ^
         -o 1_quality\939500_889860.laz

lasmerge -i 0_strips_raw\Velodyne*.laz ^
         -faf ^
         -inside_tile 940400 889620 100 ^
         -o 1_quality\940400_889620.laz

lasmerge -i 0_strips_raw\Velodyne*.laz ^
         -faf ^
         -inside_tile 940500 890180 100 ^
         -o 1_quality\940500_890180.laz

The reader may inspect the areas 939500_889860.laz, 940400_889620.laz, and 940500_890180.laz with lasview using profile views via hot key ‘x’ and switching back and forth between the points from different flight lines via hot keys ‘0’, ‘1’, ‘2’, ‘3’, … for individual and ‘a’ for all flight lines, as we have done in previous tutorials [1,2,3]. Using drop-lines or rise-lines via the pop-up menu gives you a sense of scale. Removing points with lastrack that are horizontally too far from the trajectory could be one strategy to reduce the number of outliers. But as our surfaces are expected to be “fluffy” (because we have a Velodyne LiDAR system), we accept these flight line differences and continue processing.
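
For example, the first of the three clipped areas could be opened like this (then press ‘x’ for the profile view and ‘0’, ‘1’, ‘2’, … or ‘a’ to switch between the flight lines):

lasview -i 1_quality\939500_889860.laz ^
        -points 10000000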

Here is the complete LAStools processing pipeline for creating an average ground model from the set of six flight lines that results in the hillshaded DTM shown below. The workflow is similar to those we have developed in earlier blog posts for Velodyne Puck based systems like the Hovermap and the Yellowscan and in the other Snoopy tutorial. All command lines used are summarized in this text file.

Hillshaded DTM with half foot resolution generated via average ground computation with LAStools.

In the first step we lastile the six flight lines into 250 by 250 feet tiles with a 25 feet buffer while preserving flight line information. The flight line information will be stored in the “point source ID” field of each point and will therefore override the beam ID that is currently stored there. But the beam ID is also stored in the “user data” field, as the lasinfo report told us. We set all classifications to zero and add information about the horizontal coordinate reference system (EPSG code 2236) and the vertical units (US survey feet).

lastile -i 0_strips_raw\*.laz ^
        -faf ^
        -set_classification 0 ^
        -epsg 2236 -elevation_survey_feet ^
        -tile_size 250 -buffer 25 -flag_as_withheld ^
        -odir 2_tiles_raw -o pb.laz

On three cores in parallel we then lassort the points in the tiles into a space-filling curve order which will accelerate later operations.

lassort -i 2_tiles_raw\*.laz ^
        -odir 2_tiles_sorted -olaz ^
        -cores 3

Next we use lasthin to mark with classification code 8 the point whose elevation is closest to the 5th elevation percentile among all points falling into its cell. We run lasthin multiple times, each time increasing the cell size: 1, 2, 4, 8, and finally 16 feet. We do this because we have requested the 5th elevation percentile to only be computed when there are at least 20 points in the cell. Percentiles are statistical measures and need a reasonable sample size to be stable. Because drone flights are very dense in the center and more sparse at the edges, this increase in cell size ensures that we have a good selection of points classified with classification code 8 across the entire survey area.

lasthin -i 2_tiles_sorted\*.laz ^
        -step 1 -percentile 5 20 -classify_as 8 ^
        -odir 3_tiles_thinned_p05_step01 -olaz ^
        -cores 3

lasthin -i 3_tiles_thinned_p05_step01\*.laz ^
        -step 2 -percentile 5 20 -classify_as 8 ^
        -odir 3_tiles_thinned_p05_step02 -olaz ^
        -cores 3

lasthin -i 3_tiles_thinned_p05_step02\*.laz ^
        -step 4 -percentile 5 20 -classify_as 8 ^
        -odir 3_tiles_thinned_p05_step04 -olaz ^
        -cores 3

lasthin -i 3_tiles_thinned_p05_step04\*.laz ^
        -step 8 -percentile 5 20 -classify_as 8 ^
        -odir 3_tiles_thinned_p05_step08 -olaz ^
        -cores 3

lasthin -i 3_tiles_thinned_p05_step08\*.laz ^
        -step 16 -percentile 5 20 -classify_as 8 ^
        -odir 3_tiles_thinned_p05_step16 -olaz ^
        -cores 3

Then we let lasground_new run on only the points classified with classification code 8 (i.e. by ignoring the points still classified with code 0) which classifies them into ground (code 2) and non-ground (code 1).

lasground_new -i 3_tiles_thinned_p05_step16\*.laz ^
              -ignore_class 0 ^
              -town ^
              -odir 4_tiles_ground_low -olaz ^
              -cores 3

The ground points we have computed form somewhat of a lower envelope of the “fluffy” points of a Velodyne scanner. With lasheight we now draw all the points near the ground – namely those from 0.1 foot below to 0.4 foot above the ground – into a new classification code 6 that we term “thick ground”. The ‘-do_not_store_in_user_data’ switch prevents the default behavior of lasheight, which would override the beam ID information stored in the ‘user data’ field with an approximate height value.

lasheight -i 4_tiles_ground_low\*.laz ^
          -classify_between -0.1 0.4 6 ^
          -do_not_store_in_user_data ^
          -odir 4_tiles_ground_thick -olaz ^
          -cores 3

A few close-up shots of the resulting “thick ground” are shown in the picture gallery below.

We then use lasgrid to average the (orange) thick ground points onto a regular grid with a cell spacing of half a foot. By adding the ‘-use_tile_bb’ switch we do not grid the tile buffers.

lasgrid -i 4_tiles_ground_thick\*.laz ^
        -keep_class 6 ^
        -step 0.5 -average ^
        -use_tile_bb ^
        -odir 5_tiles_gridded_mean_ground -olaz ^
        -cores 3

Finally we use blast2dem to merge all the averaged ground point grids into one file, interpolate across open areas without ground points, and compute the hillshaded DTM shown above. All command lines used are summarized in this text file.

blast2dem -i 5_tiles_gridded_mean_ground\*.laz ^
          -merged ^
          -step 0.5 ^
          -hillshade ^
          -o dtm.png

We thank Seth Gulich of Bowman Consulting for sharing this LiDAR data set with us. It was flown with a DJI Matrice 600 drone carrying a “Snoopy A series HD” LiDAR system from LidarUSA.

Using Open LiDAR to Remove Low Noise from Photogrammetric UAV Point Clouds

We collected drone imagery of the restored “Kantsi” tavern during the lunch stop of the UAV Tartu summer school field trip (actually the organizer Marko Kohv did that, as I was busy introducing students to SUP boarding). With Agisoft PhotoScan we then processed the images into point clouds below the deck of the historical “Jõmmu” barge on the way home (actually Marko did that, because I was busy enjoying the view of the wetlands in the afternoon sun). The resulting data set with 7,855,699 points is shown below and can be downloaded here.

7,855,699 points produced with Agisoft Photoscan

Generating points using photogrammetric techniques in scenes containing water bodies tends to be problematic, as dense blotches of noise points above and below the water surface are common, as you can see in the picture below. Especially the low points are troublesome as they adversely affect ground classification, which results in poor Digital Terrain Models (DTMs).

Clusters of low noise points nearly 2 meters below the actual surface in water areas.

In a previous article we have described a LAStools workflow that can remove excessive low noise. In this article here we use external information about the topography of the area to clean our photogrammetry points. How convenient that the Estonian Land Board has just released their entire LiDAR archives as open data.

Following these instructions here you can download the available open LiDAR for this area, which has the map sheet index 475681. Alternatively you can download the four currently available data sets here, flown in spring 2010, summer 2013, spring 2014, and summer 2017. In the following we will use the one flown in spring 2014.

We can view both data sets simultaneously in lasview. By adding ‘-faf’ to the command-line we can switch back and forth between the two data sets by pressing ‘0’ and ‘1’.

lasview -i Kantsi.laz ^
        -i 475681_2014_tava.laz ^
        -points 10000000 ^
        -faf

We cut the 1 km by 1 km LiDAR tile down to a 250 m by 250 m tile that nicely surrounds our photogrammetric point set using the following las2las command line:

las2las -i 475681_2014_tava.laz ^
        -inside_tile 681485 6475375 250 ^
        -o LiDAR_Kantsi.laz

lasview -i Kantsi.laz ^
        -i LiDAR_Kantsi.laz ^
        -points 10000000 ^
        -faf

Scrutinizing the two data sets we quickly find that there is a misalignment between the dense imagery-derived point cloud and the comparatively sparse LiDAR point cloud. With lasview we investigate the difference between the two point clouds by hovering over a point from one point cloud and pressing <i> and then hovering over a somewhat corresponding point from the other point cloud and pressing <SHIFT>+<i>. We measure displacements of around 2 meters vertically and of around 3 to 3.5 meters in total.

Before we can use the LiDAR points to remove the low noise from the photogrammetric points we must align them properly. For simple translation errors this can be done with a new feature that was recently added to lasview. Make sure to download the latest version (190404 or newer) of LAStools to follow the steps shown in the image sequence below.

las2las -i Kantsi.laz ^
        -translate_xyz 0.89 -1.90 2.51 ^
        -o Kantsi_shifted.laz

lasview -i Kantsi_shifted.laz ^
        -i LiDAR_Kantsi.laz ^
        -points 10000000 ^
        -faf

The result looks good in the sense that both sides of the photogrammetric roof are reasonably well aligned with the LiDAR. But there is still a shift along the roof so we repeat the same thing once more as shown in the next image sequence:

We use a suitable displacement vector and apply it to the photogrammetry points, shifting them again:

las2las -i Kantsi_shifted.laz ^
        -translate_xyz -1.98 -0.95 0.01 ^
        -o Kantsi_shifted_again.laz

lasview -i Kantsi_shifted_again.laz ^
        -i LiDAR_Kantsi.laz ^
        -points 10000000 ^
        -faf

The result is still not perfect as there is also some rotational error, and you may find other software such as CloudCompare better suited to align the two point clouds, but for this exercise the alignment shall suffice. Below you see the match between the photogrammetry points and the LiDAR TIN before and after shifting the photogrammetry points with the two (interactively determined) displacement vectors.

The final steps of this exercise use las2dem and the already ground-classified LiDAR to compute a 1 meter DTM, which we then use as input to lasheight. We classify the photogrammetry points using their height above this set of ground points with 1 meter spacing: points that are 40 centimeters or more below the LiDAR DTM are classified as noise (7), and points that are between 40 centimeters below and 1 meter above the LiDAR DTM are classified into a temporary class (here we choose 8) that holds those points that could potentially be ground points. This will help, for example, with subsequent ground classification, as large parts of the photogrammetry points – namely those on top of buildings and in higher vegetation – can be ignored from the start by a ground classification algorithm such as lasground.

las2dem -i LiDAR_Kantsi.laz ^
        -keep_class 2 ^
        -kill 1000 ^
        -o LiDAR_Kantsi_dtm_1m.bil

lasheight -i Kantsi_shifted_again.laz ^
          -ground_points LiDAR_Kantsi_dtm_1m.bil ^
          -classify_below -0.4 7 ^
          -classify_between -0.4 1.0 8 ^
          -o Kantsi_cleaned.laz

Below are the results we achieved after “roughly” aligning the two point clouds with some new lasview tricks and then using the LiDAR elevations to classify the photogrammetry points into “low noise”, “potential ground”, and “all else”.

We thank the Estonian Land Board for providing open data with a permissive license. Special thanks also go to the organizers of the UAV Summer School in Tartu, Estonia and the European Regional Development Fund for funding this event. Especially fun was the fabulous excursion to the Emajõe-Suursoo Nature Reserve and through to Lake Peipus aboard, overboard and aboveboard the historical barge “Jõmmu”. If you look carefully you can also find the barge in the photogrammetry point cloud. The photogrammetry data used here was acquired during our lunch stop.

Fun aboard and overboard the historical barge “Jõmmu”.

No Sugarcoating: Sweet LiDAR from RiCOPTER carrying VUX-1UAV over Sugarcane

Recently we saw an interesting LiDAR data set talked about on social media by Chad Netto from Chustz Surveying in New Roads, Louisiana and asked for a copy. It is a LiDAR scan of a sugarcane plantation in Assumption Parish, Louisiana carried out with the VUX-1UAV by RIEGL mounted onto a RiCOPTER and guided by an Applanix IMU and a Trimble base station. That is probably one of the sweetest (but also one of the most expensive) UAV LiDAR systems you can buy today. I received this LiDAR file and this trajectory file. In the following we take a detailed look at this data set.

First we run lasinfo to get an idea of the contents of the data set. We create various histograms as those can often help understand an unfamiliar data set:

lasinfo -i sugarcane\181026_163424.laz ^
        -cd ^
        -histo gps_time 5 ^
        -histo intensity 64 ^
        -histo point_source 1 ^
        -histo z 5 ^
        -odix _info -otxt

You can download the resulting report here. For the 84,751,955 points we notice that

  1. both horizontal and vertical coordinates are stored in US survey feet
  2. with scale factors of 0.00025 this means a resolution of 76 micrometers
  3. there is no explicit flight line information (all point source IDs are zero)
  4. gaps in the GPS time stamp histogram suggest multiple flight lines

First we use las2las to lower the insanely high resolution from 0.00025 US survey feet to something more reasonable for an airborne UAV scan, namely 0.01 or one hundredth of a US survey foot (“centi-US-survey-feet”):

las2las -i sugarcane\181026_163424.laz ^
        -rescale 0.01 0.01 0.01 ^
        -odix _cft -olaz

I have already done this for you. The file that is online is already in “centi-US-survey-feet”, as the rescaling reduced the file size from the original 678 MB that we got from Netto to 518 MB, meaning you had 160 MB less data to download.

Next we use lassplit to recover the original flight lines as follows:

lassplit -i sugarcane\181026_163424.laz ^
         -recover_flightlines ^
         -odir sugarcane\0_recovered_strips ^
         -o assumption.laz

This results in 5 strips. We then use lassort to bring the strips back into their original acquisition order by sorting first based on the GPS time stamp (which brings all returns of one pulse together) and second on the return number (which sorts them in ascending order). We do this on 3 cores in parallel with this command:

lassort -i sugarcane\0_recovered_strips\*.laz ^
        -gps_time ^
        -return_number ^
        -odir sugarcane\1_sorted_strips -olaz ^
        -cores 3

We also create a spatial index for each of these strips using lasindex so that any area-of-interest query that we do later will be significantly accelerated. See the README file for the meaning of the parameters:

lasindex -i sugarcane\1_sorted_strips\*.laz ^
         -tile_size 10 -maximum -100 ^
         -cores 3

Then we check for flight line alignment using lasoverlap by comparing – per 2 feet by 2 feet area – the lowest elevation value of points from different strips wherever there is overlap. Cells with an absolute vertical difference of less than a quarter of a foot are mapped to white. Cells with vertical differences of more (or less) than a quarter foot are mapped to an increasingly red (or blue) color that is saturated red (or blue) when one full foot is reached.

lasoverlap -i sugarcane\1_sorted_strips\*.laz ^
           -files_are_flightlines ^
           -step 2.0 ^
           -min_diff 0.25 -max_diff 1.0 ^
           -o sugarcane\2_quality\overlap.png

The resulting image looks dramatic at first glance. But we have to remember that this is sugarcane. The penetration of the laser can vary greatly depending on the direction from which the beam hits the densely standing stalks. Large differences between flight lines can be expected where sugarcane stands tall. We need to focus our visual quality checks on the few open areas, namely the access roads and harvested areas.

Color-mapped highest vertical difference in lowest point per 2 feet by 2 feet area between overlapping flight lines.

We use las2las via its native GUI to cut out several suspicious-looking open areas with overly red or overly blue shading. By loading the resulting image into the GUI these areas-of-interest are easy to target and cut out.

las2las -i sugarcane\1_sorted_strips\*.laz -gui

Overlaying the difference image in the GUI of las2las to cut out suspicious areas for closer inspection.

We cut out four square 100 by 100 meter tiles in open areas that show a suspiciously strong pattern of red or blue colors for closer inspection. The command lines for these four square areas are given below and you can download them here:

  1. assumption_3364350_534950_100.laz
  2. assumption_3365600_535750_100.laz
  3. assumption_3364900_535500_100.laz
  4. assumption_3365500_535600_100.laz

las2las -i sugarcane\1_sorted_strips\*.laz ^
        -merged -faf ^
        -inside_tile 3364350 534950 100 ^
        -o sugarcane\assumption_3364350_534950_100.laz

las2las -i sugarcane\1_sorted_strips\*.laz ^
        -merged -faf ^
        -inside_tile 3365600 535750 100 ^
        -o sugarcane\assumption_3365600_535750_100.laz

las2las -i sugarcane\1_sorted_strips\*.laz ^
        -merged -faf ^
        -inside_tile 3364900 535500 100 ^
        -o sugarcane\assumption_3364900_535500_100.laz

las2las -i sugarcane\1_sorted_strips\*.laz ^
        -merged -faf ^
        -inside_tile 3365500 535600 100 ^
        -o sugarcane\assumption_3365500_535600_100.laz

In the image sequence below we scrutinize these differences, which leads us to notice two things:

  1. There are vertical misalignments of around one foot. These big differences can especially be observed between flight lines that cover an area with a very high point density and those that cover the very same area with a very low point density.
  2. There are horizontal misalignments of around one foot. Again, these differences seem somewhat correlated with the density with which the flight lines cover a particular area.

For the most part the misaligned points come from a flight line that has only sparse coverage in that area. In flat terrain the return density per area goes down the farther we are from the drone, as distant areas are only reached at higher and higher scan angles. Hence an immediate idea is to limit the scan angle to returns closer to nadir and lower the range of -81 to 84 degrees reported in the lasinfo report to something like -75 to 75, -70 to 70, or -65 to 65 degrees. We can check how this will improve the alignment with these lasoverlap command lines:

lasoverlap -i sugarcane\1_sorted_strips\*.laz ^
           -files_are_flightlines ^
           -keep_scan_angle -75 75 ^
           -step 2.0 ^
           -min_diff 0.25 -max_diff 1.0 ^
           -o sugarcane\2_quality\overlap75.png

lasoverlap -i sugarcane\1_sorted_strips\*.laz ^
           -files_are_flightlines ^
           -keep_scan_angle -70 70 ^
           -step 2.0 ^
           -min_diff 0.25 -max_diff 1.0 ^
           -o sugarcane\2_quality\overlap70.png

lasoverlap -i sugarcane\1_sorted_strips\*.laz ^
           -files_are_flightlines ^
           -keep_scan_angle -65 65 ^
           -step 2.0 ^
           -min_diff 0.25 -max_diff 1.0 ^
           -o sugarcane\2_quality\overlap65.png

lasoverlap -i sugarcane\1_sorted_strips\*.laz ^
           -files_are_flightlines ^
           -keep_scan_angle -60 60 ^
           -step 2.0 ^
           -min_diff 0.25 -max_diff 1.0 ^
           -o sugarcane\2_quality\overlap60.png

This simple technique significantly improves the difference image. Based on these images we would suggest only using returns with a scan angle between -70 and 70 degrees for any subsequent analysis. This seems to remove most of the discoloring in open areas without losing too many points. Note that only using returns with a scan angle between -60 and 60 degrees means that some flight lines no longer overlap each other.

Note also that by limiting the scan angle we suddenly get white areas even in incredibly dense vegetation. The more horizontal a laser shot is, the more likely it will only hit sugarcane plants higher up and the less likely it is to penetrate all the way to the ground. The white areas coincide with where laser pulses are close to nadir, which is in the flight line overlap areas that lie directly below the drone’s trajectory.

Can we improve the alignment further? Not with LAStools, so I turned to Andre Jalobeanu, a specialist on that particular issue, whom I have known for many years. Andre has developed BayesStripAlign – software by his company BayesMap that is quite complementary to LAStools and does exactly what the name suggests: it aligns strips. I gave Andre the five flight lines and he aligned them for me. Below are the new difference images:

We cut out the very same four square areas from the realigned strips for closer inspection and you may investigate them on your own. You can download them here.

  1. assumption_3364350_534950_100_realigned.laz
  2. assumption_3365600_535750_100_realigned.laz
  3. assumption_3364900_535500_100_realigned.laz
  4. assumption_3365500_535600_100_realigned.laz

In the image sequence below we are just looking at the last of the four square areas and you can see that most of the misalignment we saw earlier between the flight lines was removed.

We would like to thank Chad Netto from Chustz Surveying for making this data set available to us and Andre Jalobeanu from BayesMap for aligning the flight lines for us.