LASmoons: Martin Romain

Martin Romain (recipient of three LASmoons)
Marshall Islands Conservation Society
Majuro, Republic of the MARSHALL ISLANDS

Background:
As a low-lying coastal nation, the Republic of the Marshall Islands (RMI) is at the forefront of exposure to climate change impacts. RMI has a strong dependence on natural resources and biodiversity not only for food and income but also for culture and livelihood. However, these resources are threatened by rising sea levels and associated coastal hazards (king tides, storm surges, wave run-up, saltwater intrusion, erosion). This project aims at addressing the lack of technical capacity and available data to implement effective risk reduction and adaptation measures, with a particular focus on inundation mapping and local evacuation planning in population centers.


Typical low-lying coastal area of the Republic of the Marshall Islands.

Goal:
This project intends to use LAStools to generate a DEM of the inhabited sections of 3 remote atolls (Aur, Ebon, Likiep) and 1 island (Mejit). The resulting DEM will be used to produce an inundation exposure model (and map) under variable sea level rise projections for each site. The ultimate goal is to integrate the results into each site’s disaster risk reduction strategy (long-term outcome) and present it through community consultations in schools, community centers, and council houses.

Data:
+ Aerial imagery of 11.5 square kilometers of land (6.3% of total national landmass) captured with a DJI Matrice 200 V2 and DJI Zenmuse X5S with a minimum overlap of 75/75 and a maximum altitude of 120 m.

LAStools processing:
1) tile large point cloud into tiles with buffer [lastile]
2) remove noise points [lasthin, lasnoise]
3) classify points into ground and non-ground [lasground]
4) create Digital Terrain Models and Digital Surface Models [lasthin, las2dem]

Potential LAStools pipelines:
1) Removing Excessive Low Noise from Dense-Matching Point Clouds
2) Digital Pothole Removal: Clean Road Surface from Noisy Pix4D Point Cloud
3) Creating DTMs from dense-matched points of UAV imagery from SenseFly’s eBee
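One possible way to chain the four LAStools processing steps listed above is sketched below. The input file name (a hypothetical merged Pix4D output per island called aur_dense.laz), the tile size, and the thinning and noise thresholds are assumptions that would need tuning to the actual UAV point density.

:: tile the dense-matching points into buffered tiles
lastile -i aur\aur_dense.laz ^
        -tile_size 250 -buffer 25 -flag_as_withheld ^
        -odir aur\tiles_raw -o aur.laz

:: keep only the lowest point per 50 cm cell as a ground candidate (class 8)
lasthin -i aur\tiles_raw\aur*.laz ^
        -step 0.5 -lowest -classify_as 8 ^
        -odir aur\tiles_thinned -olaz -cores 4

:: reclassify isolated candidates as noise (class 7)
lasnoise -i aur\tiles_thinned\aur*.laz ^
         -ignore_class 0 ^
         -step_xy 4 -step_z 0.5 -isolated 5 ^
         -classify_as 7 ^
         -odir aur\tiles_denoised -olaz -cores 4

:: ground-classify the remaining candidates
lasground -i aur\tiles_denoised\aur*.laz ^
          -ignore_class 0 7 ^
          -town -ultra_fine ^
          -odir aur\tiles_ground -olaz -cores 4

:: rasterize a 50 cm DTM per tile (buffer points dropped via the tile bounding box)
las2dem -i aur\tiles_ground\aur*.laz ^
        -keep_class 2 ^
        -step 0.5 -kill 100 -use_tile_bb ^
        -odir aur\tiles_dtm -obil -cores 4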

LASmoons: Volga Lipwoni

Volga Lipwoni (recipient of three LASmoons)
Department of Geography, School of Earth and Environment
University of Canterbury, NEW ZEALAND

Background:
Structure from motion (SfM) photogrammetry has emerged as an effective tool to accurately extract three-dimensional (3D) structure from a series of overlapping two-dimensional (2D) images taken by unmanned aerial vehicles (UAVs). The bid to move away from labour-intensive and time-consuming forestry inventory practices has generated a lot of interest in using SfM photogrammetry to derive forest metrics (Iglhaut et al., 2019). There is a range of commercial, free, and open-source SfM photogrammetric software packages that can be used to process UAV images into 3D point clouds. Selecting the most appropriate package has become an important issue for most projects (Turner, Lucieer, & Wallace, 2013). A comparison of software performance in terms of accuracy, processing times, and related costs would help foresters decide on the best tool for the job.


Typical point cloud derived with SfM software from UAV imagery.

Goal:
The study will generate 3D point clouds of images of a young forest trial and LAStools will be used to derive canopy height models (CHM) for computing tree heights. Tree heights from LiDAR data will serve as a baseline for accuracy assessment of heights derived from the point clouds.

Data:
+ 422 UAV images processed into 3D point clouds using ten (10) different commercial and open-source SfM software packages

LAStools processing:
1) tile large point cloud into tiles with buffer [lastile]
2) remove noise points [lasthin, lasnoise]
3) classify points into ground and non-ground [lasground]
4) create Digital Terrain Models and Digital Surface Models [lasthin, las2dem]
5) produce Canopy Height Models for computing tree heights [lasheight, las2dem]
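As a rough sketch of steps 4) and 5), the height normalization and CHM rasterization could look as follows once ground-classified tiles exist in a folder called tiles_ground; the folder names, the 50 cm cell size, and the 0 to 20 meter false-coloring range are placeholder assumptions.

:: compute per-point heights above the ground TIN and store them as "extra bytes"
lasheight -i tiles_ground\*.laz ^
          -store_as_extra_bytes ^
          -odir tiles_height -olaz -cores 4

:: rasterize the highest height-above-ground per 50 cm cell into a simple CHM
lasgrid -i tiles_height\*.laz ^
        -step 0.5 ^
        -highest -attribute 0 ^
        -false -set_min_max 0 20 ^
        -odir tiles_chm -opng -cores 4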

References:
Iglhaut, J., Cabo, C., Puliti, S., Piermattei, L., O’Connor, J., & Rosette, J. (2019). Structure from motion photogrammetry in forestry: A review. Current Forestry Reports, 5(3), 155-168. https://doi.org/10.1007/s40725-019-00094-3
Turner, D., Lucieer, A., & Wallace, L. (2013). Direct georeferencing of ultrahigh-resolution UAV imagery. IEEE Transactions on Geoscience and Remote Sensing, 52(5), 2738-2745. https://doi.org/10.1109/TGRS.2013.2265295

LASmoons: Gabriele Garnero

Gabriele Garnero (recipient of three LASmoons)
Interuniversity Department of Regional and Urban Studies and Planning
Politecnico e Università degli Studi, Torino
ITALY

Background:
Last spring, the LARTU research group produced a laser scanner survey of the Abbey of Sant’Andrea in Vercelli on the occasion of the VIII centenary of its dedication (1219). The database, produced with a topographic tool that integrates the potential of a total station with laser scanner and photogrammetric sensors (Trimble SX10), has been used to produce representations that can be consulted in interactive mode, navigating within the point clouds, and to build a consultation platform that can also be accessed by non-specialist users such as art historians or archaeologists.


Goal:
The LAStools software will be used both to improve the point cloud by eliminating the remaining noise and to explore other ways of publishing the data, so as to make it usable by the wider community of researchers.

Data:
+ laser scanner and photogrammetric acquisitions of the interior of the building (150 million points)
+ laser scanner and photogrammetric acquisitions of the outside of the building (210 million points)
+ drone-based imagery of the outdoor areas processed with Pix4D (23 million points)

LAStools processing:
1) tile large point cloud into tiles with buffer [lastile]
2) mark set of points whose z coordinate is a certain percentile of that of their neighbors [lasthin]
3) remove isolated low points from the set of marked points [lasnoise]
4) classify marked points into ground and non-ground [lasground]
5) create a LiDAR portal for 3D visualization of LAS files [laspublish]
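Sketched below is one possible command sequence for these steps. File and folder names are placeholders, the percentile-based marking of step 2) is simplified here to a lowest-point-per-cell thinning, and the exact laspublish arguments are an assumption to be checked against its README.

:: tile the merged interior, exterior, and drone scans (file names are placeholders)
lastile -i abbey\*.laz ^
        -tile_size 50 -buffer 5 -flag_as_withheld ^
        -odir abbey\tiles_raw -o vercelli.laz

:: mark the lowest point per 25 cm cell as a ground candidate (class 8)
lasthin -i abbey\tiles_raw\vercelli*.laz ^
        -step 0.25 -lowest -classify_as 8 ^
        -odir abbey\tiles_thinned -olaz -cores 4

:: remove isolated low points among the marked candidates (class 7)
lasnoise -i abbey\tiles_thinned\vercelli*.laz ^
         -ignore_class 0 ^
         -step_xy 1 -step_z 0.2 -isolated 5 ^
         -classify_as 7 ^
         -odir abbey\tiles_denoised -olaz -cores 4

:: ground-classify the candidates while ignoring unmarked and noise points
lasground -i abbey\tiles_denoised\vercelli*.laz ^
          -ignore_class 0 7 ^
          -city -ultra_fine ^
          -odir abbey\tiles_ground -olaz -cores 4

:: bundle the cleaned tiles into a browser-based 3D portal
laspublish -i abbey\tiles_ground\vercelli*.laz ^
           -o portal.html ^
           -odir abbey\portal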

Using Open LiDAR to Remove Low Noise from Photogrammetric UAV Point Clouds

We collected drone imagery of the restored “Kance” tavern during the lunch stop of the UAV Tartu summer school field trip (actually the organizer Marko Kohv did that, as I was busy introducing students to SUP boarding). With Agisoft PhotoScan we then processed the images into point clouds below the deck of the historical “Jõmmu” barge on the way home (actually Marko did that, because I was busy enjoying the view of the wetlands in the afternoon sun). The resulting data set with 7,855,699 points is shown below and can be downloaded here.

7,855,699 points produced with Agisoft Photoscan

Generating points using photogrammetric techniques in scenes containing water bodies tends to be problematic, as dense blotches of noise points above and below the water surface are common, as you can see in the picture below. Especially the low points are troublesome as they adversely affect ground classification, which results in poor Digital Terrain Models (DTMs).

Clusters of low noise points nearly 2 meters below the actual surface in water areas.

In a previous article we have described a LAStools workflow that can remove excessive low noise. In this article here we use external information about the topography of the area to clean our photogrammetry points. How convenient that the Estonian Land Board has just released their entire LiDAR archives as open data.

Following these instructions here you can download the available open LiDAR for this area, which has the map sheet index 475681. Alternatively you can download the four currently available data sets here flown in spring 2010, in summer 2013, in spring 2014, and in summer 2017. In the following we will use the one flown in spring 2014.

We can view both data sets simultaneously in lasview. By adding ‘-faf’ to the command-line we can switch back and forth between the two data sets by pressing ‘0’ and ‘1’.

lasview -i Kantsi.laz ^
        -i 475681_2014_tava.laz ^
        -points 10000000 ^
        -faf

We next cut the 1 km by 1 km LiDAR tile down to a 250 m by 250 m tile that nicely surrounds our photogrammetric point set using the following las2las command line:

las2las -i 475681_2014_tava.laz ^
        -inside_tile 681485 6475375 250 ^
        -o LiDAR_Kantsi.laz

lasview -i Kantsi.laz ^
        -i LiDAR_Kantsi.laz ^
        -points 10000000 ^
        -faf

Scrutinizing the two data sets we quickly find that there is a misalignment between the dense imagery-derived point cloud and the comparatively sparse LiDAR point cloud. With lasview we investigate the difference between the two point clouds by hovering over a point from one point cloud and pressing <i> and then hovering over a somewhat corresponding point from the other point cloud and pressing <SHIFT>+<i>. We measure displacements of around 2 meters vertically and of around 3 to 3.5 meters in total.

Before we can use the LiDAR points to remove the low noise from the photogrammetric points we must align them properly. For simple translation errors this can be done with a new feature that was recently added to lasview. Make sure to download the latest version (190404 or newer) of LAStools to follow the steps shown in the image sequence below.

las2las -i Kantsi.laz ^
        -translate_xyz 0.89 -1.90 2.51 ^
        -o Kantsi_shifted.laz

lasview -i Kantsi_shifted.laz ^
        -i LiDAR_Kantsi.laz ^
        -points 10000000 ^
        -faf

The result looks good in the sense that both sides of the photogrammetric roof are reasonably well aligned with the LiDAR. But there is still a shift along the roof so we repeat the same thing once more as shown in the next image sequence:

We use a suitable displacement vector and apply it to the photogrammetry points, shifting them again:

las2las -i Kantsi_shifted.laz ^
        -translate_xyz -1.98 -0.95 0.01 ^
        -o Kantsi_shifted_again.laz

lasview -i Kantsi_shifted_again.laz ^
        -i LiDAR_Kantsi.laz ^
        -points 10000000 ^
        -faf

The result is still not perfect as there is also some rotational error and you may find other software such as CloudCompare better suited to align the two point clouds, but for this exercise the alignment shall suffice. Below you see the match between the photogrammetry points and the LiDAR TIN before and after shifting the photogrammetry points with the two (interactively determined) displacement vectors.

The final steps of this exercise use las2dem and the already ground-classified LiDAR to compute a 1 meter DTM, which we then use as input to lasheight. We classify the photogrammetry points by their height above this set of ground points with 1 meter spacing: points that are 40 centimeters or more below the LiDAR DTM are classified as noise (7), and points that are between 40 centimeters below and 1 meter above the LiDAR DTM are classified to a temporary class (here we choose 8) that holds those points that could potentially be ground points. This will help, for example, with subsequent ground classification as large parts of the photogrammetry points – namely those on top of buildings and in higher vegetation – can be ignored from the start by a ground classification algorithm such as lasground.

las2dem -i LiDAR_Kantsi.laz ^
        -keep_class 2 ^
        -kill 1000 ^
        -o LiDAR_Kantsi_dtm_1m.bil
lasheight -i Kantsi_shifted_again.laz ^
          -ground_points LiDAR_Kantsi_dtm_1m.bil ^
          -classify_below -0.4 7 ^
          -classify_between -0.4 1.0 8 ^
          -o Kantsi_cleaned.laz

Below are the results we achieved after “roughly” aligning the two point clouds with some new lasview tricks and then using the LiDAR elevations to classify the photogrammetry points into “low noise”, “potential ground”, and “all else”.

We thank the Estonian Land Board for providing open data with a permissive license. Special thanks also go to the organizers of the UAV Summer School in Tartu, Estonia and the European Regional Development Fund for funding this event. Especially fun was the fabulous excursion to the Emajõe-Suursoo Nature Reserve and through to Lake Peipus aboard, overboard and aboveboard the historical barge “Jõmmu”. If you look carefully you can also find the barge in the photogrammetry point cloud. The photogrammetry data used here was acquired during our lunch stop.

Fun aboard and overboard the historical barge “Jõmmu”.

Removing Excessive Low Noise from Dense-Matching Point Clouds

Point clouds produced with dense-matching by photogrammetry software such as SURE, Pix4D, or Photoscan can include a fair amount of the kind of “low noise” seen below. Low noise causes trouble when attempting to construct a Digital Terrain Model (DTM) from the points as common algorithms for classifying points into ground and non-ground points – such as lasground – tend to “latch onto” those low points, thereby producing a poor representation of the terrain. This blog post describes one possible LAStools workflow for eliminating excessive low noise. It was developed after a question in the LAStools user forum by LASmoons holder Muriel Lavy who was able to share her noisy data with us. See this, this, this, this, this, and this blog post for further reading on this topic.

Here you can download the dense matching point cloud that we are using in the following work flow:

We leave the usual inspection of the content with lasinfo, lasview, and lasvalidate that we always recommend on newly obtained data as an exercise to the reader. Note that a check for proper alignment of flightlines with lasoverlap, which we consider mandatory for LiDAR data, is not applicable for dense-matching points.

With lastile we turn the original file with 87,261,083 points into many smaller 500 by 500 meter tiles for efficient multi-core processing. Each tile is given a 25 meter buffer to avoid edge artifacts. The buffer points are marked as withheld for easier on-the-fly removal. We add a (terser) description of the WGS84 UTM zone 32N to each tile via the corresponding EPSG code 32632:
lastile -i muriel\20161127_Pancalieri_UTM.laz ^
        -tile_size 500 -buffer 25 -flag_as_withheld ^
        -epsg 32632 ^
        -odir muriel\tiles_raw -o panca.laz
Because dense-matching points often have a poor point order in the files they get delivered in we use lassort to rearrange them into a space-filling curve order as this will speed up most following processing steps:
lassort -i muriel\tiles_raw\panca*.laz ^
        -odir muriel\tiles_sorted -olaz ^
        -cores 7
We then run lasthin to reclassify the highest point of every 2.5 by 2.5 meter grid cell with classification code 8. As the spacing of the dense-matched points is around 40 cm in both x and y, around 40 points will fall into each such grid cell from which the highest is then classified as 8:
lasthin -i muriel\tiles_sorted\panca*.laz ^
        -step 2.5 ^
        -highest -classify_as 8 ^
        -odir muriel\tiles_thinned -olaz ^
        -cores 7
Considering only those points classified as 8 in the last step we then run lasnoise to find points that are highly isolated in wide and flat neighborhoods that are then reclassified as 7. See the README file of lasnoise for a detailed explanation of the different parameters:
lasnoise -i muriel\tiles_thinned\panca*.laz ^
         -ignore_class 0 ^
         -step_xy 5 -step_z 0.1 -isolated 4 ^
         -classify_as 7 ^
         -odir muriel\tiles_isolated -olaz ^
         -cores 7
Now we run a temporary ground classification only (!!!) on those points that are still classified as 8 using the default parameters of lasground. Hence we only use the points that were the highest points on the 2.5 by 2.5 meter grid and that were not classified as noise in the previous step. See the README file of lasground for a detailed explanation of the different parameters:
lasground -i muriel\tiles_isolated\panca*.laz ^
          -city -ultra_fine -ignore_class 0 7 ^
          -odir muriel\tiles_temp_ground -olaz ^
          -cores 7
The result of this temporary ground filtering is then merely used to mark all points that are 0.5 meter below the triangulated TIN of these temporary ground points with classification code 12 using lasheight. See the README file of lasheight for a detailed explanation of the different parameters:
lasheight -i muriel\tiles_temp_ground\panca*.laz ^
          -do_not_store_in_user_data ^
          -classify_below -0.5 12 ^
          -odir muriel\tiles_temp_denoised -olaz ^
          -cores 7
In the resulting tiles the low noise points (but also many points above the ground) are now marked, and in a final step we produce properly classified denoised tiles by re-mapping the temporary classification codes to conventions that are more consistent with the ASPRS LAS specification using las2las:
las2las -i muriel\tiles_temp_denoised\panca*.laz ^
        -change_classification_from_to 1 0 ^
        -change_classification_from_to 2 0 ^
        -change_classification_from_to 7 0 ^
        -change_classification_from_to 12 7 ^
        -odir muriel\tiles_denoised -olaz ^
        -cores 7
Let us visually check what each of the above steps has produced by zooming in on a 300 meter by 100 meter strip of points with the bounding box (388500,4963125) to (388800,4963225) in tile ‘panca_388500_4963000.laz’:
The final classification of all points that are not already classified as noise (7) into ground (2) or non-ground (1) was done with a final run of lasground. See the README file of lasground for a detailed explanation of the different parameters:
lasground -i muriel\tiles_denoised\panca*.laz ^
          -ignore_class 7 ^
          -city -ultra_fine ^
          -odir muriel\tiles_ground -olaz ^
          -cores 7
Then we create seamless hill-shaded DTM tiles by triangulating all the points classified as ground into a temporary TIN (including those in the 25 meter buffer) and then rasterizing only the inner 500 meter by 500 meter of each tile with option ‘-use_tile_bb’ of las2dem. For more details on the importance of buffers in tile-based processing see this blog post here.
las2dem -i muriel\tiles_ground\panca*.laz ^
        -keep_class 2 ^
        -step 1 -hillshade ^
        -use_tile_bb ^
        -odir muriel\tiles_dtm -opng ^
        -cores 7

And here is the original DSM side-by-side with the resulting DTM after low noise removal. One densely forested area near the center of the data was not entirely removed due to the lack of ground points in this area. Integrating external ground points or manual editing with lasview are two possible ways to rectify these few remaining errors …

Integrating External Ground Points in Forests to Improve DTM from Dense-Matching Photogrammetry

The biggest problem of generating a Digital Terrain Model (DTM) from the photogrammetric point clouds that are produced from aerial imagery with dense-matching software such as SURE, Pix4D, or Photoscan is dense vegetation: when plants completely cover the terrain not a single point is generated on the ground. This is different for LiDAR point clouds as the laser can even penetrate dense multi-level tropical forests. The complete lack of ground points in larger vegetated areas such as closed forests or dense plantations means that the many processing workflows for vegetation analysis that have been developed for LiDAR cannot be used for photogrammetric point clouds  … unless … well unless we are getting those missing ground points some other way. In the following we see how to integrate external ground points to generate a reasonable DTM under a dense forest with LAStools. See this, this, this, this, and this article for further reading.

Here you can download the dense matching point cloud, the manually collected ground points, and the forest stand delineating polygon that we are using in the following example work flow:

We leave the usual inspection of the content with lasinfo and lasview that we always recommend on newly obtained data as an exercise to the reader. Using las2dem and lasgrid we created the Google Earth overlays shown above to visualize the extent of the dense matched point cloud and the distribution of the manually collected ground points:

las2dem -i DenseMatching.laz ^
        -thin_with_grid 1.0 ^
        -extra_pass ^
        -step 2.0 ^
        -hillshade ^
        -odix _hill_2m -opng

lasgrid -i ManualGround.laz ^
        -set_RGB 255 0 0 ^
        -step 10 -rgb ^
        -odix _grid_10m -opng

Attempts to ground-classify the dense matching point cloud directly are futile as there are no ground points under the canopy in the heavily forested area. Therefore 558 ground points were manually surveyed in the forest of interest, spaced around 50 to 120 meters apart from one another. We show how to integrate these points into the dense matching point cloud such that we can successfully extract bare-earth information from the data.

In the first step we “densify” the manually collected ground points by interpolating them with triangles onto a raster of 2 meter resolution that we store as LAZ points with las2dem. You could consider other interpolation schemes to “densify” the ground points, here we use simple linear interpolation to prove the concept. Due to the varying distance between the manually surveyed ground points we allow interpolating triangles with edge lengths of up to 125 meters. These triangles then also cover narrow open areas next to the forest, so we clip the interpolated ground points against the forest stand delineating polygon with lasclip to classify those points that are really in the forest as “key points” (class 8) and all others as “noise” (class 7).

las2dem -i ManualGround.laz ^
        -step 2 ^
        -kill 125 ^
        -odix _2m -olaz

lasclip -i ManualGround_2m.laz ^
        -set_classification 7 ^ 
        -poly forest.shp ^
        -classify_as 8 -interior ^
        -odix _forest -olaz

Below we show the resulting densified ground points colored by elevation that survive the clipping against the forest stand delineating polygon and were classified as “key points” (class 8). The interpolated ground points in narrow open areas next to the forest that fall outside this polygon were classified as “noise” (class 7) and are shown in violet. They will be dropped in the next step.

We then merge the dense matching points with the densified manual ground points (while dropping all the violet points marked as noise) as input to lasthin and reclassify the lowest point per 1 meter by 1 meter cell with a temporary code (here we use class 9, which usually refers to “water”). Only the subset of lowest points that receives the temporary classification code 9 will be used for ground classification later.

lasthin -i DenseMatching.laz ^
        -i ManualGround_2m_forest.laz ^
        -drop_class 7 ^
        -merged ^
        -lowest -step 1 -classify_as 9 ^
        -o DenseMatchingAndDensifiedGround.laz

We use the GUI of lasview to pick several interesting areas for visual inspection. The selected points load much faster when the LAZ file is spatially indexed and therefore we first run lasindex. For better orientation we also load the forest stand delineating polygon as an overlay into the GUI.

lasindex -i DenseMatchingAndDensifiedGround.laz 

lasview -i DenseMatchingAndDensifiedGround.laz -gui

We pick the area shown below that contains the target forest with manually collected and densified ground points and a forested area with only dense matching points. The difference could not be more drastic as the visualizations show.

Now we run ground classification using lasground with option ‘-town’ using only the points with the temporary code 9 by ignoring all other classifications 0 and 8 in the file. We leave the temporary classification code 9 unchanged for all the points that were not classified with “ground” code 2 so we can visualize later which ones those are.

lasground -i DenseMatchingAndDensifiedGround.laz ^
          -ignore_class 0 8 ^
          -town ^
          -non_ground_unchanged ^
          -o GroundClassified.laz

We again use the GUI of lasview to pick several interesting areas after running lasindex and again load the forest stand delineating polygon as an overlay into the GUI.

lasindex -i GroundClassified.laz 

lasview -i GroundClassified.laz -gui

We pick the area shown below that contains all three scenarios: the target forest with manually collected and densified ground points, an open area with only dense matching points, and a forested area with only dense matching points. The result is as expected: in the target forest the manually collected ground points are used as ground and in the open area the dense-matching points are used as ground. But there is no useful ground in the other forested area.

Now we can compute the heights of the points above ground for our target forest with lasheight and either replace the z elevations in the file or store them separately as “extra bytes”. Then we can compute, for example, a Canopy Height Model (CHM) that color codes the height of the vegetation above the ground with lasgrid. Of course this will only be correct in the target forest where we have “good” ground but not in the other forested areas. We also compute a hillshaded DTM to be able to visually inspect the topography of the generated terrain model.

lasheight -i GroundClassified.laz ^
          -store_as_extra_bytes ^
          -o GroundClassifiedWithHeights.laz

lasgrid -i GroundClassifiedWithHeights.laz ^
        -step 2 ^
        -highest -attribute 0 ^
        -false -set_min_max 0 25 ^
        -o chm.png

las2dem -i GroundClassified.laz ^
        -keep_class 2 -extra_pass ^
        -step 2 ^ 
        -hillshade ^
        -o dtm.png

Here you can download the resulting color-coded CHM and the resulting hill-shaded DTM as Google Earth KMZ overlays. Clearly the resulting CHM is only meaningful in the target forest where we used the manually collected ground points to create a reasonable DTM. In the other forested areas the ground is only correct near the forest edges and gets worse with increasing distance from open areas. The resulting DTM exhibits some interesting looking bumps in the middle of areas with manually collected ground points. Those are a result of using the dense-matching points as ground whenever their elevation is lower than that of the manually collected points (which is decided in the lasthin step). Whether those bumps represent true elevations or are artifacts of erroneously low elevations from dense-matching remains to be investigated.

For forests on complex and steep terrain the number of ground points that needs to be manually collected may make such an approach infeasible in practice. However, maybe you have another source of elevation, such as a low-resolution DTM of 10 or 25 meter provided by your local government. Or maybe even a high resolution DTM of 1 or 2 meter from a LiDAR survey you did several years ago. While the forest may have grown a lot in the past years, the ground under the forest will probably not have changed much …
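As a hedged sketch of that idea, such an older or coarser DTM (hypothetical file name government_dtm_10m.bil) could take the place of the densified manual ground points, mirroring the lasheight screening used in the open-LiDAR article above:

:: points well below the older DTM become noise (7), those near it potential ground (8)
lasheight -i DenseMatching.laz ^
          -ground_points government_dtm_10m.bil ^
          -classify_below -0.4 7 ^
          -classify_between -0.4 1.0 8 ^
          -o DenseMatchingScreened.laz

A lasground run that ignores the noise points (and, if desired, everything except the class 8 candidates) could then proceed as in the workflow above.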

LASmoons: Stéphane Henriod

Stéphane Henriod (recipient of three LASmoons)
National Statistical Committee of the Kyrgyz Republic
Bishkek, KYRGYZSTAN

This pilot study is part of the International Climate Initiative project called “Ecosystem based Adaptation to Climate change in the high mountainous regions of Central Asia” that is funded by the Federal Ministry for the Environment, Nature Conservation, Building and Nuclear Safety (BMU) of Germany and implemented by the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) GmbH in Kyrgyzstan, Tajikistan and Kazakhstan.


Background:
The ecosystems in high mountainous regions of Central Asia are characterized by a unique diversity of flora and fauna. In addition, they are the foundation of the livelihoods of the local population. Specific benefits include clean water, pasture, forest products, protection against floods and landslides, maintenance of soil fertility, and ecotourism. However, the consequences of climate change such as melting glaciers, changing river runoff regimes, and weather anomalies including sharp temperature fluctuations and non-typical precipitation result in negative impacts on these ecosystems. Coupled with unwise land use, these events damage fragile mountain ecosystems and reduce their regeneration ability undermining the local population’s livelihoods. Therefore, people living in rural areas and directly depending on natural resources must adapt to adverse impacts of climate change. This can be done through a set of measures, known in the world practice as ecosystem-based adaptation (EbA) approach. It promotes the sustainable use of natural resources to sustain and enhance the livelihood of the population depending on those resources.

Goal:
In two selected pilot regions of Kyrgyzstan and Tajikistan, planned measures will concentrate on climate-informed management of ecosystems in order to maintain their services for the rural population. EbA always starts with identifying the vulnerability of the local population. Besides analyzing the socio-economic situation of the local population, this includes (1) assessing the ecological conditions of the ecosystems in the watershed and the related ecosystem services people benefit from, (2) identifying potential disaster risks, and (3) analyzing glacier dynamics with its response to water runoff. As a baseline to achieve this and to get spatially explicit, remote sensing based techniques and mapping activities need to be utilized.

A first UAV (unmanned aerial vehicle) mission has taken place in the Darjomj watershed of the Bartang valley in July 2016. RGB-NIR images as well as a high-resolution Digital Surface Model have been produced that now need to be segmented and analysed in order to produce comprehensive information. The main processing that will take advantage of LAStools is the generation of a DTM from the DSM that will then be used for identifying risk areas (flood zones, landslides and avalanches, etc.). The results of this approach will ultimately be compared with lower-cost satellite images (RapidEye, Planet, Sentinel).

Data:
+ High-resolution RGB and NIR image (10 cm) from a SenseFly Ebee
+ High-resolution DSM (10 cm) from a SenseFly Ebee

LAStools processing:
1) classify the DSM points obtained via dense-matching photogrammetry of the SenseFly Ebee imagery into ground and non-ground points via processing pipelines as described here and here [lastile, lassort, lasnoise, lasground]
2) create a DTM [las2dem, lasgrid, blast2dem]
3) produce 3D visualisations to facilitate the communication around adaptation to climate change [lasview]
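A minimal sketch of step 1) is given below, assuming the 10 cm eBee DSM has been exported as a point cloud in LAZ format (hypothetical file name darjomj_dsm.laz); the tile size, noise thresholds, and lasground preset are placeholders that would need tuning for steep mountainous terrain.

lastile -i darjomj_dsm.laz ^
        -tile_size 250 -buffer 25 -flag_as_withheld ^
        -odir tiles_raw -o darjomj.laz

lassort -i tiles_raw\darjomj*.laz ^
        -odir tiles_sorted -olaz -cores 4

lasnoise -i tiles_sorted\darjomj*.laz ^
         -step_xy 4 -step_z 0.5 -isolated 5 ^
         -classify_as 7 ^
         -odir tiles_denoised -olaz -cores 4

lasground -i tiles_denoised\darjomj*.laz ^
          -ignore_class 7 ^
          -town -ultra_fine ^
          -odir tiles_ground -olaz -cores 4

:: step 2): merge the ground points into a seamless half-meter DTM hillshade
blast2dem -i tiles_ground\darjomj*.laz -merged ^
          -drop_withheld -keep_class 2 ^
          -step 0.5 -hillshade ^
          -o darjomj_dtm_hill.png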

Creating a Better DTM from Photogrammetric Points by Avoiding Shadows

At INTERGEO 2016 in Hamburg, the guys from Aerowest GmbH shared with us a small photogrammetric point cloud from the city of Soest that had been generated with the SURE dense-matching software from nFrames. We want to test whether – using LAStools – we can generate a decent DTM from these points that are essentially a gridded DSM. If this interests you, also see this, this, this, and this story.


Here you can download the four original tiles (tile1, tile2, tile3, tile4) that we are using in these experiments. We first re-tile them into smaller 100 meter by 100 meter tiles with a 20 meter buffer using lastile. We use option ‘-flag_as_withheld’ that flags all the points falling into the buffer using the withheld flag so they can easily be removed on-the-fly later with the ‘-drop_withheld’ filter (see the README for more on this). We also add the missing projection with ‘-epsg 32632’.

lastile -i original\*.laz ^
        -tile_size 100 -buffer 20 ^
        -flag_as_withheld ^
        -epsg 32632 ^
        -odir tiles_raw -o soest.laz

Below is a screenshot from one of the resulting 100 meter by 100 meter tiles (with 20 meter buffer) that we will be focusing on in the following experiments.


The tile ‘soest_437900_5713800.laz’ used in all experiments.

Next we use lassort to reorder the points along a coherent space-filling curve as the existing scan-line order has the potential to cause our triangulation engine to slow down. We do this on 4 cores.

lassort -i tiles_raw\*.laz ^
        -odir tiles_sorted -olaz ^
        -cores 4

We then use lasthin to lower the number of points that we consider as ground points (see the README for more info on this tool). We do this because the original 5 cm spacing of the dense matching points is a bit excessive for generating a DTM with a resolution of, for example, 50 cm. Instead we only use the lowest point in each 20 cm by 20 cm cell as a candidate for becoming a ground point, which reduces the number of considered points by a factor of 16. We achieve this by classifying these lowest points with a unique classification code (here: 9) and later telling lasground to ignore all other classifications.

lasthin -i tiles_sorted\*.laz ^
        -step 0.2 -lowest -classify_as 9 ^
        -odir tiles_thinned -olaz ^
        -cores 4
Then we run lasground on 4 cores to classify the ground points with options ‘-step 20’, ‘-bulge 0.5’, ‘-spike 0.5’ and ‘-fine’ that we selected after two trials on a single tile. There are several other options in lasground to play with that may achieve better results on other data sets (see README file for more on this). The ‘-ignore_class 0’ option instructs lasground to ignore all points that are not classified so that only those points that lasthin was classifying as 9 in the previous step are considered.
lasground -i tiles_thinned\*.laz ^
          -step 20 -bulge 0.5 -spike 0.5 -fine ^
          -ignore_class 0 ^
          -odir tiles_ground -olaz ^
          -cores 4
However, when we scrutinize the resulting ground classification we notice that there are bumps in the corresponding ground TIN that seem to correspond to areas where the original imagery has dark shadows, which in turn seem to significantly affect the geometric accuracy of the points generated by the dense-matching algorithm.
Looking at the bump from below we identify the RGB colors of the points that form the bump, which seem to be of much lower accuracy than the rest. This is an effect that we have noticed in the past for data generated with other dense-matching software and maybe an approach similar to the one we take here could also help with that low noise. It seems that points that are generated from shadowed areas in the input images can have a much lower accuracy than those from well-lit areas. We use this correlation between RGB color and geometric accuracy to simply exclude all points whose RGB colors indicate that they might be from shadow areas from becoming ground points.

The RGB colors of low-accuracy points are mostly from very dark shadowed areas.

We use las2las with the relatively new ‘-filtered_transform’ option to reclassify all points whose RGB color is close to zero to classification code 7 (see README file for more on this). We do this for all points whose red value is between 0 and 30, whose green value is between 0 and 35, and whose blue value is between 0 and 40. Because the RGB values were stored with 16 bits in these files we have to multiply these values by 256 when applying the filter.
las2las -i tiles_thinned\*.laz ^
        -keep_RGB_red 0 7680 ^
        -keep_RGB_green 0 8960 ^
        -keep_RGB_blue 0 10240 ^
        -filtered_transform ^
        -set_classification 7 ^
        -odir tiles_rgb_filtered -olaz ^
        -cores 4
Below you see all points that will no longer be considered because their classification was set to 7 by the command above.

Points whose RGB values indicate they might lie in the shadows.

We then re-run lasground with the same options ‘-step 20’, ‘-bulge 0.5’, ‘-spike 0.5’ and ‘-fine’ as before, but now we ignore all points that still have classification 0 because they were not classified as 9 by lasthin earlier and we also ignore all points that have been assigned classification 7 by las2las in the previous step.
lasground -i tiles_rgb_filtered\*.laz ^
          -step 20 -bulge 0.5 -spike 0.5 -fine ^
          -ignore_class 0 7 ^
          -odir tiles_ground -olaz ^
          -cores 4
The situation has significantly improved for the bump we saw before, as you can see in the images below.

Finally we create a DTM with blast2dem (see README) and a DSM with lasgrid (see README) by merging all points into one file but dropping the buffer points that were marked as withheld by the initial run of lastile (see README).

blast2dem -i tiles_ground\*.laz -merged ^
          -drop_withheld -keep_class 2 ^
          -step 0.5 ^
          -o dtm.bil

lasgrid -i tiles_ground\*.laz -merged ^
        -drop_withheld ^
        -step 0.5 -average ^
        -o dsm.bil
As our venerable lasview (see README) can directly read BIL rasters as points (just like all the other LAStools), we can compare the DTM and the DSM by loading them as two flight lines (‘-faf’) and then switching back and forth between the two by pressing ‘0’ and ‘1’ in the viewer.
lasview -i dtm.bil dsm.bil -faf

Above you see the final DTM and the original DSM. So yes, LAStools can definitely create a DTM from point clouds that are the result of dense-matching photogrammetry. We used the correlation between shadowed areas in the source imagery and geometric errors to exclude from ground consideration those points that are likely to be too low and would result in bumps. For dense-matching algorithms that also output an uncertainty value for each point there is potential for even better results, as our range of eliminated RGB colors may not cover all geometrically uncertain points while at the same time it may be too aggressive and also remove correct ground points.

LASmoons: Jakob Iglhaut

Jakob Iglhaut (recipient of three LASmoons)
Program for Geospatial Information Management
Carinthia University of Applied Sciences, Villach, AUSTRIA

Background:
As part of the EU LIFE programme, two river stretches in Carinthia, Austria have recently been subject to restoration measures. The LIFE project aims at protecting valuable riverine flora and fauna while improving flood protection. By remodelling the river beds and constructing groynes and still water bodies, the river environment was brought back to a more natural morphology and state. The joint R&D project “Remotely Piloted Aircraft Multi Sensor System (RPAMSS)” aims at capturing multi-dimensional environmental data in order to monitor the development of these river stretches in a holistic way. Flights with an RTK-capable fixed-wing UAV are carried out at particular sections of the rivers Gail and Drau respectively. The project site at the Upper Drau is located in the area of Obergottesfeld, Austria (560 m ASL), with an area of approximately 3.5 km² currently being remotely monitored by the RPAMSS. The second study area is located close to Feistritz at the river Gail (550 m ASL) with an area of approx. 0.9 km². Apart from being addressed by the LIFE project, both study areas are also designated NATURA 2000 nature protection sites. In both areas frequent UAV flights are carried out collecting high-resolution multi-spectral imagery. Structure from Motion photogrammetry enables the creation of high-density multi-spectral point clouds.


Goal:
The aim of the project is to assess the morphology and related temporal changes of the described riverine environment based on SfM point clouds. A full processing chain will be developed to take full advantage of the high-density data. Particular interest lies in the extraction of ground points underneath vegetation under leaf-on/leaf-off conditions. Ground points will be gridded to generate DTMs. The quality of the data will be assessed against an ALS-acquired DTM. Furthermore, forest metrics will be extracted for the riparian zone in order to quantify its current state and changes.

Data:
+ High-density multi-spectral (R,G,B,NIR) SfM-derived point clouds (UAS imagery)
+ Variable point densities, GSD ~3cm.

LAStools processing:
1) fix incoherent point ordering of the SfM output [lassort]
2) create 100 m tiles (10 m buffer) for parallel processing [lastile]
3) remove noise introduced by the SfM algorithm [lasnoise]
4) extract ground points [lasground_new]
5) generate normalized heights above ground [lasheight]
6) classify based on height-above-ground (low veg, high veg) [lasheight]
7) create DSM and DTM [blast2dem]
8) generate a Canopy Height Model (CHM) using the pit-free method of Khosravipour et al. (2014) with the workflow described here [lasthin, las2dem, lasgrid]
9) sub-sample the point clouds for other (spectral) analyses [lassplit, lasthin, lasmerge]
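A minimal sketch of steps 1) through 5) plus a DTM for step 7) could look as follows; the input file name gail_sfm.laz, the tile size, and all thresholds are placeholder assumptions to be adapted to the actual point densities.

lastile -i gail_sfm.laz ^
        -tile_size 100 -buffer 10 -flag_as_withheld ^
        -odir tiles_raw -o gail.laz

lassort -i tiles_raw\gail*.laz ^
        -odir tiles_sorted -olaz -cores 4

lasnoise -i tiles_sorted\gail*.laz ^
         -step_xy 2 -step_z 0.3 -isolated 5 ^
         -classify_as 7 ^
         -odir tiles_denoised -olaz -cores 4

lasground_new -i tiles_denoised\gail*.laz ^
              -ignore_class 7 ^
              -town -ultra_fine ^
              -odir tiles_ground -olaz -cores 4

:: per-point heights above ground, stored as "extra bytes" for later CHM rasterization
lasheight -i tiles_ground\gail*.laz ^
          -store_as_extra_bytes ^
          -odir tiles_height -olaz -cores 4

:: seamless half-meter DTM hillshade from the ground points
blast2dem -i tiles_ground\gail*.laz -merged ^
          -drop_withheld -keep_class 2 ^
          -step 0.5 -hillshade ^
          -o gail_dtm_hill.png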

References:
Westoby, M. J., et al. “Structure-from-Motion photogrammetry: A low-cost, effective tool for geoscience applications.” Geomorphology 179 (2012): 300-314.
Fonstad, Mark A., et al. “Topographic structure from motion: a new development in photogrammetric measurement.” Earth Surface Processes and Landforms 38.4 (2013): 421-430.
Khosravipour, A., Skidmore, A.K., Isenburg, M., Wang, T.J., Hussin, Y.A., 2014. Generating pit-free Canopy Height Models from Airborne LiDAR. PE&RS = Photogrammetric Engineering and Remote Sensing 80, 863-872.
Javernick, L., J. Brasington, and B. Caruso. “Modeling the topography of shallow braided rivers using Structure-from-Motion photogrammetry.” Geomorphology 213 (2014): 166-182.

Removing low noise from Semi-Global Matching (SGM) Points

At PhoWo and INTERGEO 2015 rapidlasso was spending quality time with VisionMap, who make the A3 Edge camera that provides fine-resolution images from high altitudes and can quickly cover large areas. Under the hood of their LightSpeed software is the SURE dense-matching algorithm from nFrames that turns images into photogrammetric point clouds. We were asked whether LAStools is able to create bare-earth DTM rasters from such points. If you have read our first, second, or third blog post on the topic you know that our answer was a resounding “YES!”. But we ran into an issue due to the large amount of low noise. Maybe the narrow angle between images at a high flying altitude affects the semi-global matching (SGM) algorithm. Either way, in the following we show how we use lascanopy and lasheight to mark low points as noise in a preprocessing step.

We obtained a USB stick containing a 2.42 GB file called “valparaiso_DSM_SURE_100.las” with about 100 million points spaced 10 cm apart, generated by SURE and stored with an (unnecessarily high) resolution of millimeters (aka “resolution fluff”), as the third digit of all coordinates was always zero:

las2txt -i F:\valparaiso_DSM_SURE_100.las -stdout | more
255991.440 6339659.230 89.270
255991.540 6339659.240 89.270
255991.640 6339659.240 88.660
255991.740 6339659.230 88.730
[...]

We first compressed the bulky 2.42 GB LAS file into a compact 0.23 GB LAZ to our local hard drive – a file that is 10 times smaller and that will be 10 times faster to copy:

laszip -i F:\valparaiso_DSM_SURE_100.las ^
       -rescale 0.01 0.01 0.01 ^
       -o valparaiso_DSM_SURE_100.laz

Then we tiled the 100 million points into 250 meter by 250 meter tiles with 25 meter buffer using lastile. We use the new option ‘-flag_as_withheld’ to mark all buffer points with the withheld flag so they can be easily removed on-the-fly via the ‘-drop_withheld’ command-line filter (also see the README file).

lastile -i valparaiso_DSM_SURE_100.laz ^
        -tile_size 250 -buffer 25 ^
        -flag_as_withheld ^
        -odir valparaiso_tiles_raw -o val.laz

250 meter by 250 meter tiles with 25 meter buffer

Before processing millions to billions of points we experiment with different options to find what works best on a smaller area, namely the tile “val_256750_6338500.laz” pointed to above. Using the workflow from this blog post did not give perfect results due to the large amount of low noise. Although many low points were marked as noise (violet) by lasnoise, too many ended up classified as ground (brown) by lasground as seen here:

excessive low noise affects ground classification

We use lascanopy – a tool very popular with forestry folks – to compute four BIL rasters where each 5m by 5m grid cell contains the 5th, 10th, 15th, and 20th percentile of the elevation values from all points falling into a cell (also see the README file):
lascanopy -i val_256750_6338500.laz ^
          -height_cutoff -1000 -step 5 ^
          -p 5 10 15 20 ^
          -obil
The four resulting rasters can be visually inspected and compared with lasview:
lasview -i val_256750_6338500_*.bil -files_are_flightlines

comparing the 5th and the 10th elevation percentiles

By pressing the hot keys <0>, <1>, <2> and <3> to switch between the different percentiles and <t> to triangulate them into a surface, we can see that for this example the 10th percentile works well while the 5th percentile is still affected by the low noise. Hence we use the 10th percentile elevation surface and classify all points below it as noise with lasheight (also see the README file).
lasheight -i val_256750_6338500.laz ^
          -ground_points val_256750_6338500_p10.bil ^
          -classify_below -0.5 7 ^
          -odix _denoised -olaz
We visually confirm that all low points were classified as noise (violet). Note the obvious “edge artifact” along the front boundary of the tile. This is why we always recommend using a buffer in tile-based processing.

points below 10th percentile surface marked as noise

At the end of the blog post we give the entire command sequence that first computes a 10th percentile raster with 5 m resolution for the entire file with lascanopy and then marks all points of each tile below the 10th percentile surface as noise with lasheight. When we classify all points into ground and non-ground points with lasground we ignore all points classified as noise. Here are the results:


DTM extracted from dense-matching points despite low noise


corresponding DSM with all buildings and vegetation included

Above you see the generated DTM and the corresponding DSM. So yes, LAStools can create DTMs from points that are the result of dense-matching photogrammetry … even when there is a lot of low noise. There are many other ways to mix and match the modules of LAStools for more refined workflows. Sometimes declaring all points below the 10th percentile surface as noise may be too aggressive. In a future blog post we will look at how to combine lascanopy and lasnoise for a more adaptive approach.

:: compute 10th percentile for entire area
lascanopy -i valparaiso_DSM_SURE_100.laz ^
          -height_cutoff -1000 -step 5 ^
          -p 10 ^
          -obil

:: tile input into 250 meter tiles with buffer
lastile -i valparaiso_DSM_SURE_100.laz ^
        -tile_size 250 -buffer 25 ^
        -flag_as_withheld ^
        -odir valparaiso_tiles_raw -o val.laz

:: mark points below the 10th percentile surface as noise
lasheight -i valparaiso_tiles_raw/*.laz ^
          -ground_points valparaiso_DSM_SURE_100_p10.bil ^
          -classify_below -0.5 7 ^
          -odir valparaiso_tiles_denoised -olaz ^
          -cores 4

:: ground classify while ignoring noise points
 lasground -i valparaiso_tiles_denoised\*.laz ^
          -ignore_class 7 ^
          -town -bulge 0.5 ^
          -odir valparaiso_tiles_ground -olaz ^
          -cores 4 

:: create 50 cm DTM rasters in BIL format
las2dem -i valparaiso_tiles_ground\*.laz ^
        -keep_class 2 ^
        -step 0.5 -kill 200 -use_tile_bb ^
        -odir valparaiso_tiles_dtm -obil ^
        -cores 4 

:: average 50 cm DTM values into single 1m DTM 
lasgrid -i valparaiso_tiles_dtm\*.bil -merged ^
        -step 1.0 -average ^
        -o valparaiso_dtm.bil

:: create hillshade, adding UTM zone 19 (southern hemisphere)
blast2dem -i valparaiso_dtm.bil ^
          -hillshade -utm 19M ^
          -o valparaiso_dtm_hill.png

:: create DSM hillshade with same three steps
las2dem -i valparaiso_tiles_raw\*.laz ^
        -step 0.5 -kill 200 -use_tile_bb ^
        -odir valparaiso_tiles_dsm -obil ^
        -cores 4
lasgrid -i valparaiso_tiles_dsm\*.bil -merged ^
        -step 1.0 -average ^
        -o valparaiso_dsm.bil
blast2dem -i valparaiso_dsm.bil ^
          -hillshade -utm 19M ^
          -o valparaiso_dsm_hill.png