Background: Hydrological models require various input data for flood vulnerability mapping. An important input for flood vulnerability mapping is the DTM over which flow is being routed. DTMs are generated using cartography, ground surveying, digital aerial photogrammetry, interferometric SAR (InSAR), and LiDAR, amongst other means. The accuracy of high-resolution DTMs minimizes the errors that may emanate from the input data when conducting hydrological modelling, especially in small built-up catchment areas. This research involves the application of digital aerial photogrammetry to generate point clouds which can subsequently be utilized for flood vulnerability mapping.
photogrammetry point cloud
photogrammetric DSM
Goal:
To consolidate previous gains in using LAStools to generate the DTMs required for flood vulnerability mapping. The suitability of these DTMs for flood vulnerability analysis will subsequently be validated, and the results will be compared with other DTMs in order to determine the uncertainty associated with the use of such DTMs for flood vulnerability mapping.
Data:
+ high-resolution photogrammetry point cloud and DSM for Lagos Island, Ikorodu and Ajah, Nigeria
– imagery obtained with a senseFly eBee drone flight
– photogrammetry point cloud generated with PhotoScan by Agisoft
+ rainfall data
+ classified LiDAR point cloud with a resolution of 1 pulse per square meter obtained for the study area from the Lagos State Government
LAStools processing:
1) tile large photogrammetry point cloud into tiles with buffer [lastile]
2) mark a set of points whose z coordinate is a certain percentile of that of their neighbors [lasthin]
3) remove isolated low points from the set of marked points [lasnoise]
4) classify marked points into ground and non-ground [lasground]
5) pull in points close above and below the ground [lasheight]
6) create a Digital Terrain Model (DTM) from the ground points [las2dem]
7) merge and hillshade the individual raster DTMs [blast2dem]
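A hedged sketch of what such a command sequence could look like is given below. All file names and tuning parameters (tile size, thinning step, percentile, lasground granularity) are illustrative assumptions rather than values from the actual project:

lastile -i lagos\*.laz -tile_size 250 -buffer 30 -odir tiles_raw -olaz
lasthin -i tiles_raw\*.laz -set_classification 1 -step 0.5 -percentile 10 -classify_as 8 -odir tiles_thinned -olaz -cores 4
lasnoise -i tiles_thinned\*.laz -ignore_class 1 -isolated 3 -classify_as 7 -odir tiles_denoised -olaz -cores 4
lasground -i tiles_denoised\*.laz -ignore_class 1 7 -town -ultra_fine -odir tiles_ground -olaz -cores 4
lasheight -i tiles_ground\*.laz -classify_between -0.05 0.15 2 -odir tiles_height -olaz -cores 4
las2dem -i tiles_height\*.laz -keep_class 2 -step 0.5 -use_tile_bb -odir tiles_dtm -olaz -cores 4
blast2dem -i tiles_dtm\*.laz -merged -hillshade -o dtm_hillshaded.png

The same skeleton, with adjusted parameters, applies to the similar "LAStools processing" lists of the other proposals further below.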
In the last article about the Livox MID-40 LiDAR scan of Samara we quality-checked the data, aligned the flight lines, and cleaned the remaining spurious scan lines. In this article we will process this data into the standard products. A focus will be on generating a smooth “median ground” surface from the “fluffy” scanner data. You can get the flight lines here and follow along with the processing after downloading LAStools. Below is one result of this work.
The first processing step will be to tile the strips into smaller tiles that contain fewer points for faster and also parallel processing. But one quick “flat terrain” trick first. Often there are spurious points far above or below the terrain. For a relatively flat area these can easily be identified by computing a histogram of elevation values with lasinfo and then eliminated with simple drop-filters on the z coordinate.
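Such a histogram can be computed, for example, with (the input wildcard is ours):

lasinfo -i strips\*.laz -merged -histo z 1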
The relevant excerpts of the output of the lasinfo report are shown below:
[…]
z coordinate histogram with bin size 1.000000
  bin -104 has 1
  bin 5 has 1
  bin 11 has 273762
  bin 12 has 1387999
  bin 13 has 5598767
  bin 14 has 36100225
  bin 15 has 53521371
  […]
  bin 59 has 60308
  bin 60 has 26313
  bin 61 has 284
  bin 65 has 10
  bin 66 has 31
  bin 67 has 12
  bin 68 has 1
  bin 83 has 3
  bin 84 has 4
  bin 93 has 31
  bin 94 has 93
  bin 95 has 17
[…]
The few points below 11 meters and above 61 meters in elevation can be considered spurious. In the initial tiling step with lastile we add simple elevation filters to drop those points on-the-fly from the buffered tiles. The importance of buffers when processing LiDAR in tiles is discussed in this article. With lastile we create tiles of size 125 meters with a buffer of 20 meters, while removing the points identified as spurious with the appropriate filters. Because the input strips have their “file source ID” in the LAS header correctly set, we use ‘-apply_file_source_ID’ to set the “point source ID” of every point to this value. This preserves the information of which point comes from which flight line.
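The corresponding lastile call might have looked as follows; the directory names are ours and the drop filters implement the thresholds derived from the histogram above:

lastile -i strips\*.laz ^
        -drop_z_below 11 -drop_z_above 61 ^
        -apply_file_source_ID ^
        -tile_size 125 -buffer 20 -flag_as_withheld ^
        -odir tiles_raw -olaz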
This produces 49 buffered tiles that will now be processed similarly to the workflow outlined for another lower-priced system that generates similarly “fluffy” point clouds on hard surfaces, the Velodyne HDL-32E, described here and here. What do we mean by “fluffy”? We cut out a 1 meter slice across the road with the new ‘-keep_profile’ filter and las2las and inspect it with lasview.
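A sketch of cutting and viewing such a slice is below. The tile name and the profile endpoint coordinates are placeholders; as we recall, the ‘-keep_profile’ filter takes the two endpoints of the profile line followed by its width in meters:

las2las -i tiles_raw\tile_xxx.laz ^
        -keep_profile 630300 2363550 630320 2363550 1 ^
        -o slice.laz
lasview -i slice.laz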
In the view below we pressed the hot key ‘]’ twice to exaggerate the z scale. The “fuzziness” is the thickness of the point cloud in the middle of this flat tar road. It is around 20 to 25 centimeters and is equally evident in both flight lines. What is the correct ground surface through this 20 to 25 centimeter “thick” road? We will compute a “median ground” that falls roughly into the middle of this “fluffy” surface.
slice of 1 meter width across the tar road in front of Omael store
The next three lasthin runs mark a subset of low candidate points for our lasground filtering. In every 25 cm by 25 cm, every 33 cm by 33 cm, and every 50 cm by 50 cm area we reclassify the point closest to the 10th elevation percentile as class 8. In the first call to lasthin we put all other points into class 1.
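The three runs could look as follows; note the ‘-set_classification 1’ in the first call that moves all points into class 1 before the percentile points are marked as class 8 (the directory names are ours):

lasthin -i tiles_raw\*.laz -set_classification 1 ^
        -step 0.25 -percentile 10 -classify_as 8 ^
        -odir tiles_thinned_25 -olaz
lasthin -i tiles_thinned_25\*.laz ^
        -step 0.33 -percentile 10 -classify_as 8 ^
        -odir tiles_thinned_33 -olaz
lasthin -i tiles_thinned_33\*.laz ^
        -step 0.5 -percentile 10 -classify_as 8 ^
        -odir tiles_thinned_50 -olaz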
Below you can see the resulting points of the 10th percentile classified as class 8 in red.
Operating only on the points classified as 8 (i.e. ignoring those classified as 1) we then run a ground classification with lasground using the following command line, which creates a “low ground” classification.
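The exact command line did not survive the formatting of this article, so below is a plausible reconstruction; the terrain-type and granularity options are assumptions:

lasground -i tiles_thinned_50\*.laz ^
          -ignore_class 1 ^
          -town -ultra_fine ^
          -odir tiles_ground_low -olaz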
Since this is an open road this classifies most of the red points as ground points.
Using lasheight we then create a “thick ground” by pulling all those points into the ground surface that are between 5 centimeters below and 17 centimeters above the “low ground”. For visualization purposes we temporarily use class 6 to capture this thickened ground.
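With the “low ground” in class 2 (the default ground class of lasheight), this step can be sketched as (directory names are ours):

lasheight -i tiles_ground_low\*.laz ^
          -classify_between -0.05 0.17 6 ^
          -odir tiles_ground_thick -olaz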
We go back to lasthin and reclassify in every 50 cm by 50 cm area the point closest to the 50th percentile as class 8. This is what we call the “median ground”.
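A sketch of this call, operating only on the “thick ground” in temporary class 6 by ignoring class 1:

lasthin -i tiles_ground_thick\*.laz ^
        -ignore_class 1 ^
        -step 0.5 -percentile 50 -classify_as 8 ^
        -odir tiles_ground_median -olaz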
The final “median ground” points are shown in red below. These are the points we will use to eventually compute the DTM.
We complete the fully automated classification available in LAStools by running lasclassify with the following options. See the README file for what these options mean. Note that we move the “thick ground” from the temporary class 6 to the proper class 2. The “median ground” continues to be in class 8.
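We did not preserve the exact invocation, but it would be of roughly this form; the planarity and ruggedness thresholds shown are common defaults that we assume here, and the ‘-change_classification_from_to 6 2’ transform performs the move from temporary class 6 to class 2:

lasclassify -i tiles_ground_median\*.laz ^
            -change_classification_from_to 6 2 ^
            -planar 0.1 -rugged 0.4 ^
            -odir tiles_classified -olaz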
Before the resulting tiles are published or shared with others we should remove the temporary buffers, which is done with lastile – the same tool that created the buffers.
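For example (directory names are ours):

lastile -i tiles_classified\*.laz -remove_buffer -odir tiles_final -olaz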
Below is a screenshot of the resulting Potree 3D Web portal rendered with Potree Desktop. Inspecting the classification will reveal a number of errors that could be tweaked manually with lasview. How the point colors were generated is not described here, but I used Google satellite imagery and mapped it to the points with lascolor. The elevation colors are mapped from 14 meters to 25 meters. The intensity image may help us understand why the black tar road on the left-hand side that runs from the “Las Palmeras Condos” to the beach in “Cangrejal” has no samples. It seems the intensity is lower on this side, which indicates that the drone may have flown higher here – too high for the road to reflect enough photons. The yellow view of return type indicates that despite its multi-return capability, the Livox MID-40 LiDAR is mostly collecting single returns.
colored by classification
colored by Google Earth RGB
colored by elevation
colored by intensity
colored by return type
The penetration capability of the Livox MID-40 LiDAR was not as good as we had hoped. Below thick vegetation we have too few points on the ground to give us a good digital terrain model. In the visualization below you can see that below the dense vegetation there are large black areas that are completely devoid of points.
all returns
only building, thick and median ground points
Now we produce the standard products DTM and DSM at a resolution of 50 cm. Because the total area is not that big we generate temporary tiles in RasterLAZ with las2dem and merge them into a single GeoTIFF with blast2dem.
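A hedged sketch of this product generation: the DTM rasters only the “median ground” points in class 8, the DSM only the first returns, and blast2dem merges the temporary RasterLAZ tiles into one GeoTIFF (the step size is as stated above; file names and the remaining parameters are ours):

las2dem -i tiles_classified\*.laz -keep_class 8 ^
        -step 0.5 -use_tile_bb ^
        -odir tiles_dtm -olaz
blast2dem -i tiles_dtm\*.laz -merged -o samara_dtm.tif
las2dem -i tiles_classified\*.laz -first_only ^
        -step 0.5 -use_tile_bb ^
        -odir tiles_dsm -olaz
blast2dem -i tiles_dsm\*.laz -merged -o samara_dsm.tif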
Background: Canopy height is a fundamental geometric tree parameter for supporting sustainable forest management. Apart from the standard height measurement method using LiDAR instruments, other airborne measurement techniques, such as very high-resolution passive airborne imaging, have also been shown to provide accurate estimates. However, both methods suffer from high cost and cannot be repeated regularly.
Preliminary results of predicted CHE based on multi-temporal satellite images against ground-truth LiDAR measurements. The 3rd column depicts the pixel-wise absolute error of the prediction. The last column depicts the pixel-wise uncertainty estimate of the prediction (in terms of 3 standard deviations).
Goal: In our study, we attempt to substitute airborne measurements with widely available satellite imagery. In addition to the spatial and spectral correlations of a single-shot image, we seek to exploit the temporal correlations of sequential lower-resolution imagery. For this we use a convolutional variant of a recurrent neural network to estimate canopy height from a temporal sequence of Sentinel-2 images. Our model, using sequential spaceborne imagery, is shown to outperform state-of-the-art methods based on costly airborne single-shot images as well as methods based on satellite images.
Digital Terrain Model of a part of the study area
Data: The experimental study area of approximately 940 square kilometers includes two national parks, the Bavarian Forest National Park and the Šumava National Park, which are located at the border between Germany and the Czech Republic. LiDAR measurements of the area from 2017 and 2019, provided by the national park authorities, will be used as ground-truth height measurements. Temporal sequences of Sentinel-2 imagery will be acquired from the Copernicus hub for canopy height estimation.
LAStools processing: Accurate conversion of the LAS files into DTMs and DSMs in order to acquire a ground-truth canopy height model.
1) Remove noise [lasthin, lasnoise]
2) Classify points into ground and non-ground [lasground, lasground_new]
3) Create DTMs and DSMs [lasthin, las2dem]
Zak Kus (recipient of three LASmoons)
Topology Enthusiast
San Francisco, USA
Background: While LiDAR data enables research and innovation in many fields, it can also be used to create unique and visceral art. Using the high-resolution data available, a 3D printer, and a long tool chain, we can create a physical 3D topological map of the San Francisco Bay Area that shows off both the city’s hilly geology and its unique skyline.
Test print of San Francisco’s Golden Gate Park.
Goal:
The ultimate goal of this project is to create an accurate, unique physical map of San Francisco and the surrounding areas, which will be given to a loved one as a birthday gift. Using the data from the 2010 ARRA-CA GoldenGate survey, we can filter and process the raw LiDAR data into a DEM format using LAStools, which can then be converted with a Python script into a “water tight” 3D-printable STL file.
While the data works fairly well out of the box, it does require a lot of manual editing to remove noise spikes and to delineate the coastline from the water in low-lying areas. Interestingly, while many sophisticated tools exist to edit STLs that could in theory be used to clean up and prepare the files at the STL stage, few are capable of even opening files with so much detailed data. Using LAStools to manually classify and remove unwanted data is the only way to achieve the desired level of detail in the final piece.
Data:
+ LiDAR data provided through USGS OpenTopography, using the ARRA-CA GoldenGate 2010 survey
+ average point density of 3.33 pts/m^2 (though denser around SF)
+ covers 2638 km^2 in total (only a ~100 km^2 subset is used)
LAStools processing:
1) Remove noise [lasnoise]
2) Manually clean up shorelines and problematic structures [lasview, laslayers]
3) Combine multiple tiles (to fit the 3D printer) [lasmerge]
4) Create DEMs (ASC format) for an external tool to process [las2dem]
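A minimal sketch of steps 1, 3, and 4 (step 2 is interactive); the file names and parameters are illustrative:

lasnoise -i sf_tiles\*.laz -classify_as 7 -odir tiles_denoised -olaz
lasmerge -i tiles_denoised\*.laz -o print_area.laz
las2dem -i print_area.laz -drop_class 7 -step 1 -o print_area.asc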
Background: As a low-lying coastal nation, the Republic of the Marshall Islands (RMI) is at the forefront of exposure to climate change impacts. RMI has a strong dependence on natural resources and biodiversity not only for food and income but also for culture and livelihood. However, these resources are threatened by rising sea levels and associated coastal hazards (king tides, storm surges, wave run-up, saltwater intrusion, erosion). This project aims at addressing the lack of technical capacity and available data to implement effective risk reduction and adaptation measures, with a particular focus on inundation mapping and local evacuation planning in population centers.
Typical low-lying coastal area of the Republic of the Marshall Islands
Goal:
This project intends to use LAStools to generate a DEM of the inhabited sections of 3 remote atolls (Aur, Ebon, Likiep) and 1 island (Mejit). The resulting DEM will be used to produce an inundation exposure model (and map) under variable sea level rise projections for each site. The ultimate goal is to integrate the results into each site’s disaster risk reduction strategy (long-term outcome) and present it through community consultations in schools, community centers, and council houses.
Data:
+ aerial imagery of 11.5 square kilometers of land (6.3% of the total national landmass) acquired with a DJI Matrice 200 V2 carrying a DJI Zenmuse X5S, with a minimum overlap of 75/75 and a maximum altitude of 120 m
LAStools processing:
1) tile large point cloud into tiles with buffer [lastile]
2) remove noise points [lasthin, lasnoise]
3) classify points into ground and non-ground [lasground]
4) create Digital Terrain Models and Digital Surface Models [lasthin, las2dem]
Background: Structure from motion (SfM) photogrammetry has emerged as an effective tool to accurately extract three-dimensional (3D) structures from a series of overlapping two-dimensional (2D) images taken from unmanned aerial vehicles (UAVs). The bid to move away from the current labour-intensive and time-consuming forest inventory practices has generated a lot of interest in understanding the use of SfM photogrammetry to derive forest metrics (Iglhaut et al., 2019). A range of commercial, free, and open source SfM photogrammetric software packages can be used to process UAV images into 3D point clouds, and selecting the most appropriate package has become an important issue for most projects (Turner, Lucieer, & Wallace, 2013). A comparison of software performance in terms of accuracy, processing times, and related costs would help foresters decide on the best tool for the job.
Typical point cloud derived with SfM software from UAV imagery.
Goal:
The study will generate 3D point clouds from images of a young forest trial, and LAStools will be used to derive canopy height models (CHMs) for computing tree heights. Tree heights from LiDAR data will serve as a baseline for the accuracy assessment of the heights derived from the point clouds.
Data:
+ 422 UAV images processed into 3D point clouds using ten (10) different commercial and open source SfM software packages
LAStools processing:
1) tile large point cloud into tiles with buffer [lastile]
2) remove noise points [lasthin, lasnoise]
3) classify points into ground and non-ground [lasground]
4) create Digital Terrain Models and Digital Surface Models [lasthin, las2dem]
5) produce Canopy Height Models for computing tree heights [lasheight, las2dem]
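For step 5, a common LAStools pattern for producing a CHM, sketched here under assumed file names and step sizes, is to replace each point’s elevation with its height above the ground with lasheight and then raster the highest normalized point per cell:

lasheight -i tiles_ground\*.laz -replace_z -odir tiles_norm -olaz
lasthin -i tiles_norm\*.laz -step 0.5 -highest -odir tiles_chm_in -olaz
las2dem -i tiles_chm_in\*.laz -step 0.5 -use_tile_bb -odir chm -obil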
References:
Iglhaut, J., Cabo, C., Puliti, S., Piermattei, L., O’Connor, J., & Rosette, J. (2019). Structure from motion photogrammetry in forestry: A review. Current Forestry Reports, 5(3), 155-168. doi:10.1007/s40725-019-00094-3
Turner, D., Lucieer, A., & Wallace, L. (2013). Direct georeferencing of ultrahigh-resolution UAV imagery. IEEE Transactions on Geoscience and Remote Sensing, 52(5), 2738-2745. doi:10.1109/TGRS.2013.2265295
A while back we had a first look at Single Photon LiDAR from Leica’s SPL100 sensor (which eventually turned out to be just an “SPL99” because one beamlet, or one receiver, in the 10 by 10 array was broken and did not produce any returns). Today we are taking a closer look at a strategy to remove the excessive noise in the raw Single Photon LiDAR data from a “proper” SPL100 sensor (where all of the 100 beamlets are firing) that was flown in 2017 in Navarra, Spain.
Profile through original points on top of generated DTM.
The data was provided as open data by the cartography section of Navarra’s Government and is available via a simple FTP download portal. We describe the LAStools processing steps that were used to eliminate the excessive noise and to generate a smooth DTM. In the following we use the originally released version of the data, which we obtained shortly after the portal went online and which seems to be a bit more “raw” than the files currently available. One standard quality check with lasinfo was done with:
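The original call was not preserved in this rendering of the article; below is a reconstruction of what such a standard check typically looks like (the histogram options are our assumption):

lasinfo -i strips_raw\*.laz -cd -histo z 1 -histo intensity 64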
Upon inspecting the lasinfo report we suggest a few changes in how to store this Single Photon LiDAR data for more efficient hosting via an online portal. We perform these changes here before starting the actual processing. First we use the las2las call shown below to fix an error in the global encoding bits, remove an irrelevant VLR, re-scale the coordinates from millimeters to centimeters, re-offset the coordinates to nice numbers, and – what is by far the most crucial change for better compression – remap the beamlet ID stored in the ‘user data’ field as described in an earlier article.
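Of that call we only reproduce the parts we can state with confidence, namely the re-scaling and re-offsetting; the global encoding fix, the VLR removal, and the ‘user data’ remapping use additional las2las switches that we do not spell out here:

las2las -i strips_raw\*.laz ^
        -rescale 0.01 0.01 0.01 ^
        -auto_reoffset ^
        -odir strips_fixed -olaz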
Then we use two lassort calls, one to maximize compression and one to improve spatial coherence. One lassort call rearranges the points in increasing order first based on the GPS time stamps, then breaks ties based on the user data field (that stores the beamlet ID), and finally stores the returns of every beamlet ordered by return number. We also add spatial reference information in this step. The other lassort call rearranges the points into a spatially coherent layout. It uses a Z-order sort with the granularity of 50 meter by 50 meter buckets of points. Within each bucket the point order from the prior sort is kept.
Now we start the usual processing workflow by tiling the data with lastile into smaller 500 meter by 500 meter tiles with a 25 meter buffer. We also set the pre-existing point classification in the data to zero as we will compute our own later.
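For example (directory names are ours; ‘-set_classification 0’ zeroes the pre-existing classification on-the-fly):

lastile -i strips_sorted\*.laz ^
        -set_classification 0 ^
        -tile_size 500 -buffer 25 -flag_as_withheld ^
        -odir tiles_raw -olaz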
We notice that a large amount of the noise has intensity values below 1000. We are still a bit puzzled where those intensity values come from and what exactly they mean in a Single Photon LiDAR system. But it works. We run las2las with a “filtered transform” to set classification of all points whose intensity value is 1000 or less to the classification code 7 (aka “noise”).
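A sketch of this “filtered transform”; the filter selects the points and the transform is applied only to them (directory names are ours):

las2las -i tiles_raw\*.laz ^
        -keep_intensity_below 1001 ^
        -filtered_transform ^
        -set_classification 7 ^
        -odir tiles_intensity -olaz -cores 4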
We then ignore this “easy-to-identify” noise and go after the remaining one with lasnoise by ignoring classification code 7 and setting the newly identified noise to classification code 9 – not because it’s “water” (the usual meaning of class 9) but because these points are drawn with a distinct blue color when checking the result with lasview.
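A plausible parameterization (the step and isolation values are our assumptions):

lasnoise -i tiles_intensity\*.laz ^
         -ignore_class 7 ^
         -step_xy 1 -step_z 0.25 -isolated 5 ^
         -classify_as 9 ^
         -odir tiles_denoised -olaz -cores 4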
Of the surviving non-noise points we then use lasthin to reclassify the point closest to the 20th elevation percentile per 50 cm by 50 cm area with classification code 8 (for all areas that have more than 5 non-noise points per 50 cm by 50 cm area). We repeat the same for every 1 meter by 1 meter area.
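The two runs might look as follows, assuming the two-argument form of ‘-percentile’ that takes the percentile followed by the minimum number of points per cell:

lasthin -i tiles_denoised\*.laz ^
        -ignore_class 7 9 ^
        -step 0.5 -percentile 20 5 -classify_as 8 ^
        -odir tiles_p20_50cm -olaz -cores 4
lasthin -i tiles_p20_50cm\*.laz ^
        -ignore_class 7 9 ^
        -step 1 -percentile 20 5 -classify_as 8 ^
        -odir tiles_p20_1m -olaz -cores 4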
We then perform a more aggressive second noise removal step with lasnoise using only those points with classification code 8, namely those non-noise points that were closest to the 20th elevation percentile in either a 50 cm by 50 cm cell or a 1 meter by 1 meter cell. This can be done by ignoring classification codes 0, 7, and 9. We mark those noise points as 6 so they appear orange in the point cloud with lasview.
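A sketch with assumed step and isolation parameters:

lasnoise -i tiles_p20_1m\*.laz ^
         -ignore_class 0 7 9 ^
         -step_xy 2 -step_z 0.3 -isolated 3 ^
         -classify_as 6 ^
         -odir tiles_denoised2 -olaz -cores 4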
The 20th elevation percentile points that survive the last noise removal are then classified into ground (2) and non-ground (1) points with lasground_new by ignoring all other points, namely those with classification codes 0, 6, 7, and 9.
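For example (the terrain-type option is our assumption):

lasground_new -i tiles_denoised2\*.laz ^
              -ignore_class 0 6 7 9 ^
              -wilderness ^
              -odir tiles_ground -olaz -cores 4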
The images below illustrate the steps we took. They also show that not all of the data was used, which might give you ideas on where to tweak our workflow for even better results.
First returns are red, intermediate returns are green, last returns are blue and single returns are yellow.
Colored by intensity. Most noise has low intensity values.
Unused points. Another workflow could try to use more of this data.
Final ground points used for generating the smooth DTM.
Finally we raster the ground points into 1 meter Digital Terrain Model (DTM) rasters with las2dem and store the result (without buffers) to the RasterLAZ format.
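A sketch of the rasterization and of the merge into the hillshaded DTM shown further below (‘-use_tile_bb’ clips away the buffers):

las2dem -i tiles_ground\*.laz ^
        -keep_class 2 ^
        -step 1 -use_tile_bb ^
        -odir tiles_dtm -olaz -cores 4
blast2dem -i tiles_dtm\*.laz -merged -hillshade -o dtm_hillshaded.png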
The hillshaded DTM that is the result of the entire sequence of processing steps described above is shown below.
DTM from ground classification created with LAStools
For comparison we generate the same DTM using the originally provided classification. According to the README file the original ground points are classified with code 22 in areas of flight line overlap and with the usual code 2 elsewhere. Hence we must use both classification codes to construct the DTM. We do this analogously to the earlier processing steps with the three LAStools commands lastile, las2dem, and blast2dem below.
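A hedged reconstruction of those three commands (names are ours; note ‘-keep_class 2 22’ to honor both ground codes):

lastile -i strips_sorted\*.laz -tile_size 500 -buffer 25 -flag_as_withheld -odir tiles_orig -olaz
las2dem -i tiles_orig\*.laz -keep_class 2 22 -step 1 -use_tile_bb -odir tiles_dtm_orig -olaz -cores 4
blast2dem -i tiles_dtm_orig\*.laz -merged -hillshade -o dtm_orig_hillshaded.png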
Below the hillshaded DTM generated from the ground classification that was provided with the LiDAR when it was originally released as open data.
DTM from ground classification of originally released data.
In the meantime Navarra’s SPL data has been updated with a newer version in the open data portal. The new version contains a much better ground classification that might have been improved manually, as the new files now have the string ‘cam’ instead of ‘ca’ in the file name, which probably means ‘classified automatically and manually’ instead of the original ‘classified automatically’. We decided not to switch to the new data release as it seemed less “raw” than the original release. For example, there are suddenly points with GPS times, return counts, and return numbers of zero in the file that seem synthetic. But we also computed the hillshaded DTM for the new release, which is shown below.
DTM from ground classification of newly released data.
We thank the cartography section of Navarra’s Government for providing their LiDAR as open data. This not only allows re-purposing expensive data paid for by public taxes but also generates additional value, encourages citizen science, and provides educational opportunity and insights such as this blog article.
Background: Last spring, the LARTU research group produced a laser scanner survey of the Abbey of Sant’Andrea in Vercelli on the occasion of the VIII centenary of its dedication (1219). The database, produced with a topographic tool that integrates the potential of a total station with laser scanner and photogrammetric sensors (Trimble SX10), has been used to produce representations that can be consulted in interactive mode, navigating within the point clouds, and to build a consultation platform that can also be accessed by non-specialist users such as art historians or archaeologists.
Goal:
The LAStools software will be used both to improve the point cloud produced, by eliminating the remaining noise, and to check other ways of publishing the data, so as to make it usable to the wider community of researchers.
Data:
+ laser scanner and photogrammetric acquisitions of the interior of the building (150 million points)
+ laser scanner and photogrammetric acquisitions of the outside of the building (210 million points)
+ drone-based imagery of the outdoor areas processed with Pix4D (23 million points)
LAStools processing:
1) tile large point cloud into tiles with buffer [lastile]
2) mark a set of points whose z coordinate is a certain percentile of that of their neighbors [lasthin]
3) remove isolated low points from the set of marked points [lasnoise]
4) classify marked points into ground and non-ground [lasground]
5) create a LiDAR portal for 3D visualization of LAS files [laspublish]
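For the final step, a laspublish call of roughly this shape creates a Potree-based portal; the title and output names are placeholders and the coloring option is our assumption:

laspublish -i abbey\*.laz ^
           -elevation ^
           -title "Abbey of Sant'Andrea in Vercelli" ^
           -odir portal -o abbey.html -olaz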
Recently a user of LAStools asked a question in our user forum about how to classify LiDAR data that contains lots of low noise. A sample screenshot of the user’s failed attempt to correctly classify the noise with lasnoise and the ground with lasground is shown below: red points are noise, brown points are ground, and grey points are unclassified. In this article we show how to remove this low noise using a temporary ground surface that we construct from a subset of points at a certain elevation percentile. You can follow along by downloading the data and the sequence of command lines used.
example of misclassified low noise points: ground points (brown) below ground
Download the LiDAR data set that was apparently flown with a RIEGL “crossfire” Q1560. You can also download the command line sequence here. We first run lasinfo with option ‘-compute_density’ (or ‘-cd’ for short) to get a rough idea about the last return density, which is quite high with an average of over 31 last returns per square meter. We then use lasthin to classify one last return per square meter with the temporary classification code 8, namely the one whose elevation is closest to the 20th percentile per 1 meter by 1 meter grid cell. We then repeat this command line for the 30th, 40th, and 50th percentiles, modifying the command line accordingly. You must use this version of lasthin that will be part of a future LAStools release, as options ‘-ignore_first_of_many’ and ‘-ignore_intermediate’ were just added this weekend.
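A sketch of these first two commands; the input file name is a placeholder for the downloaded data set, and the lasthin call is repeated with ‘-percentile 30’, ‘-percentile 40’, and ‘-percentile 50’:

lasinfo -i noise.laz -cd
lasthin -i noise.laz ^
        -ignore_first_of_many -ignore_intermediate ^
        -step 1 -percentile 20 -classify_as 8 ^
        -o noise_p20.laz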
Below you see the resulting subset of points marked with the temporary classification code 8 for the four different percentiles 20th, 30th, 40th, and 50th triangulated into a surface and hill-shaded.
TIN of 20th elevation percentile point per square meter
TIN of 30th elevation percentile point per square meter
TIN of 40th elevation percentile point per square meter
TIN of 50th elevation percentile point per square meter
Next we reclassify only those points marked with the temporary classification code 8 into ground (2) and unclassified (1) points using lasground by ignoring all points that still have the original classification code 0.
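For example (default terrain settings assumed, as the actual options were not given):

lasground -i noise_p50.laz ^
          -ignore_class 0 ^
          -o noise_p50_g.laz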
Below you see the resulting ground points computed from the subsets of points at four different percentiles 20th, 30th, 40th, and 50th triangulated into a surface and hill-shaded.
TIN of ground classification of 20th elevation percentile point per square meter
TIN of ground classification of 30th elevation percentile point per square meter
TIN of ground classification of 40th elevation percentile point per square meter
TIN of ground classification of 50th elevation percentile point per square meter
Both the ground classification of the 40th and the 50th percentile look reasonable. Only a few down spikes remain in the 40th percentile surface and a few additional bumps appear in the 50th percentile surface. Next we use lasheight with those two reasonable-looking ground surfaces to classify all points that are more than 20 centimeters below the triangulated ground surface into the noise classification code 7.
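With the ground in class 2 (the lasheight default), this is a one-liner per percentile:

lasheight -i noise_p45_g.laz ^
          -classify_below -0.2 7 ^
          -o noise_denoised.laz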
Now that the low noise points have been removed (or rather classified as noise) we start the actual ground classification process. In this example we want to create a 50 cm DTM, hence it is more than sufficient to find one ground point per 25 cm by 25 cm cell. Therefore we first move the lowest non-noise last return per 25 cm by 25 cm cell to the temporary classification code 8.
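A sketch of this marking step:

lasthin -i noise_denoised.laz ^
        -ignore_class 7 ^
        -ignore_first_of_many -ignore_intermediate ^
        -step 0.25 -lowest -classify_as 8 ^
        -o noise_marked.laz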
Side note: One might also consider modifying the following workflow to run the ground classification on more than just the last returns by omitting ‘-ignore_first_of_many’ and ‘-ignore_intermediate’ from the lasthin call and by adding ‘-all_returns’ to the lasground call. Why? Because for all laser shots that resulted in a low noise point, this noise point will usually be the last return, so that the true ground hit could be the second-to-last return.
The final ground classification is obtained by running lasground only on the points with temporary classification code 8 by ignoring all others, namely the noise points (7) and the unclassified points (0 and 1).
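For example (terrain options omitted, as the actual settings were not given):

lasground -i noise_marked.laz ^
          -ignore_class 0 1 7 ^
          -o noise_ground.laz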
We then use las2dem to create the 50 cm DTM from the points classified as ground. We store this DTM raster to the LAZ format, which has proven to be the most efficient format for storing elevation or height rasters. We have started calling this format RasterLAZ. It is supported by all LAStools and by the new DEMzip tool. One advantage is that we can feed RasterLAZ directly back into LAStools, for example, as done below for a second call to las2dem that computes a hill-shaded DTM.
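A sketch of the two calls; the first stores the DTM as RasterLAZ, the second reads it back and hillshades it (file names are ours):

las2dem -i noise_ground.laz -keep_class 2 -step 0.5 -o dtm.laz
las2dem -i dtm.laz -step 0.5 -hillshade -o dtm_hillshaded.png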
Below the resulting hill-shaded DTMs computed for the 40th and the 50th elevation percentile – as well as for the 45th elevation percentile that we’ve added for comparison.
Hill-shaded DTM resulting from workflow using 40th elevation percentile.
Hill-shaded DTM resulting from workflow using 45th elevation percentile.
Hill-shaded DTM resulting from workflow using 50th elevation percentile.
Below we finally take a closer look at an example 1 meter profile line through the LiDAR classified by the 45th percentile workflow. There is a small stretch of ground points that was incorrectly classified as noise points (find the mouse cursor), so it might be worthwhile to change the parameters slightly to make the noise classification less aggressive.
Side note follow-up: The return coloring shows there are indeed some ‘intermediate’ as well as some ‘first of many’ returns just where we expect the bare terrain to be. However, there are not so many that the results can be expected to change drastically by including them in the ground finding process.
1 meter profile through final classification for 45th percentile workflow
noise points classified by 45th percentile workflow
ground points classified by 45th percentile workflow
colored by return type (blue = last of many, red = first of many, green = intermediate, yellow = single)
Background: The 850 km-long Alpine Fault (AF) is one of the world’s great laterally-slipping active faults (like California’s San Andreas Fault). It currently accommodates about 80% of the motion between the Australian and Pacific tectonic plates in the South Island of New Zealand (NZ). Well-dated sedimentary layers preserved in swamps and lakes adjacent to the AF currently provide one of the world’s most spatially and temporally complete records of large ground-rupturing earthquakes (Howarth et al., 2018). Importantly, these records reveal that major earthquakes occur with greater regularity on the AF than on any other known fault, releasing a Magnitude (Mw) 7 to 8 earthquake on average every 249 ± 58 years, and that the most recent earthquake was around Mw 8 in 1717 AD, prior to European arrival. This computes to a conditional probability of 69% that the AF will rupture in the next 50 years. For a country that has recently had several notable earthquakes (e.g. the 2010 Mw 7.1 Canterbury and 2016 Mw 7.8 Kaikoura events) and has an economy heavily reliant on tourism, the next AF earthquake is the one NZ is trying to prepare for (note that a Mw 8 earthquake releases about thirty times the energy of a Mw 7).
The more data we can gather as scientists to constrain (1) the magnitude of the next AF earthquake, (2) the amount of lateral and vertical slip (offset roads, powerlines, etc.), (3) the coseismic effects (ground shaking, landslides, liquefaction), and (4) the duration it takes the landscape to recover (muddy rivers, increased sediment supply, prolonged landsliding), the more we can anticipate expected hazards and foster societal resilience.
Despite its name, the AF is almost completely obscured beneath a dense temperate rain-forest canopy, which has hindered fine-scale geomorphic studies. Relatively low-quality airborne LiDAR (2 m-resolution bare-earth model) was first collected in 2010 for a 32 km-length of the central AF. Despite this being the best studied portion of the AF, 82% of the fault traces identified in the LiDAR were previously unmapped (Barth et al., 2012). The LiDAR reveals the width and style of ground deformation. Interpretation of the bare-earth landscape in combination with on-the-ground sampling allows single earthquake displacements, uplift rates, recurrence of landslides, and post-earthquake sedimentation rates to be quantified. A new 2019 airborne LiDAR dataset collected along a 230 km-length of the southern AF has great potential to improve our understanding of this relatively “well-behaved” fault system, what to expect from its next earthquake, and to give us insight into considerably more complex fault systems like the San Andreas.
(A) Aerial view of the South Island of New Zealand highlighting the boundary between the Pacific and Australian plates (white) and the Alpine Fault in particular (red). (B) View showing the extent of the 2019 airborne LiDAR survey to be processed for this LASmoons proposal. (C) Aerial imagery over Franz Josef, site of a 2010 airborne LiDAR survey. (D) 2010 Franz Josef LiDAR DTM hillshade (GNS Science). LiDAR has revolutionized our ability to map fault offsets and other earthquake ground deformation beneath this dense temperate rainforest.
Goal:
The LAStools software will be used to check the quality of the data (reclassifying ground points and removing any low ground-classed outliers if needed) and to create a seamless digital terrain model (DTM) from the 1695 tiled LAS files provided. The DTM will be used to create derivative products including contours, slope maps, aspect maps, single-direction B&W hillshades, multi-directional hillshades, and slope-colored hillshades to interpret the fault- and landslide-related landscape features hidden beneath the dense temperate rain-forest. The results will be used as seed data to seek national-level science funding to field-verify interpretations and collect samples to determine the ages of features (geochronology). The ultimate goal is to improve our understanding of the Alpine Fault prior to its next major earthquake and to communicate those findings effectively through publications in open-access peer-reviewed journals and meetings with NZ regional councils.
Data:
+ airborne LiDAR survey collected in 2019 using a Riegl LMS-Q780 sensor by AAM New Zealand
+ data provided as 1695 LAS files organized into 500 m x 500 m tiles and classified into ground and non-ground points (75 pts/m^2 or ~0.8 ground-classed pts/m^2; 320 GB total)
LAStools processing:
1) check the quality of the ALS data [lasinfo, lasoverlap, lasgrid]
2) [if needed] remove any low and high ground-classed outliers [lasnoise]
3) [if needed] reclassify ground and non-ground points [lasground]
4) create Digital Terrain Model (DTM) from ground points [blast2dem]
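A hedged sketch of steps 1 and 4 (tuning values are illustrative; steps 2 and 3 would only run if the checks reveal problems):

lasinfo -i tiles\*.las -cd
lasoverlap -i tiles\*.las -step 2 -min_diff 0.1 -max_diff 0.3 -odir quality -opng
lasgrid -i tiles\*.las -keep_last -step 2 -density -false -odir quality -opng
blast2dem -i tiles\*.las -keep_class 2 -merged -step 1 -o alpine_fault_dtm.tif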
References: Howarth, J.D., Cochran, U.A., Langridge, R.M., Clark, K.J., Fitzsimons, S.J., Berryman, K.R., Villamor, P., Strong, D.T. (2018). Past large earthquakes on the Alpine Fault: paleoseismological progress and future directions. New Zealand Journal of Geology and Geophysics, 61, 309-328, doi:10.1080/00288306.2018.1465658
Barth, N.C., Toy, V.G., Langridge, R.M., Norris, R.J. (2012). Scale dependence of oblique plate-boundary partitioning: new insights from LiDAR, central Alpine Fault, New Zealand. Lithosphere, 4(5), 435-448, doi:10.1130/L201.1