Around 86,000 square kilometres of the province of British Columbia have been mapped with LiDAR and these high-tech point clouds are now released as open data. More about this big news can be found here. We decided to take the portal for a quick spin and try to find the Computer Science building of UBC Vancouver, where we studied from 1996 to 1999. The experience was pleasant. Either go directly to the portal or visit the government page linking to it.
Then follow the steps shown in the images below to get to the data.
A few moments later the LiDAR tile in compressed LAZ format is yours to play with as long as you adhere to the rules of the Open Government License – British Columbia. After I downloaded the tile of UBC campus that contains the Computer Science building, I ran a quick lasinfo report whose output is shown below.
Everything looks fine. Unfortunately the time stamps are in “GPS week time” instead of “Adjusted Standard GPS time”, so the per-point information about which week of which year a point was collected is missing. A minor thing is the “coordinate resolution fluff” in the z coordinate: although the z scale factor is set to 0.001 for storing millimeter-precise elevations, the actual z coordinates stored in the LAZ file were rounded to full centimeters, so a scale factor of 0.01 would be more appropriate.
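For readers curious how such “resolution fluff” can be detected: the tell-tale sign is that all the stored integer coordinates share a common factor larger than 1. The little Python sketch below illustrates the idea with made-up sample values – the function name is ours and this is not how lasinfo actually implements its check.

```python
from functools import reduce
from math import gcd

def fluff_factor(stored_z):
    """Return the largest integer step dividing all stored (integer) z
    coordinates. With a 0.001 scale factor, a step of 10 means the data
    was really rounded to full centimeters ("resolution fluff")."""
    return reduce(gcd, stored_z)

# z values stored as integers under a 0.001 scale factor, but
# actually rounded to full centimeters:
zs = [14230, 14250, 15100, 15990]
print(fluff_factor(zs))          # common step of the raw integers: 10
print(fluff_factor(zs) * 0.001)  # the true resolution in meters: 0.01
```

A step of 1 would mean the full millimeter resolution promised by the scale factor is actually used.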
Using the GUI of las2las I then clipped out a 500 meter by 300 meter part containing the Computer Science building,
and visualized the resulting LAZ file using lasview with option ‘-spike_free 1.0’, which enables spike-free TIN generation via hotkey <SHIFT>+<y>.
On the left is the Computer Science building. By hovering over two points on either end of the building and pressing <SHIFT>+<i> each time, I can measure that it is about 110 meters long. On the right, we see the much more famous UBC Forestry building. Below the same data as a point cloud colored by return type, where yellow points are single returns, red points are first of many, blue points are last of many and green points are intermediate returns.
Previously, access to LiDAR in British Columbia had been prohibitively expensive for many users. At the same time, this precious data had often been sitting idle for years on government computers, never mind that it was collected with tax dollars. Now it is available to small and big businesses, non-profit organizations, recreational associations, scientists, academics and hobbyists alike, allowing maximal exploration and exploitation for many different purposes, even some currently still unforeseen ones. Kudos to the Government of British Columbia for finally taking this step.
Background: Hydrological models require various input data for flood vulnerability mapping. An important input is the DTM over which flow is being routed. DTMs are generated using cartography, ground surveying, digital aerial photogrammetry, interferometric SAR (InSAR) and LiDAR, amongst other means. The accuracy of high-resolution DTMs minimizes errors that may emanate from the input data when conducting hydrological modelling, especially in small built-up catchment areas. This research involves the application of digital aerial photogrammetry to generate point clouds which can subsequently be utilized for flood vulnerability mapping.
photogrammetry point cloud
Goal: To consolidate previous gains in using LAStools to generate the DTMs required for flood vulnerability mapping. The suitability of these DTMs will subsequently be validated for flood vulnerability analysis. The results will be compared with other DTMs in order to determine the uncertainty associated with the use of such DTMs for flood vulnerability mapping.
Data:
+ high-resolution photogrammetry point cloud and DSM for Lagos Island, Ikorodu and Ajah, Nigeria
  – imagery obtained with an eBee senseFly drone flight
  – photogrammetry point cloud generated with PhotoScan by Agisoft
+ rainfall data
+ classified LiDAR point cloud with a resolution of 1 pulse per square meter obtained for the study area from the Lagos State Government
LAStools processing:
1) tile large photogrammetry point cloud into tiles with buffer [lastile]
2) mark set of points whose z coordinate is a certain percentile of that of their neighbors [lasthin]
3) remove isolated low points from the set of marked points [lasnoise]
4) classify marked points into ground and non-ground [lasground]
5) pull in points close above and below the ground [lasheight]
6) create Digital Terrain Model (DTM) from ground points [las2dem]
7) merge and hillshade individual raster DTMs [blast2dem]
The first processing step is to tile the strips into tiles that contain fewer points for faster and also parallel processing. But one quick “flat terrain” trick first: often there are spurious points far above or below the terrain. For a relatively flat area these can easily be identified by computing a histogram of elevation values with lasinfo and then eliminated with simple drop filters on the z coordinate.
The relevant excerpts of the output of the lasinfo report are shown below:
[…]
z coordinate histogram with bin size 1.000000
  bin -104 has 1
  bin 5 has 1
  bin 11 has 273762
  bin 12 has 1387999
  bin 13 has 5598767
  bin 14 has 36100225
  bin 15 has 53521371
  […]
  bin 59 has 60308
  bin 60 has 26313
  bin 61 has 284
  bin 65 has 10
  bin 66 has 31
  bin 67 has 12
  bin 68 has 1
  bin 83 has 3
  bin 84 has 4
  bin 93 has 31
  bin 94 has 93
  bin 95 has 17
[…]
The few points below 11 meters and above 61 meters in elevation can be considered spurious. In the initial tiling step with lastile we add simple elevation filters to drop those points on-the-fly while creating the buffered tiles. The importance of buffers when processing LiDAR in tiles is discussed in this article. With lastile we create tiles of size 125 meters with a buffer of 20 meters, while removing the points identified as spurious with the appropriate filters. Because the input strips have their “file source ID” in the LAS header correctly set, we use ‘-apply_file_source_ID’ to set the “point source ID” of every point to this value. This preserves the information of which point comes from which flight line.
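Conceptually, the histogram and the on-the-fly drop filters do what the little Python sketch below does. The function names and the toy points are made up for illustration; this is only the idea behind lasinfo’s ‘-histo z 1’ and lastile’s ‘-drop_z_below 11 -drop_z_above 61’ options, not their actual implementation.

```python
from collections import Counter

def z_histogram(zs, bin_size=1.0):
    """Count elevations per bin, like lasinfo's z histogram does."""
    return Counter(int(z // bin_size) for z in zs)

def drop_spurious(points, z_min=11.0, z_max=61.0):
    """Keep only points whose elevation falls into the plausible range,
    like the '-drop_z_below' / '-drop_z_above' filters do on-the-fly."""
    return [p for p in points if z_min <= p[2] <= z_max]

# toy points (x, y, z): two plausible ones, one far below, one far above
pts = [(0, 0, -104.2), (1, 1, 14.5), (2, 2, 15.3), (3, 3, 94.1)]
print(z_histogram([p[2] for p in pts]))
print(drop_spurious(pts))  # only the two plausible points survive
```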
This produces 49 buffered tiles that will now be processed similarly to the workflow outlined for another lower-priced system that generates similarly “fluffy” point clouds on hard surfaces, the Velodyne HDL-32E, described here and here. What do we mean by “fluffy”? We cut out a 1 meter slice across the road with las2las and the new ‘-keep_profile’ filter and inspect it with lasview.
In the view below we pressed hotkey ‘]’ twice to exaggerate the z scale. The “fuzziness” is the thickness of the point cloud in the middle of this flat tar road. It is around 20 to 25 centimeters and equally evident in both flight lines. What is the correct ground surface through this 20 to 25 centimeter “thick” road? We will compute a “mean ground” that roughly falls into the middle of this “fluffy” surface.
The next three lasthin runs mark a subset of low candidate points for our lasground filtering. In every 25 cm by 25 cm, every 33 cm by 33 cm and every 50 cm by 50 cm area we reclassify the point closest to the 10th elevation percentile as class 8. In the first call to lasthin we put all other points into class 1.
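The percentile marking can be sketched as follows in Python. The function and the toy points are made up for illustration (and only one of the three cell sizes is shown); lasthin’s actual implementation differs, but the idea is the same: per grid cell, the point whose elevation sits at the requested percentile gets the marker class.

```python
from collections import defaultdict

def mark_percentile(points, cell_size, percentile=10, marked_class=8):
    """Per grid cell, classify the point whose z is closest to the given
    elevation percentile as 'marked_class'. Points are mutable
    [x, y, z, classification] lists so we can modify them in place."""
    cells = defaultdict(list)
    for p in points:
        cells[(int(p[0] // cell_size), int(p[1] // cell_size))].append(p)
    for cell_points in cells.values():
        cell_points.sort(key=lambda p: p[2])
        idx = min(len(cell_points) - 1,
                  int(round((percentile / 100.0) * (len(cell_points) - 1))))
        cell_points[idx][3] = marked_class
    return points

# eleven toy points in one 25 cm by 25 cm cell, z spread over the
# "fluffy" 50 cm of a road surface, all starting out in class 1
pts = [[0.1, 0.1, z, 1] for z in (14.00, 14.05, 14.10, 14.15, 14.20,
                                  14.25, 14.30, 14.35, 14.40, 14.45, 14.50)]
mark_percentile(pts, cell_size=0.25)
print([p for p in pts if p[3] == 8])  # one marked point near the low end
```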
Below you can see the resulting points of the 10th percentile classified as class 8 in red.
Operating only on the points classified as 8 (i.e. ignoring those classified as 1), we then run a ground classification with lasground using the following command line, which creates a “low ground” classification.
Since this is an open road this classifies most of the red points as ground points.
Using lasheight we then create a “thick ground” by pulling into the ground surface all points that are between 5 centimeters below and 17 centimeters above the “low ground”. For visualization purposes we temporarily use class 6 to capture this thickened ground.
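The thickening step boils down to a simple height-above-ground test, sketched below in Python with a made-up function name and toy points. In reality lasheight interpolates the ground surface from the TIN of the “low ground” points; here we fake that with a flat surface at 14.0 m.

```python
def thicken_ground(points, ground_z, below=0.05, above=0.17, ground_class=6):
    """Pull every point whose elevation is between 'below' meters under
    and 'above' meters over the low-ground surface into a (temporary)
    ground class. 'ground_z' returns the interpolated low-ground
    elevation at a point's x/y location."""
    for p in points:
        dz = p[2] - ground_z(p[0], p[1])
        if -below <= dz <= above:
            p[3] = ground_class
    return points

# flat low-ground surface at 14.0 m for this toy example
flat = lambda x, y: 14.0
pts = [[0, 0, 13.90, 1], [1, 0, 13.97, 1], [2, 0, 14.10, 1], [3, 0, 14.30, 1]]
thicken_ground(pts, flat)
print([p[3] for p in pts])  # -> [1, 6, 6, 1]
```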
The final “median ground” points are shown in red below. These are the points we will use to eventually compute the DTM.
We complete the fully automated classification available in LAStools by running lasclassify with the following options. See the README file for what these options mean. Note that we move the “thick ground” from the temporary class 6 to the proper class 2. The “median ground” continues to be in class 8.
Below is a screenshot of the resulting Potree 3D Web portal rendered with Potree Desktop. Inspecting the classification will reveal a number of errors that could be tweaked manually with lasview. How the point colors were generated is not described here, but I used Google satellite imagery and mapped it onto the points with lascolor. The elevation colors are mapped from 14 meters to 25 meters. The intensity image may help us understand why the black tar road on the left-hand side that runs from the “Las Palmeras Condos” to the beach in “Cangrejal” has no samples. It seems the intensity is lower on this side, which indicates that the drone may have flown higher here – too high for the road to reflect enough photons. The yellow view of return type indicates that despite its multi-return capability, the Livox MID-40 LiDAR is mostly collecting single returns.
The penetration capability of the Livox MID-40 LiDAR was not as good as we had hoped. Below thick vegetation there are too few points on the ground to give us a good digital terrain model. In the visualization below you can see that below the dense vegetation there are large black areas that are completely void of points.
Now we produce the standard product DTM and DSM at a resolution of 50 cm. Because the total area is not that big we generate temporary tiles in “raster LAZ” with las2dem and merge them into a single GeoTiff with blast2dem.
The scanning part was easy. Getting sensible data out of the ScanLookPC software proved to be quite an odyssey, as I had neither access to nor training with the ScanLookPC software. Hard surfaces such as the roof of the “New China” supermarket looked wobbly when viewing the points from individual beams of this 32-beam scanner and turned into complete fuzz when using all points from all beams.
I could only scrutinize the LAS files, and I found several unrelated errors, such as duplicate returns and non-unique GPS time stamps, but I was unable to fix the wobbles. Frustrated with the vendor support, I was ready to give up when out of nowhere I got an email from Luis Hernandez Perez, who also works with these systems. He sent me a link to properly exported flight strips and suggested that “it was a problem of poor GNSS signal and the solution for me was use the base station IND1 (of IGS) that was 3.2 kilometers away”. After months of struggles I finally had LiDAR data from downtown Samara – and look how crisp the roof of the supermarket suddenly was.
As always, my standard quality checks consist of running lasview, lasinfo and lasgrid, but most importantly lasoverlap, which reveals the typical flight line misalignments that we find in most airborne surveys.
As usual, I contacted Andre Jalobeanu from Bayesmap and asked for help with the alignment. A few days later he returned a new and improved set of flight lines to me that he had run through his stripalign software, so I performed the same quality check with lasoverlap once again.
Vertical differences of more than 20 centimeters are mapped to saturated blue and red, and the improvement in alignment through stripalign is impressive. Note that the road is not white because it was perfectly aligned but because there was no data. A few days before the scan, Samara got a new tar road from the municipality. As we were flying at 70 meters above ground – a little too high for this LiDAR scanner – we did not capture surfaces with low reflectivity, and the fresh black tar did not reflect enough photons.
The Velodyne HDL-32E scanner rotates, shooting laser beams from 32 different heads, and the information about which return comes from which head was stored into the “user data” field and into the “point source ID” field of each point by the exporting software. The LAS format does not have a dedicated field for this information, as the supported maximum number of different laser beams is 4 in the new point types 6 through 10 of the latest LAS 1.4 specification (where this field is called the “scanner channel”). But when we use lastile to turn the flight lines into square tiles, we override the “point source ID” with the flight line number. Also, the “user data” field is a fragile place to store important information, as lasheight, for example, will store temporary data there. The “extra bytes” concept of the LAS format is perfect for storing such an additional attribute, and the ASPRS LAS Working Group is currently discussing standardized “extra bytes” for exactly such laser beam IDs.
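Why only 4 beams? In point formats 6 through 10 the scanner channel occupies just two bits of the flag byte (byte 15 of the record, which packs four classification-flag bits, the 2-bit scanner channel, the scan direction flag and the edge-of-flight-line flag). A quick Python sketch of extracting it – the function name is ours:

```python
def scanner_channel(flags_byte):
    """Extract the 2-bit scanner channel (bits 4-5 of the flag byte in
    LAS 1.4 point formats 6-10). Two bits only allow channels 0..3,
    hence at most 4 distinguishable laser beams."""
    return (flags_byte >> 4) & 0b11

print(scanner_channel(0b00110000))  # channel 3
print(scanner_channel(0b01000000))  # scan direction flag set, channel 0
```

With 32 beams to distinguish, the HDL-32E simply does not fit into those two bits – hence the need for an “extra byte”.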
We at rapidlasso implemented this a while ago in our LAStools software. So before processing the data any further, we copy the beam index that ranges from 0 to 31 from the “user data” field into an unsigned 8-bit integer “extra byte” with these two las2las commands.
In a future article we will process these aligned and prepared flight lines into a number of products. We thank Nelson Mattie from LiDAR Latinoamerica, Luis Hernandez Perez, and Andre Jalobeanu from Bayesmap for helping me acquire and fix this data. Several others helped with experiments using their own software and data or contributed otherwise to the discussions in the LAStools user forum. Thanks, guys. This data will soon be available as open data, but a sample lasinfo report is already below.
lasinfo (210128) report for 'Samara\Drone\00_raw_ready\flightline_01.laz'
reporting all LAS header entries:
  file signature:             'LASF'
  file source ID:             1
  global_encoding:            0
  project ID GUID data 1-4:   00000000-0000-0000-0000-000000000000
  version major.minor:        1.2
  system identifier:          'LAStools (c) by rapidlasso GmbH'
  generating software:        'las2las (version 210315)'
  file creation day/year:     224/2019
  header size:                227
  offset to point data:       527
  number var. length records: 2
  point data format:          1
  point data record length:   29
  number of point records:    6576555
  number of points by return: 6576555 0 0 0 0
  scale factor x y z:         0.001 0.001 0.001
  offset x y z:               661826 1092664 14
  min x y z:                  660986.622 1092595.013 11.816
  max x y z:                  661858.813 1092770.575 95.403
variable length header record 1 of 2:
  reserved             0
  user ID              'ScanLook'
  record ID            25
  length after header  0
  description          'ScanLook Point Cloud'
variable length header record 2 of 2:
  reserved             0
  user ID              'LASF_Spec'
  record ID            4
  length after header  192
  description          'by LAStools of rapidlasso GmbH'
  Extra Byte Descriptions
    data type: 1 (unsigned char), name "laser beam ID", description: "which beam ranged this return", scale: 1 (not set), offset: 0 (not set)
LASzip compression (version 3.4r3 c2 50000): POINT10 2 GPSTIME11 2 BYTE 2
reporting minimum and maximum for all LAS point record entries …
  X                     -839378     32813
  Y                      -68987    106575
  Z                       -2184     81403
  intensity                   4       255
  return_number               1         1
  number_of_returns           1         1
  edge_of_flight_line         0         0
  scan_direction_flag         0         0
  classification              1         1
  scan_angle_rank           -12        90
  user_data                   0         0
  point_source_ID             0         0
  gps_time        537764.192416 537854.547507
  attribute0                  0        31  ('laser beam ID')
number of first returns:        6576555
number of intermediate returns: 0
number of last returns:         6576555
number of single returns:       6576555
overview over number of returns of given pulse: 6576555 0 0 0 0 0 0
histogram of classification of points:
  6576555  unclassified (1)
We flew a two-sortie mission covering a destroyed mangrove lagoon that was illegally poisoned, cut down and filled in some 25 years ago with the intention to construct a fancy resort in its place. For future environmental work I wanted to get a high-resolution baseline scan with detailed topography of the meadow and what nowadays remains of the mangroves that are part of the adjacent “Rio Lagarto” estuary. Recently the area was illegally treated with herbicides to eliminate the native herbs and wild flowers and improve grazing conditions for cattle. Detailed topography can show how heavy rains wash these illegal substances into the ocean and further prove that the application of agro-chemicals in this meadow causes harm to marine life.
For the density image, lasgrid counts how many last returns from all flight lines fall into each 50 cm by 50 cm area, computes the density per square meter, and maps this number to a color ramp that goes from blue via cyan, yellow and orange to red. The overall density of our survey is in the hundreds of laser pulses per square meter, with great variations where flight lines overlap and at the survey boundary. The take-off and landing area, as well as the place where the first flight ended and the second flight started, are the two red spots of maximum density that can easily be picked out.
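The density computation itself is simple enough to sketch in a few lines of Python. The function and the toy points below are made up for illustration; lasgrid rasterizes far more efficiently, but conceptually it does this:

```python
from collections import defaultdict

def density_per_sqm(points, step=0.5):
    """Count last returns per step-by-step cell and convert the count
    to pulses per square meter, like the lasgrid density raster does."""
    counts = defaultdict(int)
    for x, y, is_last in points:
        if is_last:
            counts[(int(x // step), int(y // step))] += 1
    return {cell: n / (step * step) for cell, n in counts.items()}

# toy points (x, y, is_last_return)
pts = [(0.1, 0.1, True), (0.2, 0.3, True), (0.3, 0.2, False), (0.7, 0.1, True)]
print(density_per_sqm(pts))  # -> {(0, 0): 8.0, (1, 0): 4.0}
```

Intermediate returns are skipped so that each laser pulse is counted exactly once.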
For the overlap image, lasoverlap counts how many different flight lines overlap each 50 cm by 50 cm area and maps the count to a color: 1 = blue, 2 = cyan, 3 = yellow, 4 = orange, and 5 or more = red. Here the result suggests that the 27 delivered LAS files do not actually correspond to the logical flight lines but are chopped up in some other way. We will have Andre Jalobeanu from Bayesmap repair this for us later.
For the difference image, lasoverlap finds the maximal vertical difference between the lowest points from all flight lines that overlap each 50 cm by 50 cm area and maps it to a color. If this difference is less than 10 cm, the area is colored white. Differences of 25 cm or more are colored either red or blue. All open areas such as roads, meadows and rooftops should be white.
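The difference raster can likewise be sketched in Python. Again, the function name and toy points are ours; this is only the idea behind the lasoverlap difference image, not its implementation:

```python
from collections import defaultdict

def max_vertical_difference(points, step=0.5):
    """Per step-by-step cell, take the lowest z of each flight line and
    report the maximal difference between those per-flight-line minima,
    like the lasoverlap difference raster does."""
    lowest = defaultdict(dict)  # cell -> {flight line: lowest z so far}
    for x, y, z, line in points:
        cell = (int(x // step), int(y // step))
        lowest[cell][line] = min(z, lowest[cell].get(line, z))
    return {cell: max(zs.values()) - min(zs.values())
            for cell, zs in lowest.items() if len(zs) > 1}

# made-up points (x, y, z, flight line) on a flat roof seen by two lines
pts = [(0.1, 0.1, 14.30, 1), (0.2, 0.2, 14.35, 1),
       (0.1, 0.2, 14.02, 2), (0.3, 0.1, 14.10, 2)]
print(max_vertical_difference(pts))  # roughly 0.28 m of misalignment in cell (0, 0)
```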
There is way too much red and blue in areas that are wide open or on rooftops. We inspect this in more detail by taking a closer look at some of these red and blue areas. For this we first spatially index the strips with lasindex so that area-of-interest queries are accelerated, then load the strips into the GUI of lasview and add the difference image into the background via the overlay option.
This makes it easy to view (with lasview) or clip out (with las2las) those areas that look especially troublesome. We do this here for the large condominium “Las Palmeras”, whose roofline and pool provide perfect features to illustrate the misalignment. As you can see in the image sequence below, there is a horizontal shift of up to 1 meter that can be nicely visualized with a cross section drawn perpendicularly across the gable of the roof and – due to the inability to get returns from water – by the point-free area where the pool is.
The misalignments between flight lines are too big for the data to be useful as is, so we do what we always do when we have this problem: We write an email to Andre Jalobeanu from Bayesmap and ask for help.
After receiving the LAZ files and the trajectory file Andre repaired the misalignment in two steps. The first call to his software stripalign in mode ‘-cut’ recovered a proper set of flight lines and removed most of the LiDAR points from the moments when the drone was turning. The second call to his software stripalign in mode ‘-align’ computed the amount of misalignment in this set of flight lines and produced a new set of flight lines with these errors corrected as much as possible. The results are fabulous.
As you can see above, the improvements are incredible. The data now seems sufficiently aligned to be processed into a number of products. One last thing to do is the removal of spurious scan lines that seem to stem from unusual movements of the drone, like the beginning or the end of a turn.
We use lasview with option ‘-load_gps_time’ to determine the GPS time stamps of these spurious scan lines and remove them manually using las2las with option ‘-drop_gps_time_between t1 t2’ or similar. As the points are ordered in acquisition order, we can simply replay the flight by pressing ‘p’ and step forward and backward with ‘s’ and ‘S’.
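The filter itself is trivial: it keeps every point whose GPS time stamp falls outside the given interval. A minimal Python sketch with made-up toy points (the function name mirrors the las2las option):

```python
def drop_gps_time_between(points, t1, t2):
    """Remove all points whose GPS time stamp falls into [t1, t2],
    like las2las's '-drop_gps_time_between t1 t2' filter."""
    return [p for p in points if not (t1 <= p[0] <= t2)]

# toy points (gps_time, x, y, z), ordered in acquisition order
pts = [(537764.2, 0, 0, 14.1), (537800.0, 1, 0, 14.2),
       (537810.5, 2, 0, 14.3), (537854.5, 3, 0, 14.4)]
print(drop_gps_time_between(pts, 537799.0, 537811.0))  # the two middle points go
```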
Once we have determined a suitable set of GPS times to remove from a flight line, we first verify our findings once more visually with lasview before actually creating the final cut with las2las.
After spending several hours manually removing these spurious scan lines, as well as deciding to remove a few short scan lines in areas of excessive overlap, we have a sufficiently aligned and cleaned data set to start the actual post-processing.
My new favorite Swiss Miss, Viola Amherd, said today “Geodaten gehören heute zur Infrastruktur wie die Straßen und die Eisenbahn” – in English: “today, geodata are part of the infrastructure just like roads and railways”. The federal councilor of Switzerland announced that programmers and planners, whether private or professional, can now download the data free of charge and use it for their projects. There are almost no limits to innovation for information projects.
Hello Germany? Hello Bavaria? Hello Austria? Where is your Open-Data-Autobahn? Whatever happened to unlimited speeds and endless Fahrvergnügen … (-;
This means that in Switzerland large amounts of LiDAR are now available for download from this portal. There is amazing data there, including high-resolution ortho-photos and land cover data. However, we went straight for the LAS files and got ourselves a few tiles near the Sazmartinshorn for the example below.
The tiles can be selected in a number of ways via an interactive map and come as individually zipped LAS files. We downloaded the nine tiles indicated above (2.16 GB), unzipped the bulky LAS files (4.78 GB) and compressed them to the compact LAZ format (638 MB) with laszip. Using LAZ instead of zipped LAS would lower storage size and transmission bandwidth by a factor of about 3.4. Something the stereotypically frugal people of Switzerland may want to consider … (-;
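For the record, the compression factors follow directly from the sizes just quoted:

```python
# sizes reported above for the nine Sazmartinshorn tiles
zipped_las = 2.16   # GB, as downloaded
raw_las    = 4.78   # GB, after unzipping
laz        = 0.638  # GB, after laszip

print(f"LAZ vs raw LAS:    {raw_las / laz:.1f}x smaller")
print(f"LAZ vs zipped LAS: {zipped_las / laz:.1f}x smaller")
```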
Then we process the data with a few typical command lines with the result shown below. The first uses blast2dem to create a hill shaded 1 meter DTM from the points classified as ground.
We use lasgrid to visualize the varying last return density per 2 meter by 2 meter area across the surveyed area with a false coloring that maps 10 or fewer pulses per square meter to blue and 20 or more pulses per square meters to red.
A lasinfo report reveals that the scanner used was a RIEGL and the returning pulse width was quantized in tenths of a nanosecond into the user data field. We use lasgrid to visualize the range of the pulse width between 4.0 and 6.0 nanoseconds with a false coloring. Make sure to drop the points for which no pulse width was recorded (i.e. user data is zero) to avoid artifacts in the visualization.
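False coloring simply maps a scalar range onto a color ramp. The Python sketch below (function name ours, and a plain blue-to-red ramp rather than lasgrid’s blue-cyan-yellow-red one) shows the principle of clamping the pulse width range 4.0 to 6.0 nanoseconds onto colors:

```python
def false_color(value, lo=4.0, hi=6.0):
    """Map a value (here: pulse width in nanoseconds) linearly onto a
    simple blue-to-red ramp, clamping at both ends. A much simplified
    version of what lasgrid's false coloring does."""
    t = max(0.0, min(1.0, (value - lo) / (hi - lo)))
    return (int(255 * t), 0, int(255 * (1.0 - t)))  # (R, G, B)

print(false_color(4.0))  # -> (0, 0, 255)   pure blue at the low end
print(false_color(6.0))  # -> (255, 0, 0)   pure red at the high end
print(false_color(5.0))  # somewhere in the middle
```

Points with a user data value of zero (no pulse width recorded) would be dropped before this mapping, exactly as the text above advises.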
Finally we created a portal with laspublish to visualize the point cloud data interactively with Potree. The four screenshots below highlight only a few of the abilities for visualizing and measuring the point cloud.
After weeks of planning the helicopter finally came. Since we have been cooperating with Stereocarto’s San Jose office for quite some time, we were finally able to lure their helicopter down to Samara. Taking off from Liberia, it flew about an hour back and forth across Samara and the adjacent beaches. Processing this data will take weeks (if not months) to complete and analyzing it even longer. One of the first applications will be the usual extraction of the bare earth terrain topography followed by the detection of large changes in comparison to our 2012 data. Such changes might be the result of permitted construction or earthworks, but are often illegal remodeling or removal of mountains to “create” building lots or filling in wetland areas with dirt and debris to “gain” land. Finding and quantifying such environmental damages and reporting about the process will be the first primary objective of this project.
Background: Canopy height is a fundamental geometric tree parameter in supporting sustainable forest management. Apart from the standard height measurement method using LiDAR instruments, other airborne measurement techniques, such as very high-resolution passive airborne imaging, have also been shown to provide accurate estimations. However, both methods suffer from high cost and cannot be repeated regularly.
Goal: In our study, we attempt to substitute airborne measurements with widely available satellite imagery. In addition to the spatial and spectral correlations of a single-shot image, we seek to exploit the temporal correlations of sequential lower-resolution imagery. For this we use a convolutional variant of a recurrent neural network to estimate canopy height from a temporal sequence of Sentinel-2 images. Our model, using sequential spaceborne imagery, is shown to outperform state-of-the-art methods based on costly airborne single-shot images as well as on satellite images.
Data: The experimental study area of approximately 940 square kilometers includes two national parks, Bavarian Forest National Park and Šumava National Park, which are located on the border between Germany and the Czech Republic. LiDAR measurements of the area from 2017 and 2019, provided by the national parks’ authorities, will be used as ground-truth height measurements. Temporal sequences of Sentinel-2 imagery will be acquired from the Copernicus hub for canopy height estimation.
LAStools processing: Accurate conversion of LAS files into DTMs and DSMs in order to acquire the ground-truth canopy height model.
1) remove noise [lasthin, lasnoise]
2) classify points into ground and non-ground [lasground, lasground_new]
3) create DTMs and DSMs [lasthin, las2dem]
Zak Kus (recipient of three LASmoons)
San Francisco, USA
Background: While LiDAR data enables research and innovation across many fields, it can also be used to create unique and visceral art. Using the high-resolution data available, a 3D printer, and a long tool chain, we can create a physical 3D topological map of the San Francisco bay area that shows off both the city’s hilly geology and its unique skyline.
Test print of San Francisco’s Golden Gate Park.
The ultimate goal of this project is to create an accurate, unique physical map of San Francisco and the surrounding areas, which will be given to a loved one as a birthday gift. Using the data from the 2010 ARRA-CA GoldenGate survey, we can filter and process the raw LiDAR data into a DEM using LAStools, which can then be converted with a Python script into a “water-tight” 3D-printable STL file.
While the data works fairly well out of the box, it does require a lot of manual editing to remove noise spikes and to delineate the coastline from the water in low-lying areas. Interestingly, while many sophisticated tools exist for editing STLs that could in theory be used to clean up and prepare the files at the STL stage, few are capable of even opening files with so much detailed data. Using LAStools to manually classify and remove unwanted data is the only way to achieve the desired level of detail in the final piece.
+ LiDAR data provided through USGS OpenTopography, using the ARRA-CA GoldenGate 2010 survey + Average point density of 3.33 pts/m^2 (though denser around SF) + Covers 2638 km^2 in total (only a ~100 km^2 subset is used)
1) Remove noise [lasnoise]
2) Manually clean up shorelines and problematic structures [lasview, laslayers]
3) Combine multiple tiles (to fit 3d printer) [lasmerge]
4) Create DEMs (asc format) for external tool to process [las2dem]
Background: As a low-lying coastal nation, the Republic of the Marshall Islands (RMI) is at the forefront of exposure to climate change impacts. RMI has a strong dependence on natural resources and biodiversity not only for food and income but also for culture and livelihood. However, these resources are threatened by rising sea levels and associated coastal hazards (king tides, storm surges, wave run-up, saltwater intrusion, erosion). This project aims at addressing the lack of technical capacity and available data to implement effective risk reduction and adaptation measures, with a particular focus on inundation mapping and local evacuation planning in population centers.
Typical low-lying coastal area of the Republic of the Marshall Islands
This project intends to use LAStools to generate a DEM of the inhabited sections of 3 remote atolls (Aur, Ebon, Likiep) and 1 island (Mejit). The resulting DEM will be used to produce an inundation exposure model (and map) under variable sea level rise projections for each site. The ultimate goal is to integrate the results into each site’s disaster risk reduction strategy (long-term outcome) and present it through community consultations in schools, community centers, and council houses.
+ Aerial imagery of 11.5 square kilometers of land (6.3% of total national landmass) using DJI Matrice 200 V2 & DJI Zenmuse X5S with a minimum overlap of 75/75 and maximum altitude of 120m.
LAStools processing:
1) tile large point cloud into tiles with buffer [lastile]
2) remove noise points [lasthin, lasnoise]
3) classify points into ground and non-ground [lasground]
4) create Digital Terrain Models and Digital Surface Models [lasthin, las2dem]