LiDAR thoughts – where to measure, exactly?

LiDAR is a pretty common tool for geospatial stuff. It stands for ‘Light Detection and Ranging’. For the most part it involves shining a laser beam at something, then measuring how long a reflection takes to come back. Since we know the speed of light very precisely, we can use the round-trip time to estimate the distance between the light source and the ‘something’ with a great degree of accuracy. Modern instruments perform many other types of magic – building histograms of individual returned photons, comparing emitted and returned waveforms, and even doing all this with many different parts of the EM spectrum.
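As a back-of-envelope illustration of the round-trip arithmetic (the numbers are made up, not any particular instrument):

c = 299_792_458.0          # speed of light in a vacuum, m/s
round_trip_time = 66.7e-9  # seconds – an invented example return time

distance_m = c * round_trip_time / 2   # halve it: the pulse travels out and back
print(round(distance_m, 2))            # roughly 10 m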

Have a google around for LiDAR basics – there are plenty of resources that already explain all this, for example https://coast.noaa.gov/digitalcoast/training/intro-lidar.html.

What I want to write about here is a characteristic of the returned point data. A decision that needs to be made when working with LiDAR is:

Where should I measure a surface?

…but wait – what? Why is this even a question? Isn’t LiDAR just accurate to some published figure?

Sort of. A few years ago I faced a big question after finding that the LiDAR I was working with was pretty noisy. I made a model to show how noisy the LiDAR should be, and needed some data to verify the model. So we hung the LiDAR instrument in a lab and measured a concrete floor for a few hours.

Here’s a pretty old plot of what we saw:

What’s going on here? In the top panel, I’ve spread our scanlines along an artificial trajectory heading due north (something like N = np.arange(0, 10, 0.01)), with the Easting a vector of zeroes and the height a vector of 3s – and then made a swath map. I’ve drawn in lines showing where scan angles 75, 90 and 115 fall.
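For the curious, a rough numpy reconstruction of that artificial swath might look like the sketch below. The scan-angle range and the flat-floor geometry are my illustrative assumptions, not the instrument’s actual configuration:

import numpy as np

# Artificial trajectory: due north, Easting fixed at zero, instrument 3 m up.
northing = np.arange(0, 10, 0.01)         # metres along the made-up track
easting = np.zeros_like(northing)
height = np.full_like(northing, 3.0)

# One scanline per trajectory position. 90 degrees is taken as nadir here,
# so the 75 and 115 degree lines sit either side of the track (an assumption).
scan_angles = np.deg2rad(np.arange(75, 116, 1.0))
across_track = height[:, None] * np.tan(scan_angles - np.pi / 2)

swath_easting = easting[:, None] + across_track        # shape (n_positions, n_angles)
swath_northing = np.broadcast_to(northing[:, None], swath_easting.shape)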

In the second panel (B), there’s a single scanline shown across-track. This was kind of a surprise – although we should have expected it. What we see is that the range observation from the LiDAR is behaving as specified – accurate to about 0.02 m (from the instrument specifications). What we hadn’t realised is that accuracy is angle-dependent: moving away from instrument nadir, the impact of angular measurement uncertainty becomes greater than the ranging uncertainty of the instrument.

Panels C and D show this clearly – near instrument nadir, ranging is very good! Near swath edges we approach the published instrument specification.
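To put a rough number on that reasoning, here’s a first-order error-budget sketch. The uncertainty figures and the nominal survey altitude are illustrative placeholders, not the real instrument’s values:

import numpy as np

# Illustrative 1-sigma inputs – placeholders, not the instrument's spec sheet.
sigma_range = 0.02                    # ranging uncertainty, m
sigma_angle = np.deg2rad(0.05)        # angular uncertainty, radians
flying_height = 300.0                 # m, a nominal survey altitude for illustration

theta = np.deg2rad(np.linspace(0, 25, 6))      # angle away from nadir
slant_range = flying_height / np.cos(theta)

# First-order height uncertainty: the range term shrinks with cos(theta),
# while the angular term grows with slant_range * sin(theta).
sigma_z = np.sqrt((np.cos(theta) * sigma_range) ** 2
                  + (slant_range * np.sin(theta) * sigma_angle) ** 2)

print(np.round(sigma_z, 3))   # smallest at nadir, growing towards the swath edge

At nadir the angular term vanishes and you are left with the ranging uncertainty alone; towards the swath edge the angular term takes over.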

This left us with the question asked earlier:

When we want to figure out a height reference from this instrument, where do we look?

If we use the lowest points, we measure too low. Using the highest points, we measure too high. In the end I fitted a surface to the points I wanted to use for a height reference – like the fit line in panel B – and used that. Here is panel B again, with some annotations to help it make sense.
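A minimal version of that surface fit – reduced here to a plane fitted by least squares with numpy – might look like this sketch (the function name is mine, and the real reference surface need not be planar):

import numpy as np

def fit_reference_plane(easting, northing, height):
    """Least-squares plane z = a*E + b*N + c through a set of points.

    A minimal stand-in for the reference-surface fit described above."""
    design = np.column_stack([easting, northing, np.ones_like(easting)])
    (a, b, c), *_ = np.linalg.lstsq(design, height, rcond=None)
    return a, b, c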

You can see straight away there are bears in these woods – what do we do with points which fall below this plane? Throw them away? Figure out some cunning way to use them?

In my case, for a number of reasons, I had to throw them away: I levelled all my points against the fitted surface, so anything below it ended up with a negative elevation in my dataset. Since I was driving an empirical model with these points, negative input values were pretty much useless. This is pretty crude. A cleverer, more data-preserving method will hopefully reveal itself sometime!
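In code, that blunt levelling-and-discarding step (continuing the hypothetical plane-fit sketch above, with synthetic points standing in for the real scanlines) amounts to something like:

import numpy as np

# Synthetic points standing in for real scanline data.
rng = np.random.default_rng(1)
easting, northing = rng.uniform(0, 10, (2, 1000))
height = 3.0 + 0.002 * easting - 0.001 * northing + rng.normal(0, 0.02, 1000)

a, b, c = fit_reference_plane(easting, northing, height)
levelled = height - (a * easting + b * northing + c)   # level against the plane

keep = levelled >= 0                 # the blunt filter described in the text
levelled_heights = levelled[keep]    # roughly half the synthetic points survive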

I haven’t used many commercial LiDAR products, but one I did use a reasonable amount was TerraSolid. It worked on similar principles, using aggregates of points and fitted lines/planes to do things which required good accuracy  – like boresight misalignment correction.

No doubt instruments have improved since the one I worked with. However, it’s still important to know that a published accuracy for a LiDAR survey is a kind of average (like point density) – some points will have a greater chance than others of measuring something close to where it really is, and points near instrument nadir are likely to be the least noisy and most accurate.

That’s it for now – another post on estimating accuracy for every single LiDAR point in a survey will come soon(ish).

The figure shown here comes from an internal technical report I wrote in 2012. Data collection was undertaken with the assistance of Dr Jan L. Lieser and Kym Newbery, at the Australian Antarctic Division. Please contact me if you would like to obtain a copy of the report – which pretty much explains this plot and a simulation I did to try and replicate/explain the LiDAR measurement noise.

You can call me Doctor now.

Glorious bearded philosopher-king is clearly more appropriate, but Doctor will do.

After six years of chasing a crazy goal and burrowing down far too many rabbit holes in search of answers to engineering problems, I got a letter from the Dean of graduate research back in late October. My PhD is done!

So what exactly did I do? The ten minute recap is this:

1. I assessed the utility of a class of empirical models for estimating snow depth on sea ice using altimetry (snow height). The models were derived from as many in situ (as in holes drilled in the ice) measurements as I could find, and I discovered that they are great in a broad sense (say hundreds of metres), but don’t quite get the picture right at high resolutions (as in metres). This was expected, and suspected – but nobody had actually done the work to say how. So this was published (and of course, it is imperfect – there is much more to say on the matter).

2. For a LiDAR altimetry platform I spent a lot of time tracking down noise sources, and ended up implementing a method to estimate the 3D uncertainty of every single point. This was hard! I also got quite good at staring at GPS post-processing output, and became quite insanely jealous anytime anyone showed me results from fixed-wing aircraft.

3. Now, having in hand some ideas about estimating snow depth and uncertainties, I used another empirical model to estimate sea ice thickness, using snow depths estimated from sea ice elevation (see 1) and propagating uncertainty from the LiDAR points through to get an idea of the uncertainty in my thickness estimates (see 2; a toy sketch of this propagation idea follows this list). Because of spatial magic I did with a robotic total station on SIPEX-II (see the blog title pic – that’s me with my friendly Leica Viva), I could also coregister some under-ice observations of sea ice draft and use them to come up with parameters for the altimetry-based sea-ice thickness model at the scale of a single ice floe. For completeness, I did the same with a very high resolution (10 cm) model I made from 3D photogrammetry of the same site. I then used this ‘validated’ parameter set to estimate sea-ice thickness for some larger regions.
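To illustrate the propagation idea in step 3 – with toy numbers and a stand-in model only, not the thesis values – a Monte Carlo sketch:

import numpy as np

rng = np.random.default_rng(42)

# Perturb one elevation by its per-point uncertainty and see how the spread
# carries through a model.  All numbers and coefficients below are made up.
elevation = 0.35        # m, one LiDAR elevation above sea level
sigma_elev = 0.05       # m, its 1-sigma uncertainty (from step 2)

def toy_thickness(h):
    snow = 0.4 * h                          # hypothetical empirical snow fraction
    return 9.0 * (h - snow) + 2.0 * snow    # hypothetical elevation-to-thickness scaling

samples = toy_thickness(elevation + sigma_elev * rng.normal(size=10_000))
print(samples.mean(), samples.std())        # thickness estimate and its spread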

Overall, the project changed direction three or four times, reshaping as we learned more – and really taking shape after some new methods for sea ice observation were applied in 2012.

What I discovered was that it is actually pretty feasible to do sea ice thickness mapping from a ship-deployed aircraft in the pack ice zone. This is important – because it means regions usually very difficult to access from a land-based runway can be examined.

I also showed that observations of sea ice so far may be underestimating sea ice thickness in certain ice regimes – and are likely to be overestimating sea ice thickness in others. This has pretty important implications for modelling the fresh water flux and other things (habitat availability, for example) in the Southern Ocean – so more work is already underway to try and gather more data. Encouragingly, I showed that drill holes are actually quite accurate – and showed how some new approaches to the practice of in situ sampling might markedly increase the power of these observations.

The possibilities of a robotic total station and a mad keen field sampling nutter on an ice floe – endless! (we did some awesome work with a hacky sack and the total station, hopefully coming to light soon).

…and like all PhD theses, the last part is a vast introspection on where the soft underbelly of the project lies, and what could/should be done better next time.

Needless to say, I’m really relieved to be done. I go wear a floppy hat and collect my bit of paper next month.

What next? Subject the ice thickness work to peer review! Aiming for publication in 2017.

FOSS4G 2016 wrap-up


I’ve recently returned from FOSS4G 2016 in Bonn, Germany. It was my first OSgeo conference, and it was quite amazing.

As a little background, the OSgeo foundation supports the development of open source ‘geo software’, from desktop packages like QGIS to behind-the-scenes libraries (Proj.4) and even geospatial metadata catalogues (eg Geonetwork). My involvement? Over the years I’ve used parts of the OSgeo stack heavily, and increasingly so. The same is true at the National Computational Infrastructure, so it made sense to submit an abstract and try to go.

My talk was going to be about storing and querying point cloud data from HDF files. Unfortunately I didn’t finish my experiments in time, so I ended up presenting a high-level view of NCI, our point data storage and analysis needs, and some attempts at using Postgres-pointcloud and SPDlib (an HDF-based point and waveform data storage system). This fitted well with the session – until that time, most of the point data discussions at FOSS4G had revolved around airborne or mobile LiDAR. It was nice to feel that we had some unique problems to solve, and I feel grateful to be able to stand among the giants of OSgeo. In the same session, for example, were the developers of the PDAL and Entwine libraries – technical wizardry right there! I just try to keep up, and apply this stuff to jobs I need to do.

The main takeaway from the conference, however, is not the fact that I gave a talk. It was the amount of insight gained from the conference itself, and just being there. It was incredibly difficult to choose which sessions and workshops to attend, there were so many that were potentially useful for my life. I left feeling like I’d stepped into a seriously strong community of people who care about making great tools, and giving them away.

I chaired a session on web processing services, which was a new experience and really informative – especially Nis Hempelmann presenting on birdhouse services. This has seen immediate application in my work at NCI, so in the spirit of FOSS4G I’m running birdhouse, bugging the developers, and have made my first-ever commit on the project (a documentation typo fix – but you get the picture: get involved).

Some other highlights were:

Ivan Sanchez’s lightning talk presenting what3fucks to the world. This actually has a serious point about how to divide the world into smaller pieces, and also how open software and data can be immediately applied, whereas closed-source options are not necessarily going to live as lively a life. If you build something and share it, it gets used.

Pretty much all of the keynote talks. Tomas Zerweck from Munich RE spoke about FOSS in risk management (which is all of us, science nerdlings – we are the bosses of the risk managers, because without our field observation data there is nothing). Oddly, every title slide showed Australia, and given our ignoble position as top consumers and chief backwards-pedalers of the world, I wondered if we are seen as a risk to be managed. Probably! Another standout keynote was Peter Kusterer, from IBM Germany, who described how the FOSS community got behind an emergency response to refugees arriving in Germany. Take note too, Australia: Wir schaffen das! (‘We can do this!’)

 

In the regular programme – so much to choose from! I went to a discussion on open standards and open software, and was heartened to hear that small developers and big organisations with standards voting rights are happy working partners. I saw an excellent talk by Arnulf Christi on the relative permanence of data compared to software. Mind the data – words to live by, because there’s no point collecting all this information if nobody can use it in 3, 10 or 50 years’ time. My brain was completely exploded by Olivier Courtin, on using PostGIS in a really advanced way. Indeed. And I saw as many GeoServer-related talks as I could, because it’s my job. But really, there was so much to choose from, and I will gradually catch up on all the videos of talks I would have liked to see. A colleague of mine just showed me another mind-blowing talk, on geotiff.js and plotly, that I would never have thought to go to. Here’s the programme:

http://2016.foss4g.org/schedule.html

…and you can download or watch all of the talks from the conference here:

http://video.foss4g.org/foss4g2016/videos/

With the main conference work done, here is a photo from the conference dinner – on a boat on the Rhine, with an accordion and a cello. Later, after some beers and discussions of life and point clouds, I was dragged onto the dance floor and busted out my finest moves.


…and here are a couple of photos from the world conference centre, an amazing venue.



The theme of the conference was ‘building bridges’ – and it certainly met those aims. There was a vast cross-section of the spatial data community, from insurance companies doing their due diligence about whether to switch to FOSS software stacks, to excited developers presenting brand-new ideas. I was able to meet and speak to a number of people I’d only ever heard of and read about – making the personal connections that always help when we need to develop ideas into reality later.

I’m really grateful that I was able to go, and I’m already looking ahead to 2017, and what ideas I can convert into reality so that I’ve got something to bring to the community.

Finally, a side effect of the conference was catching up with an old friend I had not seen in 14 years! We have surprisingly parallel lives – so what did we do? Climbed rocks, of course!

A model of a model of a world


 

My little people Joe and Oli came up with this amazing landscape, which they called Meerkat world. The meerkats live in a house (a square structure constructed from baskets just left of centre), below a volcano which sticks up out of the snow. A lake drains out to a waterfall, which then flows to a beach off which a pirate ship is anchored. Clearly, there are sharks near the ship!

On the left of the meerkat house is a maze, which prevents tigers and lions from getting to the meerkat house and wreaking havoc. I particularly enjoy the multi-material aspect of their work – the variety of objects used to create Meerkat world is astounding!

An ordinary photo does no justice to this work, so I modelled it instead. This took 89 photos, which were used to generate 15 000 000 points and just over 1 000 000 polygons. Now to find a way to let you interact with this world via plas.io or maybe even straight-up three.js. Oh, and to overcome the file size limitations on my web host!

ACE CRC, Airborne LiDAR and Antarctic sea ice

Between late June and late August 2015 I worked with the Antarctic Climate and Ecosystems Co-operative Research Centre (ACE CRC) to tidy up some long-running loose ends on an airborne LiDAR project. This project is close to home – my PhD revolves around cracking some of the larger nuts associated with getting a science result from survey flights undertaken between 2007 and 2012. However, I’ve only worked on one small subset of the data – and the CRC needed a way to unlock and use the rest.

Many technical documents exist for the airborne LiDAR system, but the ‘glue’ to tie them together was lacking. In six weeks I provided exactly that. The project now has a strong set of documentation covering the evolution of the system, how to navigate the myriad steps involved in turning raw logfiles from laser scanners, navigation instruments and GPS observations into meaningful data, and how to interpret the data that arise from the system. After providing a ‘priority list’ of flight data to work on, ACE CRC also took advantage of my experience to churn out post-processed GPS and combined GPS + inertial trajectories for those flights. The CRC also now has the tools to estimate point uncertainty and reprocess any flights from the ground up – should they wish to.

All of which means ACE CRC are in a position to make meaningful science from the current set of airborne LiDAR observations over East Antarctic sea ice.

Some of this – a part of my PhD work and a small part of the overall project – is shown here: a first cut of sea ice thickness estimates using airborne LiDAR elevations, empirical models for snow depth, and a model for ice thickness based on the assumption that ice, snow and seawater all exist in hydrostatic equilibrium.
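For readers curious what that hydrostatic assumption looks like when written out, here’s a minimal sketch. The density values are commonly used textbook figures rather than the ones used in the actual processing:

def hydrostatic_ice_thickness(elevation, snow_depth,
                              rho_water=1024.0, rho_ice=915.0, rho_snow=320.0):
    """Sea-ice thickness (m) from snow-surface elevation and snow depth (m),
    assuming ice, snow and seawater are in hydrostatic equilibrium.

    Density defaults (kg/m^3) are typical textbook values, not project-specific."""
    return ((rho_water * elevation - (rho_water - rho_snow) * snow_depth)
            / (rho_water - rho_ice))

# Ice draft (the part below the waterline) then follows from thickness and the
# ice freeboard (elevation minus snow depth):
thickness = hydrostatic_ice_thickness(elevation=0.35, snow_depth=0.15)
draft = thickness - (0.35 - 0.15)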

figure showing sea ice draft from LiDAR points
Sea ice draft estimated from airborne LiDAR elevations, an empirical model for snow, and a hydrostatic model for ice thickness from elevations and snow depth. This image shows a model of the underside of sea ice – blue and green points are elevations, purple to yellow points visible here are ice draft estimates. Why are some drafts ‘positive’? That’s a question currently being worked on…

A short visual history of shipping access to Australian Antarctic stations

In late 2014 I was contracted by the Antarctic Climate and Ecosystems Cooperative Research Centre to analyse Antarctic shipping patterns from 2000 to 2014. The aim was to extend a planning report first published in 2008, and provide deeper insights into shipping patterns in order to plan for future shipping seasons. Obvious shipping routes arise as a combination of crew experience, average sea conditions, sea ice conditions and logistical constraints. Shipping is expensive, so great effort goes into minimising ship time required for a given task. Days can be saved or lost by the choice of shipping route. Hugging the coast in transit between stations is clearly the shortest route – but seasonally the most risky due to the presence of sea ice.

So which routes are being used most often? And do they work?

Mining data from shipping reports and ship GPS traces, I was able to map where and when ships had difficulty accessing stations. While plenty of maps exist showing ship tracks, there has never been any analysis of where and why ships had difficulty getting to stations. The map presented below is one of the first.

It shows a ‘heatmap’ – a frequency count of hourly ship positions per 25 km grid cell. Overlaid on the map are labelled round indicators of ship ‘stuckness’ due to sea ice conditions (as opposed to delays for operational purposes), and squares where ship-to-shore helicopter access was forced by ice conditions.
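The counting itself is simple; a minimal numpy version (assuming the hourly positions are already projected into a metric coordinate system – an illustration, not the exact workflow used for the report) might be:

import numpy as np

def position_heatmap(x, y, cell=25_000.0):
    """Count hourly ship positions per 25 km x 25 km grid cell.

    x, y are projected coordinates in metres, one value per hourly fix."""
    x_edges = np.arange(x.min(), x.max() + cell, cell)
    y_edges = np.arange(y.min(), y.max() + cell, cell)
    counts, _, _ = np.histogram2d(x, y, bins=[x_edges, y_edges])
    return counts, x_edges, y_edges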

This map says nothing about seasonality, or the times of year which are most risky for ships. It does show a clear preference for routes to and from stations, in particular Casey and Davis. It also shows that ships generally transit between the two stations by heading north to skirt the sea ice, or by hugging the coast – which is clearly troublesome at times. For the most part, ships get to stations and back with few dramas.

As a final note, a colleague pointed out that this is also a map of bias in our knowledge of the Southern Ocean. That’s a much longer story…

Hi Canberra

Spatialised has moved – I am now based in Canberra, and after a few months of moving and general hiatus I’m beginning the business of science, PhD’ing and spatial stuff once more.

So, greetings from Australia’s capital!

DTM troubleshooting for Spatial Scientific

DTM testing
Two great-looking DTMs – which one is problematic? And why?

My first job as a new consultancy! In August 2013 I was asked to solve a DTM problem for a local airborne surveying company, Spatial Scientific.

Starting with two collections of ASCII points at different spatial resolutions and an orthophoto, my job was to find out which DTM was ‘right’ – or free of unusual artifacts. Given the initial data format, my first instinct was to treat them as point clouds. In this case CloudCompare was the first port of call, and it showed a clear pattern of artifacts when the DTMs were compared. This exercise gave me a spatial context for the DTM issues – but when the gridded DTM points are 3 to 10 m apart and the DTM differences are on the order of decimetres, it is very hard to tell exactly where the artifacts come from.

The solution came in the form of an image analysis approach. Initial concerns about the DTMs came from a raster DTM differencing process, so I first reproduced this result using the rasterised ASCII data. Again, this alone did not provide any clues about which DTM was at fault.

It was impossible to detect the DTM problems by close visual examination, and difficult to know which DTM was at fault by comparison alone. So I enlisted the help of image processing methods. Detecting edges and slopes in each DTM clearly identified the culprit – edges and steep slopes/steps leapt from the screen where none should exist! Combined with the CloudCompare result, this gave a very clear picture of where the DTM had problems and pointed clearly at the source. Problem solved.
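A simplified numpy sketch of that difference-plus-slope check (illustrative only – the actual work used the tools listed below rather than this exact code, and the input DTMs are assumed to be co-registered 2D arrays) could be:

import numpy as np

def dtm_checks(dtm_a, dtm_b, cell_size):
    """Difference two co-registered gridded DTMs and compute slope for each."""
    difference = dtm_a - dtm_b              # where do the two DTMs disagree?

    def slope_degrees(dtm):
        # Spurious steps show up as steep slopes where the terrain should be smooth.
        dz_dy, dz_dx = np.gradient(dtm, cell_size)
        return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

    return difference, slope_degrees(dtm_a), slope_degrees(dtm_b)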

Tools used: CloudCompare; QGIS; Sextante; gdal_translate; GMT

Acquiring prism lock: the cover photo

The cover photo for this site shows… the back of my head, a Leica Viva TS 15, a prism, and a bright yellow, low-cost, very effective instrument warming/battery box I’m very proud of! I’m acquiring prism lock using the remote control, before heading out to collect locations on a SIPEX II ice station. The sea ice surveying project was part of my work for the Australian Antarctic Division, and complements my PhD studies. It was a challenging task – nobody knew if the total station would play happily at -20 degrees Celsius on drifting sea ice. It performed admirably, and the results will provide much-needed spatial glue for the on-, over- and under-ice spatial datasets collected on the voyage. Photo: Polly Alexander