The LiDAR uncertainty budget II: computing uncertainties

In part 1, we looked at one way that a LiDAR point is created. Just to recap, we have 14 parameters (for the 2D scanner used in this example), each with its own uncertainty. Now, we work out how to determine the geolocation uncertainty of our points.

First, let’s talk about what those uncertainties are. The source that comes to mind first is GPS positioning uncertainty. A dual-frequency GPS gives sub-decimetre positioning after post-processing, and merging those positions with observations from the IMU constrains them even further.

For a quick mental picture of how this works, consider that the navigation unit collects a GPS fix every half a second, while the IMU – whose gyroscopes and accelerometers give pitch, roll and heading – takes a sample every 1/250th of a second. The IMU also keeps track of how far it thinks it has moved since the last GPS fix, and because the rate of motion is tracked, there is a limit on how far the platform can plausibly move between fixes. The navigation unit compares the two data sources; if a GPS fix is wildly unexpected, a correction is applied to the position estimate and we carry on.
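To make that mental picture slightly more concrete, here is a toy one-dimensional sketch of the blending idea. It is purely illustrative: the real filter estimates position, velocity and attitude together in 3D, and every noise value and variable name below is invented.

import numpy as np

# Toy 1D illustration of GPS/IMU blending (not the real navigation filter).
dt_imu = 1.0 / 250.0            # IMU sample interval (s)
gps_sigma = 0.05                # assumed GPS position uncertainty (m)
drift_sigma = 0.001             # assumed per-sample dead-reckoning drift (m)

pos, vel = 0.0, 40.0            # dead-reckoned position (m) and velocity (m/s)
pos_var = 0.0                   # variance of the dead-reckoned position

for _ in range(125):            # half a second of IMU samples between GPS fixes
    accel = np.random.normal(0.0, 0.1)     # noisy accelerometer reading (m/s^2)
    vel += accel * dt_imu
    pos += vel * dt_imu
    pos_var += drift_sigma ** 2            # uncertainty grows while dead reckoning

gps_fix = pos + np.random.normal(0.0, gps_sigma)   # a new GPS fix arrives
gain = pos_var / (pos_var + gps_sigma ** 2)        # weight by relative uncertainty
pos += gain * (gps_fix - pos)                      # nudge the estimate toward the GPS fix
pos_var *= (1.0 - gain)                            # and shrink its uncertainty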

So we get pretty accurate positions (~5cm).

But is that the uncertainty of our point positions? Nope. There’s more. The laser instrument comes with specifications about how accurate its laser ranging is, and how accurate its angular encoder is.

On top of that, the navigation device has specifications about how accurate its accelerometers are, and all of these uncertainties contribute! How?

Variance-covariance propagation to the rescue

Glennie [1] and Schaer [2] used variance-covariance propagation to estimate the uncertainty in geolocation of LiDAR points. This sounds wildly complex, but at its root is a simple idea:

uncert(thing1 + thing2) = \sqrt{uncert(thing1)^2 + uncert(thing2)^2}

See that? We figured out the uncertainty of a thing made by adding thing1 and thing2, just by knowing what the uncertainties of thing1 and thing2 are.
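To make that concrete with made-up numbers: if thing1 is known to ±3 cm and thing2 to ±4 cm, their sum is known to \sqrt{3^2 + 4^2} = 5 cm – assuming the two uncertainties are independent of each other.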

Going from simple addition to a few matrix multiplications was a quantum leap, handled neatly by symbolic math toolboxes – but only after I nearly quit my PhD trying to write out the uncertainty equation by hand.

Here is the uncertainty equation for LiDAR points, where U is the uncertainty:

U = F C_u F^T

What!! Is that all? I’m joking, right?

Nearly. F is a 14-element vector containing the partial derivatives of the georeferencing equation with respect to each of the parameters that go into it (the so-called Jacobians). C_u is a 14 × 14 matrix whose diagonal holds the uncertainty associated with each of those parameters.
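As a minimal sketch of the propagation itself – with made-up numbers, and assuming the parameters are treated as uncorrelated so only the diagonal of C_u is populated – it really is just a couple of matrix multiplications:

import numpy as np

# Minimal sketch of variance-covariance propagation with placeholder numbers.
# F holds the partial derivatives of one output coordinate with respect to
# each of the 14 georeferencing parameters, evaluated for a single point.
F = np.random.rand(1, 14)

# C_u is 14 x 14: parameter variances (uncertainty squared) on the diagonal,
# zeros elsewhere if the parameters are treated as uncorrelated.
param_sigmas = np.full(14, 0.05)           # one made-up sigma per parameter
C_u = np.diag(param_sigmas ** 2)

U = F @ C_u @ F.T                          # 1 x 1 variance of that coordinate
sigma = np.sqrt(U[0, 0])                   # back to an uncertainty in metres

In practice F is re-evaluated for every point (the partial derivatives depend on range, scan angle and attitude), which is why deriving it symbolically once and then just plugging in numbers is so convenient.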

Writing this set of equations out by hand was understandably a nightmare, but more clever folks than myself have achieved it – and while I certainly learned a lot about linear algebra in this process, I took advice from an old astronomer and used a symbolic maths toolbox to derive the Jacobians.

…which meant that the work actually got done, and I could move on with research!
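For a flavour of that symbolic step, here is a deliberately cut-down example: one coordinate, one rotation angle standing in for the full boresight and attitude rotations, and only five parameters instead of 14. The symbols and structure are mine, purely for illustration.

import sympy as sp

# Cut-down symbolic example: an across-track coordinate from a 2D scanner,
# rotated by a single angle and offset by a lever arm and a GPS position.
X, rho, theta, phi, a_x = sp.symbols('X rho theta phi a_x')

x = X + sp.cos(phi) * (rho * sp.sin(theta) + a_x)

# Partial derivatives of the coordinate with respect to each parameter:
params = [X, rho, theta, phi, a_x]
jacobian = [sp.diff(x, p) for p in params]
print(jacobian)

The full derivation is the same idea applied to all three output coordinates and all 14 parameters – tedious by hand, trivial for the toolbox.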

Now my brain is fully exploded – what do uncertainties look like?

Glennie [1] and Schaer [2] both report that at some point, angular motion uncertainty overtakes GPS position uncertainty as the primary source of doubt about where a point is. Fortunately, I found the same thing. Given the inherent noise of the system I was using, this crossover occurred pretty quickly. In the Kalman filter which integrates GPS and IMU observations, a jittery IMU is assigned a higher uncertainty at each epoch. This makes sense, but it also means that angular uncertainties need to be minimised (for example, by flying instruments in a very quiet aircraft or an electric-powered UAS).

I made the following map years ago to check that I was getting georeferencing right, and also getting uncertainty estimates working properly. 

It could be prettier, but you see how the components all behave – across-track and ‘up’ uncertainties are dominated by the angular component not far off nadir. Along-track uncertainties are more consistent across-track, because the angular measurement components (aircraft pitch and a bit of yaw) are less variable.

The sample below shows LiDAR point elevation uncertainties (relative to an ITRF08 ellipsoid) during level flight over sea ice. At instrument nadir, height uncertainty is more or less equivalent to instrument positioning uncertainty. Increasing uncertainties toward swath edges are a function of angular measurement uncertainty.
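As a back-of-envelope illustration (my numbers, not the instrument specification): an angular uncertainty of 0.01° is about 1.7 \times 10^{-4} radians, which at 300 m range displaces a point by roughly 300 \times 1.7 \times 10^{-4} \approx 5 cm perpendicular to the beam. At nadir that displacement is almost entirely horizontal, but towards the swath edges more and more of it projects into the vertical – which is why height uncertainty grows away from nadir.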

But normally LiDAR surveys come with an averaged accuracy level. Why bother?

i. why I bothered:

In a commercial survey the accuracy figure is determined mostly by comparing LiDAR points with ground control data – and the more ground control there is, the better (you have more comparison points and can make a stronger estimate of the survey accuracy).

Over sea ice this is impossible. I was also using these points as input for an empirical model which attempts to estimate sea-ice thickness. As such, I needed to also propagate uncertainties from my LiDAR points through the model to sea-ice thickness estimates.

In other words, I didn’t want to guess what the uncertainties in my thickness estimates were. As far as practical, I needed to know the input uncertainty for each thickness estimate – so for every single point. It’s nonsensical to suggest that every observation, and therefore every thickness estimate, comes with the same level of uncertainty.

Here is another map from my thesis. It’s pretty dense, but shows the process in action:

The top panel is LiDAR ‘height above sea level’ for sea ice. Orange points are ‘sea level reference’ markers, and the grey patch highlights an intensive survey plot. The second panel is the uncertainty associated with each height measurement. In panel three we see modelled sea ice thickness (I’ll write another post about that later), and the final panel shows the uncertainty associated with each thickness estimate. Thickness uncertainties are greater than LiDAR height uncertainties because we’re also accounting for uncertainties in each of the other model parameters (LiDAR elevations are just one). So, when I get to publishing sea-ice thickness estimates, I can put really well-made error bars around them!

ii. why you – as a commercial surveyor, an agency contracting a LiDAR survey, or a LiDAR end user – should bother:

The computation of uncertainties is straightforward and quick once the initial figuring of the Jacobians is done – and these only need to be recomputed when you change your instrument configuration. HeliMap (Switzerland) do it on-the-fly and compute a survey quality metric (see Schaer et al., 2007 [2]) which allows them to re-fly any ‘out of tolerance’ areas before they land. Getting an aircraft in the air is hard, and keeping it there for an extra half an hour is easy – so this capability is really useful for minimising costs to both contracting agencies and surveyors. This analysis, in conjunction with my earlier post on ‘where exactly to measure in LiDAR heights‘, shows where you can assign greater confidence to a set of LiDAR points.

It’s also a great way to answer some questions. For example, are surveyors flying over GCPs at nadir, and therefore failing to meet accuracy requirements off-nadir? (This is paranoid, ‘all the world is evil’ me speaking – I’m sure surveyors are aware of this stuff and collect off-nadir ground control matches as well.) Are critical parts of a survey being captured off-nadir, when it would be really useful to get the best possible data over them? (This has implications for flight planning.) As a surveyor, this type of thinking will give you fewer ‘go back and repeat’ jobs – and as a contracting agency, you might spend a bit more on modified flight plans, but not a lot more, to get really great data instead of signing off and then getting grief from end users.

As an end user of LiDAR products, if you’re looking for data quality thresholds – for example ‘points with noise < N m over a flat surface’ – this type of analysis will help you out. I’ve also talked to a number of end users who wonder about noisy flight overlaps, and why some data don’t appear to be well-behaved. Again, having a quality metric attached to each point will help an end user determine which data are useful, and which should be left alone.

Summarising

I certainly stand on the shoulders of giants here, and my brain still melts when I try to come at these concepts from first principles (linear algebra is still hard for me!). However, the idea of being able to attach an a priori quality metric to every point is, in my mind, really useful.

I don’t have much contact with commercial vendors, so I can only say ‘this is a really great idea, do some maths and make your LiDAR life easier!’.

I implemented this work in MATLAB, and you can find it here:

https://github.com/adamsteer/LiDAR-georeference

With some good fortune it will be fully re-implemented in Python sometime this year. Feel free to clone the repository and go for it. Community efforts rock!

And again, refer to these works. I’ll see if I can find any more recent updates, and would really appreciate hearing about any recent work on this stuff:

[1] Glennie, C. (2007). Rigorous 3D error analysis of kinematic scanning LIDAR systems. Journal of Applied Geodesy, 1, 147–157. http://doi.org/10.1515/JAG.2007. (accessed 19 January 2017)

[2] Schaer, P., Skaloud, J., Landtwing, S., & Legat, K. (2007). Accuracy estimation for laser point cloud including scanning geometry. In Mobile Mapping Symposium 2007, Padova. (accessed 19 January 2017)

The LiDAR uncertainty budget I: georeferencing points

This is part 1 of 2, explaining how uncertainties in LiDAR point geolocation can be estimated for one type of scanning system. We know LiDAR observations of elevation/range are not exact (see this post), but a critical question of much interest to LiDAR users is ‘how exact are the measurements I have’?

As an end user of LiDAR data I get a bunch of metadata provided by surveyors. One of the key things I look for is the accuracy estimate. Usually this comes as some uncertainty in East, North and Up, in metres, relative to the spatial reference system the point measurements are expressed in. What I don’t get is any information about how these figures are arrived at, or whether they apply equally to every point. It’s a pretty crude measure.

As a LiDAR maker, I was concerned with the uncertainty of each single point – particularly height – because I use these data to feed a model for estimating sea ice thickness. I also need to feed in an uncertainty, so that I can put some boundaries around how good my sea ice thickness estimate is. However, there was no way of doing so in an off-the-shelf software package – so I implemented the LiDAR georeferencing equations and a variance-covariance propagation method for them in MATLAB and used those. This was a choice of convenience at the time, and I’m now slowly porting my code to Python, so that you don’t need a license to make LiDAR points and figure out their geolocation uncertainties.

My work was based on two pieces of fundamental research: Craig Glennie’s work on rigorous propagation of uncertainties in 3D [1], and Phillip Schaer’s implementation of the same equations [2]. Assuming that we have a 2D scanner, the LiDAR georeferencing equation is:

\begin{bmatrix} x \\ y \\ z \end{bmatrix}^m = \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + R^b_m \left[ R^b_s \, \rho \begin{pmatrix} \sin\Theta \\ 0 \\ \cos\Theta \end{pmatrix} + \begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix}^b \right]

The first term on the right is the GPS position of the vehicle carrying the LiDAR:

\begin{bmatrix} X \\ Y \\ Z \end{bmatrix}

The next term is made up of a few things. Here we have points in LiDAR scanner coordinates:

\rho \begin{pmatrix} \sin\Theta \\ 0 \\ \cos\Theta \end{pmatrix}

…which means ‘range from scanner to target’ (\rho) multiplied by \sin\Theta to give an X coordinate and \cos\Theta to give a Z coordinate of the measured point.

Note that there is no Y coordinate! This is a 2D scanner, observing an X axis (across track) and a Z axis (from ground to scanner). The Y coordinate is provided by the forward motion of a vehicle, in this case a helicopter.

For a 3D scanner, or a scanner with an elliptical scan pattern, there will be additional terms describing where a point lies in the LiDAR frame. Whatever term is used at this point, the product is the position of a reflection-causing object in the LiDAR instrument coordinate system, which is then rotated into the coordinate system of the vehicle’s navigation device using the matrix:

R^b_s

The observed point also has a lever arm offset added (the distance in three axes between the navigation device’s reference point and the LiDAR’s reference point), which shifts the point so that it is expressed relative to the navigation device’s reference point:

\begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix}^b

This mess of terms is finally rotated into a mapping frame using Euler angles in three axes (essentially heading, pitch and roll) recorded by the navigation device:

R^b_m

…and added to the GPS coordinates of the vehicle (which are really the GPS coordinates of the navigation system’s reference point).

There are a bunch of terms there – 14 separate parameters which go into producing a LiDAR point, and that’s neglecting beam divergence and only showing single returns. Sounds crazy – but the computation is actually pretty efficient.
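Here is a rough sketch of that equation in code – Python rather than my original MATLAB. The function and argument names are mine, the Euler angle conventions are placeholders, and a real implementation has to match the conventions of the actual instruments:

import numpy as np
from scipy.spatial.transform import Rotation

def georeference_point(gps_xyz, attitude_rpy_deg, boresight_rpy_deg,
                       lever_arm, rho, theta):
    """Sketch of the 2D-scanner georeferencing equation above (illustrative only).

    gps_xyz           -- GPS position of the navigation reference point, mapping frame (m)
    attitude_rpy_deg  -- vehicle roll, pitch, heading from the navigation device (degrees)
    boresight_rpy_deg -- scanner-to-body misalignment angles (degrees)
    lever_arm         -- offset from navigation reference point to scanner origin (m)
    rho, theta        -- laser range (m) and scanner encoder angle (radians)
    """
    # Point in the scanner frame: a 2D scanner only sees across-track (x) and 'up' (z).
    p_scanner = rho * np.array([np.sin(theta), 0.0, np.cos(theta)])

    # Boresight rotation (scanner frame -> body frame) and attitude rotation
    # (body frame -> mapping frame). The Euler order here is a placeholder.
    R_s_to_b = Rotation.from_euler('xyz', boresight_rpy_deg, degrees=True).as_matrix()
    R_b_to_m = Rotation.from_euler('xyz', attitude_rpy_deg, degrees=True).as_matrix()

    # Rotate into the body frame, add the lever arm, rotate into the mapping
    # frame, then add the GPS position of the navigation reference point.
    return np.asarray(gps_xyz) + R_b_to_m @ (R_s_to_b @ p_scanner + np.asarray(lever_arm))

Applied to every range/angle sample along the trajectory, this is the whole point cloud – which is why the computation stays cheap even with 14 parameters in play.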

Here’s a cute diagram of the scanner system I was using – made from a 3D laser scan and some engineering drawings. How’s that? Using a 3D scanner to measure a 2D scanner. Even better, the scan was done on the helipad of a ship in the East Antarctic pack ice zone!

You can see there the relationships I’ve described above. The red box is our navigation device – a dual-GPS, three-axis-IMU strapdown navigator, which provides us with the relationship between the aircraft body and the world. The green cylinder is the LiDAR, which provides us ranges and angles in its own coordinate system. The offset between them is the lever arm, and the orientation difference between the axes of the two instruments is the boresight matrix.

Now consider that each of those parameters from each of those instruments, and the relationships between them, has some uncertainty associated with it – all of which contributes to the overall uncertainty about the geolocation of a given LiDAR point.

Mind warped yet? Mine too. We’re all exhausted from numbers now, so part 2 will examine how we take all of that stuff and determine, for every point, a geolocation uncertainty.

Feel free to ask questions, suggest corrections, or suggest better ways to clarify some of the points here.

There’s some code implementing this equation here: https://github.com/adamsteer/LiDAR-georeference – it’s Apache 2.0 licensed so feel free to fork the code and make pull requests to get it all working, robust and community-driven!

Meanwhile, read these excellent resources:

[1] Glennie, C. (2007). Rigorous 3D error analysis of kinematic scanning LIDAR systems. Journal of Applied Geodesy, 1, 147–157. http://doi.org/10.1515/JAG.2007. (accessed 19 January 2017)

[2] Schaer, P., Skaloud, J., Landtwing, S., & Legat, K. (2007). Accuracy estimation for laser point cloud including scanning geometry. In Mobile Mapping Symposium 2007, Padova. (accessed 19 January 2017)

LiDAR thoughts – where to measure, exactly?

LiDAR is a pretty common tool for geospatial stuff. It stands for ‘Light Detection and Ranging’. For the most part it involves shining a laser beam at something, then measuring how long a reflection takes to come back. Since we know the speed of light, we can use the round-trip time to estimate the distance between the light source and the ‘something’ with a great degree of accuracy. Modern instruments perform many other types of magic – building histograms of individual photons returned, comparing emitted and returned wave pulses, and even doing this with many different parts of the EM spectrum.
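As a toy illustration of the basic two-way ranging arithmetic (real instruments do far more careful timing, calibration and atmospheric correction than this):

# Toy two-way ranging: distance = speed of light * round-trip time / 2.
C = 299_792_458.0              # speed of light in a vacuum (m/s); air is very slightly slower

round_trip_time = 2.0e-6       # a 2 microsecond round trip...
distance = C * round_trip_time / 2.0
print(distance)                # ...puts the target roughly 300 m away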

Take a google around about LiDAR basics – there are many resources which already exist to explain all this, for example https://coast.noaa.gov/digitalcoast/training/intro-lidar.html.

What I want to write about here is a characteristic of the returned point data. A decision that needs to be made using LiDAR is:

Where should I measure a surface?

…but wait – why is this even a question? Isn’t LiDAR just accurate to some figure?

Sort of. A few years ago I faced a big question after finding that the LiDAR I was working on was pretty noisy. I made a model to show how noisy the LiDAR should be, and needed some data to verify the model. So we hung the LiDAR instrument in a lab and measured a concrete floor for a few hours.

Here’s a pretty old plot of what we saw:

What’s going on here? In the top panel, I’ve spread our scanlines along an artificial trajectory heading due north (something like N = np.arange(0,10,0.01)), with the Easting a vector of zeroes and height a vector of 3’s – and then made a swath map. I drew in lines showing where scan angle == 75, 90, and 115 are.
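For what it’s worth, that fake trajectory is nothing more than this (my reconstruction, not the original script – the variable names are mine):

import numpy as np

# Fake straight-line trajectory so stationary lab scanlines can be spread out
# and plotted as if they were a swath (illustrative only).
northing = np.arange(0, 10, 0.01)          # heading due north
easting = np.zeros_like(northing)          # no across-track motion
height = np.full_like(northing, 3.0)       # hanging 3 m above the floor
trajectory = np.column_stack([easting, northing, height])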

In the second panel (B), there’s a single scanline shown across-track. This was kind of a surprise – although we should have expected it. What we see is that the range observation from the LiDAR is behaving as specified – accurate to about 0.02 m (from the instrument specifications). What we didn’t realise was that accuracy is angle-dependent: moving away from instrument nadir, the impact of angular measurement uncertainties becomes greater than the ranging uncertainty of the instrument.

Panels C and D show this clearly – near instrument nadir, ranging is very good! Near swath edges we approach the published instrument specification.

This left us with the question asked earlier:

When we want to figure out a height reference from this instrument, where do we look?

If we use the lowest points, we measure too low. Using the highest points, we measure too high. In the end I fitted a surface to the points I wanted to use for a height reference – like the fit line in panel B – and used that. Here is panel B again, with some annotations to help it make sense.

You can see straight away there are bears in these woods – what do we do with points which fall below this plane? Throw them away? Figure out some cunning way to use them?

In my case, for a number of reasons, I had to throw them away, since I levelled all my points using a fitted surface, and ended up with negative elevations in my dataset. Since I was driving an empirical model based on these points, negative input values are pretty much useless. This is pretty crude. A cleverer, more data-preserving method will hopefully reveal itself sometime!
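For reference, fitting a reference plane to a patch of points by least squares only takes a few lines. This is a sketch of the general idea in NumPy, not my original MATLAB implementation, and the names are mine:

import numpy as np

def fit_reference_plane(points):
    """Least-squares plane z = a*x + b*y + c through an (N, 3) array of points."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    A = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    return a, b, c

# Heights relative to the fitted reference surface are then just residuals:
# relative_height = points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c)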

I haven’t used many commercial LiDAR products, but one I did use a reasonable amount was TerraSolid. It worked on similar principles, using aggregates of points and fitted lines/planes to do things which required good accuracy  – like boresight misalignment correction.

No doubt instruments have improved since the one I worked with. However, it’s still important to know that a published accuracy for a LiDAR survey is a kind of average (like point density) – some points have a greater chance than others of measuring something close to where it really is, and points near instrument nadir are likely to be less noisy and more accurate.

That’s it for now – another post on estimating accuracy for every single LiDAR point in a survey will come soon(ish).

The figure shown here comes from an internal technical report I wrote in 2012. Data collection was undertaken with the assistance of Dr Jan L. Lieser and Kym Newbery, at the Australian Antarctic Division. Please contact me if you would like to obtain a copy of the report – which pretty much explains this plot and a simulation I did to try and replicate/explain the LiDAR measurement noise.

You can call me Doctor now.

Glorious bearded philosopher-king is clearly more appropriate, but Doctor will do.

After six years of chasing a crazy goal and burrowing down far too many rabbit holes in search of answers to engineering problems, I got a letter from the Dean of graduate research back in late October. My PhD is done!

So what exactly did I do? The ten minute recap is this:

1. I assessed the utility of a class of empirical models for estimating snow depth on sea ice using altimetry (snow height). The models were derived from as many in situ (as in holes drilled in the ice) measurements as I could find, and I discovered that they are great in a broad sense (say hundreds of metres), but don’t quite get the picture right at high resolutions (as in metres). This is expected, and suspected – but nobody actually did the work to say how. So this was published (and of course, is imperfect – there is much more to say on the matter).

2. For a LiDAR altimetry platform I spent a lot of time tracking down noise sources, and ended up implementing a method to estimate the 3D uncertainty of every single point. This was hard! I also got quite good at staring at GPS post-processing output, and became quite insanely jealous anytime anyone showed me results from fixed-wing aircraft.

3. Now having in hand some ideas about estimating snow depth and uncertainties, I used another empirical model to estimate sea ice thickness using snow depths estimated from sea ice elevation (see 1), and propagating uncertainty from LiDAR points through to get an idea of uncertainty in my thickness estimates (see 2). Because of spatial magic I did with a robotic total station on SIPEX-II (see the blog title pic – that’s me with my friendly Leica Viva), I could also coregister some under-ice observations of sea ice draft and use them to come up with parameters to use in the sea-ice thickness from altimetry model at the scale of a single ice floe. For completeness, I did the same with a very high resolution (10cm) model I made from 3D photogrammetry on the same site. I then used this ‘validated’ parameter set to estimate sea-ice thickness for some larger regions.

Overall, the project changed direction three or four times, reshaping as we learned more – and really taking shape after some new methods for sea ice observations were applied in 2012.

What I discovered was that it is actually pretty feasible to do sea ice thickness mapping from a ship-deployed aircraft in the pack ice zone. This is important – because it means regions usually very difficult to access from a land-based runway can be examined.

I also showed that observations of sea ice so far may be underestimating sea ice thickness in certain ice regimes – and are likely to be overestimating sea ice thickness in others. This has pretty important implications for modelling the freshwater flux and other things (like habitat availability) in the Southern Ocean – so more work is already underway to try and gather more data. Encouragingly, I showed that drill holes are actually quite accurate – and showed how some new approaches to the practice of in situ sampling might markedly increase the power of these observations.

The possibilities of a robotic total station and a mad keen field sampling nutter on an ice floe – endless! (we did some awesome work with a hacky sack and the total station, hopefully coming to light soon).

…and like all PhD theses, the last part is a vast introspection on where the soft underbelly of the project lies, and what could/should be done better next time.

Needless to say, I’m really relieved to be done. I go wear a floppy hat and collect my bit of paper next month.

What next? Subject the ice thickness work to peer review! Aiming for publication in 2017.

FOSS4G 2016 wrap-up


I’ve recently returned from FOSS4G 2016 in Bonn, Germany. It was my first OSgeo conference, and it was quite amazing.

As a little background, the OSGeo foundation supports the development of open-source ‘geo software’, from desktop packages like QGIS to behind-the-scenes libraries (Proj.4) and even geospatial metadata catalogues (e.g. GeoNetwork). My involvement? Over the years I’ve used parts of the OSGeo stack heavily, and increasingly so. The same holds for the National Computational Infrastructure, so it made sense to submit an abstract and try to go.

My talk was going to be about storing and querying point cloud data from HDF files. Unfortunately I didn’t finish my experiments in time, so I ended up presenting a high-level view of NCI, our point data storage and analysis needs, and some attempts at using Postgres-pointcloud and SPDlib (an HDF-based point and waveform data storage system). This worked well with the session – until that time, most of the point data discussions at FOSS4G revolved around airborne or mobile LiDAR. It was nice to feel a bit like we had some unique problems to solve, and I feel grateful to be able to stand among the giants of OSGeo. In the same session, for example, were the developers of the PDAL and Entwine libraries – technical wizardry right there! I just try to keep up, and apply this stuff to jobs I need to do.

The main takeaway from the conference, however, is not the fact that I gave a talk. It was the amount of insight gained from the conference itself, and just being there. It was incredibly difficult to choose which sessions and workshops to attend, there were so many that were potentially useful for my life. I left feeling like I’d stepped into a seriously strong community of people who care about making great tools, and giving them away.

I chaired a session on web processing services, which was a new experience and really informative – especially Nis Hempelmann presenting on birdhouse services. This has seen immediate application in my work at NCI, so in the spirit of FOSS4G I’m running birdhouse, bugging the developers and have made my first ever commit on the project (a documentation typo – but you get the picture. Get involved).

Some other highlights were:

Ivan Sanchez’s lightning talk presenting what3fucks to the world. This actually has a serious point about how to divide the world into smaller pieces, and also how open software and data can be immediately applied, whereas closed-source options are not necessarily going to live as lively a life. If you build something and share it, it gets used.

Pretty much all of the keynote talks. Tomas Zerweck from Munich RE spoke about FOSS in risk management (which is all of us, science nerdlings – we are the boss of risk managers: without our field observation data, there is nothing). Oddly, every title slide showed Australia, and given our ignoble position as top consumers and chief backwards-pedalers of the world, I wondered if we are seen as a risk to be managed. Probably! Another standout keynote was Peter Kusterer, from IBM Germany, who described how the FOSS community got behind an emergency response to refugees arriving in Germany. Take note also, Australia: Wir schaffen das! (‘We can do this!’)

 

In the regular programme – so much to choose from! I went to a discussion on open standards and open software, and was heartened to hear that small developers and big organisations with standards voting rights are happy working partners. I saw an excellent talk by Arnulf Christi on the relative permanence of data compared to software. Mind the data – words to live by, because there’s no point collecting all this information if nobody can use it in 3, 10 or 50 years’ time. My brain was completely exploded by Olivier Courtin, on using PostGIS in a really advanced way. Indeed. And I saw as many GeoServer-related talks as I could, because it’s my job. But really, there was so much to choose from, and I will gradually catch up on the videos of talks I would have liked to see. A colleague of mine just showed me another mind-blowing talk on geotiff.js and plotly that I would never have thought to go to. Here’s the programme:

http://2016.foss4g.org/schedule.html

…and you can download or watch all of the talks from the conference here:

http://video.foss4g.org/foss4g2016/videos/

So, with the main conference work done, here is a photo from the conference dinner – on a boat on the Rhine, with an accordion and cello. Later, after some beers and discussions of life and point clouds, I was dragged onto the dance floor and busted out my finest moves.


…and here are a couple of photos from the World Conference Center, an amazing venue.



The theme of the conference was ‘building bridges’ – and it certainly met those aims. There was a vast cross section of the spatial data community from insurance companies doing their due diligence about whether to switch to FOSS software stacks, to excited developers presenting brand new ideas. I was able to meet and speak to a number of people I’d only ever heard and read about – making that personal connection which always helps when we need to develop ideas into reality later.

I’m really grateful that I was able to go, and I’m already looking ahead to 2017, and what ideas I can convert into reality so that I’ve got something to bring to the community.

Finally, a side effect of the conference was catching up with an old friend I had not seen in 14 years! We have surprisingly parallel lives – so what did we do? Climbed rocks, of course!

A model of a model of a world


 

My little people Joe and Oli came up with this amazing landscape, which they called Meerkat world. The meerkats live in a house (a square structure constructed from baskets just left of centre), below a volcano which sticks up out of the snow. A lake drains out to a waterfall, which then flows to a beach off which a pirate ship is anchored. Clearly, there are sharks near the ship!

On the left of the meerkat house is a maze, which prevents tigers and lions from getting to the meerkat house and wreaking havoc. I particularly enjoy the multi-material aspect of their work – the variety of objects used to create Meerkat world is astounding!

An ordinary photo does no justice to this work, so I modelled it instead. This took 89 photos, which were used to generate 15 000 000 points and just over 1 000 000 polygons. Now to find a way to let you interact with this world via plas.io or maybe even straight-up three.js. Oh, and to overcome the file size limitations on my web host!

A new job, and quiet times ahead for Spatialised. For now.

I’ve just taken a position at the National Computational Infrastructure in Canberra, Australia. So I won’t be seeking work using Spatialised as a platform for a while.

This space will, however, become a news feed for my science related life – PhD progress, interesting research I’m involved in, or useful articles related to what I do.

ACE CRC, Airborne LiDAR and Antarctic sea ice

Between late June and late August 2015 I worked with the Antarctic Climate and Ecosystems Co-operative Research Centre (ACE CRC) to tidy up some long-running loose ends with an airborne LiDAR project. This project is close to home – my PhD revolves around cracking some of the larger nuts associated with getting a science result from survey flights undertaken between 2007 and 2012. However, I’ve only worked on one small subset of the data – and the CRC needed a way to unlock and use the rest.

Many technical documents exist for the airborne LiDAR system, but the ‘glue’ to tie them together was lacking. In six weeks I provided exactly that. The project now has a strong set of documentation covering the evolution of the system, how to navigate the myriad steps involved in turning raw logfiles from laser scanners, navigation instruments and GPS observations into meaningful data, and how to interpret the data that arise from the system. After I provided a ‘priority list’ of flight data to work on, ACE CRC also took advantage of my experience to churn out post-processed GPS and combined GPS + inertial trajectories for those flights. The CRC also now has the tools to estimate point uncertainty and reprocess any flights from the ground up – should they wish to.

All of which means ACE CRC are in a position to make meaningful science from the current set of airborne LiDAR observations over East Antarctic sea ice.

Some of this – a part of my PhD work and a small part of the overall project – is shown here: a first cut of sea ice thickness estimates using airborne LiDAR elevations, empirical models for snow depth, and a model for ice thickness based on the assumption that ice, snow and seawater all exist in hydrostatic equilibrium.

Figure: sea ice draft estimated from airborne LiDAR elevations, an empirical model for snow, and a hydrostatic model for ice thickness from elevations and snow depth. This image shows a model of the underside of sea ice – blue and green points are elevations; purple to yellow points are ice draft estimates. Why are some drafts ‘positive’? That’s a question currently being worked on…
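For context, the hydrostatic model referred to above is commonly written along these lines: snow-surface elevation (freeboard), snow depth and assumed densities give ice thickness. This is a generic textbook formulation with illustrative density values, not the exact parameters used in the project:

def sea_ice_thickness(freeboard, snow_depth,
                      rho_water=1024.0, rho_ice=915.0, rho_snow=300.0):
    """Hydrostatic sea-ice thickness from snow-surface freeboard and snow depth.

    freeboard  -- elevation of the snow surface above local sea level (m)
    snow_depth -- snow depth on top of the ice (m)
    Densities are illustrative textbook values (kg/m^3).
    """
    return (rho_water * freeboard + (rho_snow - rho_water) * snow_depth) / \
           (rho_water - rho_ice)

# Example: 0.3 m of freeboard carrying 0.1 m of snow gives roughly 2.2 m of ice.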

A short visual history of shipping access to Australian Antarctic stations

In late 2014 I was contracted by the Antarctic Climate and Ecosystems Cooperative Research Centre to analyse Antarctic shipping patterns from 2000 to 2014. The aim was to extend a planning report first published in 2008, and provide deeper insights into shipping patterns in order to plan for future shipping seasons. Obvious shipping routes arise as a combination of crew experience, average sea conditions, sea ice conditions and logistical constraints. Shipping is expensive, so great effort goes into minimising ship time required for a given task. Days can be saved or lost by the choice of shipping route. Hugging the coast in transit between stations is clearly the shortest route – but seasonally the most risky due to the presence of sea ice.

So what routes are being used most often? And do they work?

Mining data from shipping reports and ship GPS traces, I was able to map where and when ships had difficulty accessing stations. While plenty of maps exist showing ship tracks, there has never been any analysis of where and why ships had difficulty getting to stations. The map presented below is one of the first.

It shows a ‘heatmap’ – a frequency count of hourly ship positions per 25 km square grid cell. Overlaid on the map are labelled round indicators of ship ‘stuckness’ due to sea ice conditions (as opposed to delay for operational purposes), and squares where ship-to-shore helicopter access was forced by ice conditions.
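For the curious, that kind of heatmap is just a binned count – something along these lines, assuming the hourly positions have already been projected into a metre-based coordinate system (the variable names and placeholder data are mine, not from the original analysis):

import numpy as np

# Sketch: frequency count of hourly ship positions on a 25 km grid.
cell = 25_000.0                                       # 25 km cells

# x, y: hourly ship positions in a projected (metre-based) coordinate system.
x = np.random.uniform(0, 2_000_000, 5_000)            # placeholder data
y = np.random.uniform(0, 1_000_000, 5_000)

x_edges = np.arange(x.min(), x.max() + cell, cell)
y_edges = np.arange(y.min(), y.max() + cell, cell)
counts, _, _ = np.histogram2d(x, y, bins=[x_edges, y_edges])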

This map says nothing about seasonality, or the times of year which are most risky for ships. It does show a clear preference for routes to and from stations, in particular Casey and Davis. It also shows that generally ships transit between the two stations by heading north to skirt sea ice, or hugging the coast – which is clearly troublesome at times. For the most part, ships get to stations and back with few dramas.

As a final note, a colleague pointed out that this is also a map of bias in our knowledge of the Southern Ocean. That’s a much longer story…

Hi Canberra

Spatialised has moved – I am now based in Canberra, and after a few months of moving and general hiatus I’m beginning the business of science, PhD’ing and spatial stuff once more.

So, greetings from Australia’s capital!