## More sea ice reconstructions

Because we all focus so hard while writing workshops, right? Here are a couple more ‘fun with sea ice’ visualisations. There’s nothing really scientific about these, they’re based on some proof of concept work which is very slowly iterating toward science.

So, just enjoy! Firstly, SIPEX II Ice station 7 – made from crossing-over flights.

It’s pretty! And you’ll notice that all the heights are referenced to an ellipsoid. It’s not a rigorous science dataset in this incarnation.

Next, a strip mapping test over progressively thin ice at the edge of a polynya:

You can see the ice getting progressively darker toward the viewer’s right, as it thins. If you view by elevation, you can also see some inherent issues with single-strip mapping and loose camera calibration – it’s pretty warped. So we learn, see if it works, and hopefully get to try again another day.

## Open AgTech – a prelude

I recently attended a workshop on AgTech innovation, hosted by the Canberra Innovation Network and the US Embassy. It was a fun and really useful day out – hearing stories about what drives the agriculture sector, what they see as innovative, and how niches in the sector are filled by innovators literally ‘on the ground’ and also tech people with ideas coming in from the outside.

I’m not a rural person. Over my life I’ve had a few snapshots of life outside the city – and gleaned from those was that people form communities. Communities of practice, communities of support, communities of innovation. People on the land have done this forever.

In the workshop, this basic concept was validated – forming communities was a huge theme!

A second huge theme was ‘farmers are busy running their business’ – they don’t have IT support teams in the field. The biggest competitor to tech widgetry was ‘we’ve done it this way for 130 years…’. The next biggest competitor was lack of local support, and/or really bad user interface design driven by almost-nonexistent interoperability. Many widgets just don’t talk to each other, which is a giant pain in the grass.

Pulling out a third theme, people who make their living farming don’t like salespeople. To wrap that into tech parlance, there’s heavy skepticism that some new platform or widget is going to make their life better – innovation needs to be proven, it needs to be understood, it needs grassroots input and adoption.

Interestingly, some research was presented about crop farming being fast to take up new technology but slow to actually exploit it – whereas animal-based farming was slow to take on new tech, but quick to exploit it once they did.

All of this is seasoned by a liberal sprinkling of agronomists, startups, and big old companies all trying to make a buck. Apparently AgTech is going to be worth astronomical amounts in the future (yes, we’ve all read those kinds of reports…).

…excuse my skepticism. Technology and agriculture actually go hand in hand – they always have (recall that we only gained agriculture through our ability to develop and share technology)!

What surprised me most about everything there, though, was a lack of understanding of open systems – despite ten thousand years of sharing ideas in order to make agriculture happen, grow, and create how we live today. The sale of Red Hat to IBM was cast as ‘hey, you can make money with open source things’.

…but we all know that. Myriad businesses thrive on open systems (see Planetlabs, for a media-worthy example). Entire organisations are built on the construction and promulgation of open standards (the Open Geospatial Consortium, for example). Huge communities of practice have emerged around open software (see the Open Source Geospatial Foundation). And importantly, these communities emerge – in some cases almost spontaneously! Grassroots adoption counts for everything.

What is the real power of open systems in the context of agtech?

The open source business model seems antithetical to the whole startup/venture ecosystem – you mean to give away all the IP? Of course!

…but it resonates extremely well with the small window I have into the agricultural community. Open systems allow communities to grow. Open systems allow and encourage ‘field innovation’, which the agricultural sector is famous for! Open systems work to engage and build local communities of practice. Open systems lower financial barriers to implementing or creating bleeding-edge tech solutions. Open systems disrupt vendor lock-in, and power interoperable systems from the ground up (not as a bolt-on afterthought!).

This works for knitting, welding and cake baking. It can work for technology too! We know this. It’s proven in geospatial business. It’s proven in high performance computing (that famous recent sale I mentioned earlier). Open source software underpins the entire internet! Unlike Chook Coin, it’s really no joke.

I will boldly contend that the future of AgTech is open. To meet long-term planetary challenges requiring rapid change in how we think about food production and supply, open standards and open systems are the only way to generate the scale of innovation, and harness the diversity of thinking, required to construct a sustainable future. The challenge for the AgTech industry is how to break down models which worked well for so long, but won’t for much longer. Of course, the big stick of risk management and liability looms over everything.

However – it’s a known challenge: we can see the size and scope of it, and we can mitigate it.

The wicked problems – the unknown unknowns ahead – are far more important and difficult. These require exactly the kind of systems which allow as many collaborators and innovators as possible to engage, and work out myriad solutions without needing to wait for a vendor update cycle. One of those wicked problems is an open approach itself! How can a community developing serious tech be managed effectively? It’s an issue even in long-standing communities.

I’m really looking forward to AgTech Innovation workshop 2.0, the conversations which happen between now and then, and the conversations to come.

Of course, I’d love to talk to you more about building an open AgTech future. While right now I’m a tech guy blowing smoke – and writing is easy – I have a hunch. I think the key phrases ‘open standards’, ‘open systems’ and ‘open source’ are coming to the AgTech conversation. In order to achieve the incredible dreams of the agricultural community and keep innovators in the fields, they need to. And of course, you can do whatever you like with them.

Finally – stay tuned, there’s an incredible (we think) open source AgTech project brewing… nothing talks like a proven concept, right?

## The LiDAR uncertainty budget II: computing uncertainties

In part 1, we looked at one way that a LIDAR point is created. Just to recap, we have 14 parameters (for the 2D scanner used in this example) each with their own uncertainty. Now, we work out how to determine the geolocation uncertainty of our points.

First, let’s talk about what those uncertainties are. The source that comes to mind first is GPS positioning uncertainty. Using a dual-frequency GPS gives the advantage of sub-decimetre positioning, after some post-processing is done. Merging those positions with observations from the IMU constrains them even further.

For a quick mental picture of how this works, consider that the navigation unit collects a GPS fix every half a second, while the inertial sensors which measure pitch, roll and heading sample 250 times a second. The inertial unit also keeps track of how far it thinks it has moved since the last GPS fix – and because the rate of motion is tracked, there is a limit on how far the platform can possibly have moved between fixes. The navigation unit compares the two data sources. If a GPS fix is wildly unexpected, a correction is added to the GPS positions and we carry on.
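As a toy illustration of that blending step – with invented numbers, and nothing like the full Kalman filter a real navigation unit runs – a one-dimensional update looks like:

```python
import numpy as np

# Toy 1D sketch of GPS/IMU blending. All values are made up for illustration.
def blend(predicted_pos, predicted_var, gps_pos, gps_var):
    """Weight the IMU-predicted position against a new GPS fix.
    The fused result sits between the two, pulled toward whichever
    source is less uncertain."""
    gain = predicted_var / (predicted_var + gps_var)
    fused_pos = predicted_pos + gain * (gps_pos - predicted_pos)
    fused_var = (1 - gain) * predicted_var
    return fused_pos, fused_var

# IMU dead-reckoning says we're at 10.0 m (drifted, variance 0.04 m^2);
# a GPS fix arrives at 10.3 m (variance 0.0025 m^2, i.e. ~5 cm sigma).
pos, var = blend(10.0, 0.04, 10.3, 0.0025)
print(pos, var)  # fused position lands close to the (more certain) GPS fix
```

The fused variance is smaller than either input on its own – which is the whole point of integrating the two sensors.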

So we get pretty accurate positions (~5cm).

But is that the uncertainty of our point positions? Nope. There’s more. The laser instrument comes with specifications about how accurate laser ranging is, and how accurate its angular encoder is.

Even more, the navigation device has specifications about how accurate its accelerometers are, and all of these uncertainties contribute! How?

### Variance-covariance propagation to the rescue

Glennie [1] and Schaer [2] used variance-covariance propagation to estimate the uncertainty in geolocation of LiDAR points. This sounds wildly complex, but at its root is a simple idea:

$\text{uncert}(\text{thing1} + \text{thing2}) = \sqrt{\text{uncert}(\text{thing1})^2 + \text{uncert}(\text{thing2})^2}$

See that? We figured out the uncertainty of a thing made from adding thing1 and thing2, by knowing what the uncertainties of thing1 and thing2 are.
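In code, that’s just adding uncertainties in quadrature. A minimal sketch (example values are invented):

```python
import math

def combined_uncertainty(u1, u2):
    """Uncertainty of (thing1 + thing2), assuming independent errors."""
    return math.sqrt(u1**2 + u2**2)

# e.g. a 0.05 m positioning uncertainty combined with a 0.02 m ranging
# uncertainty: the result is dominated by the larger term.
print(combined_uncertainty(0.05, 0.02))  # ~0.054 m, not 0.07 m
```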

Going from simple addition to a few matrix multiplications was a quantum leap, handled neatly by symbolic math toolboxes – but only after I nearly quit my PhD trying to write out the uncertainty equation by hand.

Here is the uncertainty equation for LiDAR points, where $U$ is the uncertainty:

$U = F C F^T$

What!! Is that all? I’m joking, right?

Nearly. $F$ is a 14-element vector containing the partial derivatives of each of the parameters that go into the LiDAR georeferencing equation (the so-called Jacobians). $C$ is a 14 x 14 matrix whose diagonal holds the uncertainties associated with each parameter.

Writing this set of equations out by hand was understandably a nightmare, but more clever folks than myself have achieved it – and while I certainly learned a lot about linear algebra in this process, I took advice from an old astronomer and used a symbolic maths toolbox to derive the Jacobians.

…which meant that the work actually got done, and I could move on with research!
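As a sketch of what the symbolic approach looks like – using Python’s sympy, and a deliberately cut-down two-parameter version of the problem rather than the full 14-parameter equation:

```python
import sympy as sp

# Cut-down illustration: point height z from just two parameters
# (range rho and scan angle theta) instead of the full fourteen.
rho, theta = sp.symbols('rho theta', positive=True)
s_rho, s_theta = sp.symbols('sigma_rho sigma_theta', positive=True)
z = rho * sp.cos(theta)

# F: row vector of partial derivatives (the Jacobians) w.r.t. each parameter
F = sp.Matrix([[sp.diff(z, rho), sp.diff(z, theta)]])
# C: diagonal variance-covariance matrix of the parameter uncertainties
C = sp.diag(s_rho**2, s_theta**2)
# U = F C F^T -- the propagated variance of z
U = sp.expand((F * C * F.T)[0])
print(U)  # cos^2(theta)*sigma_rho^2 + rho^2*sin^2(theta)*sigma_theta^2
```

The real derivation is the same recipe, just with a much bigger $F$ and $C$ – which is exactly why handing it to a symbolic toolbox beats doing it by hand.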

### Now my brain is fully exploded – what do uncertainties look like?

Glennie [1] and Schaer [2] both report that at some point, angular motion uncertainty overtakes GPS position uncertainty as the primary source of doubt about where a point is. Fortunately I found the same thing. Given the inherent noise of the system I was using, this occurred pretty quickly. In the Kalman filter which integrates GPS and IMU observations, a jittery IMU is assigned a higher uncertainty at each epoch. This makes sense, but also means that angular uncertainties need to be minimised (for example, by flying instruments in a very quiet aircraft or an electric-powered UAS).

I made the following map years ago to check that I was getting georeferencing right, and also getting uncertainty estimates working properly.

It could be prettier, but you see how the components all behave – across-track and ‘up’ uncertainties are dominated by the angular component not far off nadir. Along-track uncertainties are more consistent across-track, because the angular measurement components (aircraft pitch and a bit of yaw) are less variable.

The sample below shows LiDAR point elevation uncertainties (relative to an ITRF08 ellipsoid) during level flight over sea ice. At instrument nadir, height uncertainty is more or less equivalent to instrument positioning uncertainty. Increasing uncertainties toward swath edges are a function of angular measurement uncertainty.
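To see the swath-edge effect numerically, here is an illustrative calculation with made-up instrument figures – a simplified vertical-uncertainty model with just three terms, not the full 14-parameter propagation:

```python
import numpy as np

# Illustrative values only: sigma_gps = 0.05 m, sigma_range = 0.02 m,
# sigma_angle = 0.01 degrees, flying height 300 m.
sigma_gps, sigma_range = 0.05, 0.02
sigma_angle = np.deg2rad(0.01)
h = 300.0  # flying height above the surface, metres

theta = np.deg2rad(np.linspace(-30, 30, 7))  # scan angle, nadir = 0
rho = h / np.cos(theta)                      # slant range to the surface

# vertical uncertainty: GPS term + range term + angular term (quadrature)
sigma_z = np.sqrt(sigma_gps**2
                  + (sigma_range * np.cos(theta))**2
                  + (rho * sigma_angle * np.sin(theta))**2)
print(np.round(sigma_z, 3))  # smallest at nadir, growing toward swath edges
```

At nadir the angular term vanishes and the height uncertainty is essentially the positioning-plus-ranging uncertainty; toward the swath edges the angular term takes over – the same pattern the map above shows.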

### But normally LiDAR surveys come with an averaged accuracy level. Why bother?

##### i. why I bothered:

In a commercial survey the accuracy figure is determined mostly by comparing LiDAR points with ground control data – and the more ground control there is, the better (you have more comparison points and can make a stronger estimate of the survey accuracy).

Over sea ice this is impossible. I was also using these points as input for an empirical model which attempts to estimate sea-ice thickness. As such, I needed to also propagate uncertainties from my LiDAR points through the model to sea-ice thickness estimates.

In other words, I didn’t want to guess what the uncertainties in my thickness estimates were. As far as practicable, I needed to know the input uncertainty for each thickness estimate – so for every single point. It’s nonsensical to suggest that every observation, and therefore every thickness estimate, comes with the same level of uncertainty.

Here is another map from my thesis. It’s pretty dense, but shows the process in action:

The top panel is LiDAR ‘height above sea level’ for sea ice. Orange points are ‘sea level reference’ markers, and the grey patch highlights an intensive survey plot. The second panel is the uncertainty associated with each height measurement. In panel three we see modelled sea ice thickness (I’ll write another post about that later), and the final panel shows the uncertainty associated with each thickness estimate. Thickness uncertainties are greater than LiDAR height uncertainties because we’re also accounting for uncertainties in each of the other model parameters (LiDAR elevations are just one). So, when I get to publishing sea-ice thickness estimates, I can put really well-made error bars around them!

##### ii. why you, as a commercial surveyor or an agency contracting a LiDAR survey or a LIDAR end user should bother:

The computation of uncertainties is straightforward and quick once the initial figuring of the Jacobians is done – and these only need to be recomputed when you change your instrument configuration. HeliMap (Switzerland) do it on-the-fly and compute a survey quality metric (see Schaer et al., 2007 [2]) which allows them to repeat any ‘out of tolerance’ areas before they land. Getting an aircraft in the air is hard, and keeping it there for an extra half an hour is easy – so this capability is really useful in terms of minimising costs to both contracting agencies and surveyors. This analysis, in conjunction with my earlier post on ‘where exactly to measure in LiDAR heights’, shows you where you can assign greater confidence to a set of LiDAR points.

It’s also a great way to answer some questions – for example, are surveyors flying over GCPs at nadir, and therefore failing to meet accuracy requirements off-nadir? (This is paranoid ‘all the world is evil’ me speaking – I’m sure surveyors are aware of this stuff and collect off-nadir ground control matches as well.) Are critical parts of a survey being captured off-nadir, when it would be really useful to get the best possible data over them? (This has implications for flight planning.) As a surveyor, this type of thinking will give you fewer ‘go back and repeat’ jobs – and as a contracting agency, you might spend a bit more on modified flight plans, but not a lot more, to get actually really great data instead of signing off and then getting grief from end users.

As an end user of LiDAR products, if you’re looking for data quality thresholds – for example ‘points with noise < $N$ m over a flat surface’ – this type of analysis will help you out. I’ve also talked to a number of end users who wonder about noisy flight overlaps, and why some data don’t appear to be well-behaved. Again, having some quality metric around each point will help an end user determine which data are useful, and which should be left alone.

### Summarising

I certainly stand on the shoulders of giants here, and still incur brain-melting when I try to come at these concepts from first principles (linear algebra is still hard for me!). However, the idea of being able to estimate an a priori quality metric is, in my mind, really useful.

I don’t have much contact with commercial vendors, so I can only say ‘this is a really great idea, do some maths and make your LiDAR life easier!’.

I implemented this work in MATLAB, and you can find it here:

With some good fortune it will be fully re-implemented in Python sometime this year. Feel free to clone the repository and go for it. Community efforts rock!

And again, refer to these works. I’ll see if I can find any more recent updates, and would really appreciate hearing about any recent work on this stuff:

[1] Glennie, C. (2007). Rigorous 3D error analysis of kinematic scanning LIDAR systems. Journal of Applied Geodesy, 1, 147–157. http://doi.org/10.1515/JAG.2007. (accessed 19 January 2017)

[2] Schaer, P., Skaloud, J., Landtwing, S., & Legat, K. (2007). Accuracy estimation for laser point cloud including scanning geometry. In Mobile Mapping Symposium 2007, Padova. (accessed 19 January 2017)

## The LiDAR uncertainty budget I: georeferencing points

This is part 1 of 2, explaining how uncertainties in LiDAR point geolocation can be estimated for one type of scanning system. We know LiDAR observations of elevation/range are not exact (see this post), but a critical question of much interest to LiDAR users is ‘how exact are the measurements I have’?

As an end-user of LiDAR data I get a bunch of metadata provided by surveyors. One of the key things I look for is the accuracy estimate. Usually this comes as some uncertainty in East, North and Up measurements, in metres, relative to the spatial reference system the point measurements are expressed in. What I don’t get is any information about how these figures were arrived at, or whether they apply equally to every point. It’s a pretty crude measure.

As a LiDAR maker, I was concerned with the uncertainty of each single point – particularly height – because I use these data to feed a model for estimating sea ice thickness. I also need to feed in an uncertainty, so that I can put some boundaries around how good my sea ice thickness estimate is. However, there was no way of doing this in an off-the-shelf software package – so I implemented the LiDAR georeferencing equations and a variance-covariance propagation method for them in MATLAB, and used those. This was a choice of convenience at the time, and I’m now slowly porting my code to Python, so that you don’t need a license to make LiDAR points and figure out their geolocation uncertainties.

My work was based on two pieces of fundamental research: Craig Glennie’s work on rigorous propagation of uncertainties in 3D [1], and Phillip Schaer’s implementation of the same equations [2]. Assuming that we have a 2D scanner, the LiDAR georeferencing equation is:

$\begin{bmatrix} x \\ y \\ z \end{bmatrix}^m = \begin{bmatrix} X \\ Y \\ Z \end{bmatrix} + R^b_m \left[ R^b_s \, \rho \begin{bmatrix} \sin\Theta \\ 0 \\ \cos\Theta \end{bmatrix} + \begin{bmatrix} a_x \\ a_y \\ a_z \end{bmatrix}^b \right]$

The first term on the right is the GPS position of the vehicle carrying the LiDAR:

$\begin{bmatrix} X\\ Y\\ Z\\ \end{bmatrix}$

The next term is made up of a few things. Here we have points in LiDAR scanner coordinates:

$\rho \begin{bmatrix} \sin\Theta \\ 0 \\ \cos\Theta \end{bmatrix}$

…which means ‘range from scanner to target’ ($\rho$) multiplied by $\sin\Theta$ to give an X coordinate and $\cos\Theta$ to give a Z coordinate of the point measured.

Note that there is no Y coordinate! This is a 2D scanner, observing an X axis (across track) and a Z axis (from ground to scanner). The Y coordinate is provided by the forward motion of a vehicle, in this case a helicopter.

For a 3D scanner, or a scanner with an elliptical scan pattern, there will be additional terms describing where a point lies in the LiDAR frame. Whatever term is used at this point, the product is the position of a reflection-causing object in the LiDAR instrument coordinate system, which is rotated to the coordinate system of the vehicle’s navigation device using the matrix:

$R^b_s$

The point observed also has a lever arm offset added (the distance in 3 axes between the navigation device’s reference point and the LiDAR’s reference point), so we pretend we’re putting our navigation device exactly on the LiDAR instrument reference point:

$\begin{bmatrix} a_x\\ a_y\\ a_z\\ \end{bmatrix}^b$

This mess of terms is finally rotated to a mapping frame using Euler angles in three axes (essentially heading, pitch, roll) recorded by the navigation device:

$R^b_m$

…and added to the GPS coordinates of the vehicle (which are really the GPS coordinates of the navigation system’s reference point).

There are a bunch of terms there – 14 separate parameters go into producing a LiDAR point, and that’s neglecting beam divergence and only considering single returns. Sounds crazy – but the computation is actually pretty efficient.
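A minimal sketch of the georeferencing equation above in Python – with numpy, and invented example values; in practice the rotation matrices and lever arm come from the navigation solution and the boresight/lever-arm calibration:

```python
import numpy as np

# Sketch of the 2D-scanner georeferencing equation. Example values invented.
def georeference(gps_xyz, R_bm, R_sb, rho, theta, lever_arm):
    """Return the mapping-frame coordinates of one LiDAR return.
    R_bm: body -> mapping rotation (from heading/pitch/roll)
    R_sb: scanner -> body rotation (the boresight matrix)"""
    # point in the scanner frame: X across-track, no Y, Z along the beam
    p_scanner = rho * np.array([np.sin(theta), 0.0, np.cos(theta)])
    # rotate into the body frame and apply the lever-arm offset
    p_body = R_sb @ p_scanner + lever_arm
    # rotate into the mapping frame and add the GPS position
    return gps_xyz + R_bm @ p_body

# Trivial check: identity rotations, zero lever arm, nadir shot (theta = 0)
gps = np.array([1000.0, 2000.0, 300.0])
p = georeference(gps, np.eye(3), np.eye(3), rho=300.0, theta=0.0,
                 lever_arm=np.zeros(3))
print(p)  # the GPS position plus 300 m along the (unrotated) scanner Z axis
```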

Here’s a cute diagram of the scanner system I was using – made from a 3D laser scan and some engineering drawings. How’s that? Using a 3D scanner to measure a 2D scanner. Even better, the scan was done on the helipad of a ship in the East Antarctic pack ice zone!

You can see there the relationships I’ve described above. The red box is our navigation device – a dual-GPS, three-axis-IMU strapdown navigator, which provides us with the relationship between the aircraft body and the world. The green cylinder is the LiDAR, which provides us ranges and angles in its own coordinate system. The offset between them is the lever arm, and the orientation difference between the axes of the two instruments is the boresight matrix.

Now consider that each of those parameters from each of those instruments, and the relationships between them, has some uncertainty associated with it, which contributes to the overall uncertainty about the geolocation of a given LiDAR point.

Mind warped yet? Mine too. We’re all exhausted from numbers now, so part 2 will examine how we take all of that stuff and determine, for every point, a geolocation uncertainty.

Feel free to ask questions, suggest corrections, or suggest better ways to clarify some of the points here.

There’s some code implementing this equation here: https://github.com/adamsteer/LiDAR-georeference – it’s Apache 2.0 licensed so feel free to fork the code and make pull requests to get it all working, robust and community-driven!

[1] Glennie, C. (2007). Rigorous 3D error analysis of kinematic scanning LIDAR systems. Journal of Applied Geodesy, 1, 147–157. http://doi.org/10.1515/JAG.2007. (accessed 19 January 2017)

[2] Schaer, P., Skaloud, J., Landtwing, S., & Legat, K. (2007). Accuracy estimation for laser point cloud including scanning geometry. In Mobile Mapping Symposium 2007, Padova. (accessed 19 January 2017)

## LiDAR thoughts – where to measure, exactly?

LiDAR is a pretty common tool for geospatial stuff. It means ‘Light Detection and Ranging’. For the most part it involves shining a laser beam at something, then measuring how long a reflection takes to come back. Since the speed of light is known, we can use the round-trip time to estimate the distance between the light source and ‘something’ with a great degree of accuracy. Modern instruments perform many other types of magic – building histograms of individual photons returned, comparing emitted and returned wave pulses, and even doing this with many different parts of the EM spectrum.

Take a google around about LiDAR basics – there are many resources which already exist to explain all this, for example https://coast.noaa.gov/digitalcoast/training/intro-lidar.html.

What I want to write about here is a characteristic of the returned point data. A decision that needs to be made using LiDAR is:

Where should I measure a surface?

…but wait – what? Why is this even a question? Isn’t LiDAR just accurate to some figure?

Sort of. A few years ago I faced a big question after finding that the LiDAR I was working on was pretty noisy. I made a model to show how noisy the LiDAR should be, and needed some data to verify the model. So we hung the LiDAR instrument in a lab and measured a concrete floor for a few hours.

Here’s a pretty old plot of what we saw:

What’s going on here? In the top panel, I’ve spread our scanlines along an artificial trajectory heading due north (something like N = np.arange(0,10,0.01)), with the Easting a vector of zeroes and height a vector of 3’s – and then made a swath map. I drew in lines showing where scan angle == 75, 90, and 115 are.

In the second panel (B), there’s a single scanline shown across-track. This was kind of a surprise – although we should have expected it. What we see is that the range observation from the LiDAR is behaving as specified – accurate to about 0.02 m (from the instrument specifications). What we didn’t realise was that accuracy is angle-dependent: moving away from instrument nadir, the impact of angular measurement uncertainties becomes greater than the ranging uncertainty of the instrument.

Panels C and D show this clearly – near instrument nadir, ranging is very good! Near swath edges we approach the published instrument specification.

This left us with the question asked earlier:

When we want to figure out a height reference from this instrument, where do we look?

If we use the lowest points, we measure too low. Using the highest points, we measure too high. In the end I fitted a surface to the points I wanted to use for a height reference – like the fit line in panel B – and used that. Here is panel B again, with some annotations to help it make sense.

You can see straight away there are bears in these woods – what do we do with points which fall below this plane? Throw them away? Figure out some cunning way to use them?

In my case, for a number of reasons, I had to throw them away, since I levelled all my points using a fitted surface, and ended up with negative elevations in my dataset. Since I was driving an empirical model based on these points, negative input values are pretty much useless. This is pretty crude. A cleverer, more data-preserving method will hopefully reveal itself sometime!
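The surface-fitting step can be sketched as an ordinary least-squares plane fit – synthetic data here, standing in for the candidate height-reference points:

```python
import numpy as np

# Fit a plane z = a*x + b*y + c through candidate height-reference points,
# then express heights relative to the fitted plane. Data are synthetic:
# a gently tilted 'floor' at ~3 m with 0.02 m of measurement noise.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 10, 200), rng.uniform(0, 10, 200)
z = 0.02 * x - 0.01 * y + 3.0 + rng.normal(0, 0.02, 200)

# design matrix [x, y, 1] and least-squares solve for (a, b, c)
A = np.column_stack([x, y, np.ones_like(x)])
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)

residuals = z - (a * x + b * y + c)  # heights relative to the fitted surface
print(round(residuals.std(), 3))     # roughly the per-point noise level
```

Levelling points against the fitted surface is what produces the negative elevations mentioned above – roughly half the reference points end up below the plane by construction.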

I haven’t used many commercial LiDAR products, but one I did use a reasonable amount was TerraSolid. It worked on similar principles, using aggregates of points and fitted lines/planes to do things which required good accuracy – like boresight misalignment correction.

No doubt instruments have improved since the one I worked with. However, it’s still important to know that a published accuracy for a LiDAR survey is kind of a mean (like point density) – some points will have a greater chance of measuring something close to where it really is than others, and points near instrument nadir are likely to be less noisy and more accurate.

That’s it for now – another post on estimating accuracy for every single LiDAR point in a survey will come soon(ish).

The figure shown here comes from an internal technical report I wrote in 2012. Data collection was undertaken with the assistance of Dr Jan L. Lieser and Kym Newbery, at the Australian Antarctic Division. Please contact me if you would like to obtain a copy of the report – which pretty much explains this plot and a simulation I did to try and replicate/explain the LiDAR measurement noise.

## ACE CRC, Airborne LiDAR and Antarctic sea ice

Between late June and late August 2015 I worked with the Antarctic Climate and Ecosystems Co-operative Research Centre (ACE CRC) to tidy up some long running loose ends with an airborne LiDAR project. This project is close to home – my PhD revolves around cracking some of the larger nuts associated with getting a science result from survey flights undertaken between 2007 and 2012. However, I’ve worked on one small subset of data – and the CRC was in need of a way to unlock and use the rest.

Many technical documents exist for the airborne LiDAR system, but the ‘glue’ to tie them together was lacking. In six weeks I provided exactly that. The project now has a strong set of documentation covering the evolution of the system, how to navigate the myriad steps involved in turning raw logfiles from laser scanners, navigation instruments and GPS observations into meaningful data, and how to interpret the data that arise from the system. After providing a ‘priority list’ of flight data to work on, ACE CRC also took advantage of my experience to churn out post-processed GPS and combined GPS + inertial trajectories for those flights. The CRC also now has the tools to estimate point uncertainty and reprocess any flights from the ground up – should they wish to.

All of which means ACE CRC are in a position to make meaningful science from the current set of airborne LiDAR observations over East Antarctic sea ice.

Some of this – a part of my PhD work and a small part of the overall project – is shown here. A first-cut of sea ice thickness estimates using airborne LiDAR elevations, empirical models for snow depth, and a model for ice thickness based on the assumption that ice, snow and seawater all exist in hydrostatic equilibrium.

## A short visual history of shipping access to Australian Antarctic stations

In late 2014 I was contracted by the Antarctic Climate and Ecosystems Cooperative Research Centre to analyse Antarctic shipping patterns from 2000 to 2014. The aim was to extend a planning report first published in 2008, and provide deeper insights into shipping patterns in order to plan for future shipping seasons. Obvious shipping routes arise as a combination of crew experience, average sea conditions, sea ice conditions and logistical constraints. Shipping is expensive, so great effort goes into minimising ship time required for a given task. Days can be saved or lost by the choice of shipping route. Hugging the coast in transit between stations is clearly the shortest route – but seasonally the most risky due to the presence of sea ice.

So what routes are being used most often? And do they work?

Mining data from shipping reports and ship GPS traces, I was able to map where and when ships had difficulty accessing stations. While plenty of maps exist showing ship tracks, there has never been any analysis of where and why ships had difficulty getting to stations. The map presented below is one of the first.

It shows a ‘heatmap’ – a frequency count of hourly ship positions per 25 km square grid cell. Overlaid on the map are labelled round indicators of ship ‘stuckness’ due to sea ice conditions (as opposed to delay for operational purposes), and squares where ship-to-shore helicopter access was forced by ice conditions.
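The frequency count itself is simple to sketch – synthetic positions here, standing in for the real hourly GPS traces:

```python
import numpy as np

# Bin hourly ship positions into 25 km grid cells. Coordinates are
# synthetic eastings/northings in km, invented for illustration.
rng = np.random.default_rng(1)
east = rng.normal(500, 100, 5000)   # 5000 hourly ship positions
north = rng.normal(200, 50, 5000)

cell = 25  # km
e_edges = np.arange(east.min() // cell * cell, east.max() + cell, cell)
n_edges = np.arange(north.min() // cell * cell, north.max() + cell, cell)

# counts[i, j]: number of hourly positions falling in grid cell (i, j)
counts, _, _ = np.histogram2d(east, north, bins=[e_edges, n_edges])
print(int(counts.sum()))  # every position falls in exactly one cell
```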

This map says nothing about seasonality, or the times of year which are most risky for ships. It does show a clear preference for routes to and from stations, in particular Casey and Davis. It also shows that generally ships transit between the two stations by heading north to skirt sea ice, or hugging the coast – which is clearly troublesome at times. For the most part, ships get to stations and back with few dramas.

As a final note, a colleague pointed out that this is also a map of bias in our knowledge of the Southern Ocean. That’s a much longer story…

## Acquiring prism lock: the cover photo

The cover photo for this site shows: the back of my head, a Leica Viva TS 15, a prism, and a bright yellow, low-cost, very effective instrument warming/battery box I’m very proud of! I’m acquiring prism lock using the remote control, before heading out to collect locations on a SIPEX II ice station. The sea ice surveying project was part of my work for the Australian Antarctic Division, and complements my PhD studies. It was a challenging task – nobody knew if the total station would play happily at -20 degrees Celsius on drifting sea ice. It performed admirably, and the results will provide much-needed spatial glue for on-, over- and under-ice spatial datasets collected on the voyage. Photo: Polly Alexander