More sea ice reconstructions

Because we all focus so hard during writing workshops, right? Here are a couple more ‘fun with sea ice’ visualisations. There’s nothing really scientific about these; they’re based on some proof-of-concept work which is very slowly iterating toward science.

So, just enjoy! Firstly, SIPEX II Ice station 7 – made from crossing-over flights.

It’s pretty! And you’ll notice that all the heights are referenced to an ellipsoid. It’s not a rigorous science dataset in this incarnation.

Next, a strip mapping test over progressively thin ice at the edge of a polynya:

You can see the ice getting progressively darker toward the viewer’s right as it thins. If you view by elevation, you can also see some inherent issues with single-strip mapping and loose camera calibration – it’s pretty warped. So we learn, see if it works, and hopefully get to try again another day.

Business penguins

I’ve been to Antarctica four times, and seen a lot of penguins. I’ve also worked in various places for more than a couple of decades now, and seen a lot of penguins. I am also trained in behavioural psychology – so I observe things about behaviour and see patterns. I can’t not!

Anyway.

One of the most amusing types of Antarctic penguins is the Adélie.

They have some really interesting characteristics. They travel in packs, and when they pop out on the sea ice they are the boss of everything – roaming around like they own the place, all dressed up and no place to go.

This struck me as exactly parallel to, say, walking around the centre of Sydney (or Melbourne) on a Friday night just after 5; or cruising an airport business lounge.

That isn’t the whole story either. As well as their besuited swagger, Adélie penguins have another really interesting characteristic. In 2008 and 2009 I went to Davis Station, and assisted with station resupply along with a bunch of other stuff. I got to watch a lot of penguins coming in from the ocean to breed.

On the undisturbed fast ice, the penguins rule the place. However, when the mobs of Adélies got to a new feature – for example a line of snow piled up along the sea ice, after a snowplow had cleared a path – they all stopped.

Even though the penguins could easily have jumped or slid across this new barrier and carried on, they stopped.

They walked up and down along the line.

They discussed the matter internally.

Eventually, after many meetings, one penguin would dramatically cross the barrier.

This led to a most fantastic observation. Every other penguin would take the cue and cross the barrier – but not where they stood. They would all go as close as possible to where the innovative penguin crossed, and cross the barrier there.

Even if it’s much farther to walk, or ends up going someplace less than ideal for the penguin mob. Every penguin sticks to the program.

Having observed behaviour of people in work environments for a long time, it completely struck me that this same pattern is played out across the business world. We faff around and try as hard as we can to do anything but cross some menial but unusual barrier; then when one innovator makes the leap, we all follow!

Here are some real Adélie penguins, pretending to own my surveying equipment:

Penguins, trying to work out what on earth this strange three-legged creature is…

Visual language and penguin innovation patterns
I want to draw a parallel here to the work of foundational cognitive researchers who proposed that our cognitive world is constrained by the language we use about our world. In other words, if we limit the number of ways we can express an idea as language, we limit the possible number of ideas we can have. I first came across this concept (linguistic relativity) studying linguistics a long time ago. This review seems like a good way to learn about the idea. It also specifically calls out perception – as if the way we frame language interacts with the way we are able to perceive things.

Without any reference to actual research, I want to extend this to visual language. In business, we still hear about ways to dress – a visual lexicon of what counts as a ‘business-like’ appearance in clothing, office styling, everything. Countless business articles talk about what to wear, all ending with more or less the same pattern (here, here). We also make up science about it without considering the fact that, well, all our cultural messaging says we should be penguins. Even if we don’t impose ‘suits’, we make recommendations to homogenise our visual language around roles.

I’m proposing here that we limit how we can do business by restricting our visual language around what we think appears business-like. In particular the uniformity of business ‘dress’. In effect, we design a limit around how we can think about problems. So we end up solving our own issues in the same way that Adélie penguins do.

In the familiar, we strut around like the boss of everything.

Faced with a new obstacle, even though seemingly simple, we look it up and down, have endless meetings, and create a lot of extra work for ourselves.

When someone finally makes the leap, we all go to the same point and follow.

…which once more ties the analogy neatly to penguins! Even movie studios agree – if you’ve ever seen Happy Feet, it’s an entire film about this premise.

So diversify!
By designing a restrictive visual language, and a restrictive set of social mores around how to dress for business, we limit our ability to do business well. So we should do something different! Sure, the usual rules of social engagement apply – as in ‘turn up to a workplace in clothes that are clean and relatively fresh’ – but really, we’re all grown-ups. We can work out what works for us and what doesn’t.

I usually turn up in a t-shirt and my faithful cactus pants. Because my identity is strongly tied to outdoor activities, I generally work and feel best in gear that works for me. I’m ready to work, I’m comfortable and confident, and I don’t feel like I need to get home to get this goddamn tie off!

…and then I have a supply of outdoor gear when my kit gets one too many coffee spills for work (which also has sustainability implications – I buy vastly fewer things and use them more).

But there’s more. Before this is dismissed as a smackdown on suits (which it is… c’mon) – it applies equally to any place where visual and cultural homogeneity in an organisation is dominant. It might not be suits; it might be ‘oh, you need to only drink red organic IPA made from hops fertilised specifically with cowshit which magic mushrooms grow on’. Or ‘you can wear what you like, but we’re gonna judge your selection if it’s not brand Y’ – which brings us to the next section.

Wider implications – diversity of appearance, diversity of thinking, diversity of life
In the ten years I’ve been meaning to write out this janky anecdote, I’ve realised that a fun little dump on the concept of ‘dressing for business’ (I mean sheesh, who puts a noose around themselves then heads off to work. Symbolism much?) is just a side note to the main story.

Penguins are programmed by millennia of evolution to operate the way they do in order to survive. Their actions at a pile of snow reflect their actions at an ice floe edge – a test subject is sent into the water; if they’re not eaten by a leopard seal, then everyone else goes in near the same (presumably still safe) spot.

Humans don’t need to act that way – but we have designed our society to do so. It seems completely crazy that we design a visual language which limits our ability to create; to innovate; and to feel comfortable and confident – in the name of a really limited view of appearance. Sure, you don’t want people turning up to meetings in their underpants; but it’s not a black and white scenario.

Even as an extremely privileged white man, I’ve been judged on appearance for work over my lifetime. Now, as an extremely privileged white man with 20 years’ experience and a PhD, I don’t want to or need to work for you if you think my apparel is what makes me employable.

Women, people of colour and non-binary folk all have a much harder time, and often don’t get to have that choice – so they spend entire lives trying to squish themselves into some boxed-up ideal of appearance. I can’t speak to how that affects people, since I don’t have that lived experience. It’s not something I would ever willingly submit to, given a choice.

I can, however, fall back on science and say ‘hey, letting people express themselves in how they turn up to work will make your business buzz!’ And also ‘it’s totally nuts to correlate performance with a particular way of dressing’. It’s time, as a society, to drop the pretence of ‘business dress’. I mean, it’s just stuff you put on your body – not a metric for judgement, or a magic performance enhancer.

We don’t need to be penguins; we have an alternative.

Sugata Mitra expresses the idea well in this TED talk. To paraphrase – our entire system of education and business has not been updated since the Victorian era.

“…continuously producing identical people for a machine which no longer exists”

This is a problem – it leads down all kinds of roads. And as Sugata Mitra points out – what’s next? Why are we sticking with this model, now that it no longer applies?

But what about…
There is always a case for meaningful dress in meaningful circumstances. For example, a paramedic absolutely needs to be identifiable instantly as a paramedic from far away in a messy post-crash scene. A tree feller needs protective equipment, which comes only in certain styles and therefore restricts how they can look on the job. Hell, when I was observing penguins I was wearing standard-issue Antarctic field equipment – not my personal choice of awesome outdoor gear.

…but an office worker, a secretary, a brand manager, even a CEO – has no such practical need.

We make up a lot of excuses around why we should all look the same in a business context – ‘perceived risk’, ‘impressions count’, yadda yadda. This actually reinforces the point of this little tale – if we increasingly narrow the scope of how we can express ourselves visually and cognitively in a business context, we narrow the scope of how we are able to solve problems.

In reality, we all work better when we feel comfortable about how we appear, and we all work better when we have some agency about how we go about our work.

Wrapping up – the alternative
For most occupations, we can diversify our visual language around how we look. In this scenario, something as simple as clothing stops being a rigid bond to a particular way of looking and thinking. Instead, we can alter our visual language and open up new, unforeseen avenues to a diverse, fulfilling, relaxed and creative working life, where innovation happens freely because people feel valued and have agency over even one small thing – their favoured appearance.

We organise our labour along lines which benefit the organisation most. In the technical industry, we use prescribed processes, methods and ways of interacting. In customer service, we need predictable hours and have prescribed ways of going about our job. In science? The same.

In a place where required processes dominate, using how we dress as a tool for diversifying our visual language is a small but vital freedom of expression.

Try it. I think you’ll like it.

A final cheeky validation segue
Let me segue to another story here. Some time ago a senior scientist confided that they were not looking forward to visiting Canberra and having to talk to a roomful of people in suits. I said ‘visualise them all as penguins’ – which immediately turned a frown upside down. And offered a glimmer of validation for the wild idea being discussed here.

I hope that next time you walk into your boardroom, or staff meeting, or office cafeteria, you see something that breaks that analogy.

If not, I hope you have to catch a sly giggle as you take your seat.

Ice floe interactive visualisation, take 1

I recently spoke at POLAR2018 about using aerial photography for observing the properties of snow on sea ice. I’d really hoped to present some new work I’d been trying out on estimating local curvature, roughness and other properties from high resolution 3D models of sea ice topography.

Unfortunately I didn’t get all the way there. Firstly, I reprocessed a bunch of data and the results were worse than work I’d done in the past. So back to the drawing board, and the fallback position of explaining a bunch of work we’ve done over the past decade. A PDF of my slides is available via ResearchGate, but preferably wait for the interactive web version to finish – it’ll be more up to date, and have better links and side notes!

I did, however, put together the beginning of a 3D visualisation for sea ice from the surface (using photogrammetric reconstruction) and below (from upward looking sonar). Click and drag below to move/zoom around; and expand the hamburger menu at top left to expose more navigation tools, measuring tools and styling options. Or, click here to open a full page view.

Many thanks to the Antarctic Climate and Ecosystems Cooperative Research Centre for funding the work behind this; and for getting me to Davos.

Drifting sea ice and 3D photogrammetry

3D photogrammetry has been a hobby horse of mine for ages, and I’ve been really excited to watch it grow from an experimental idea [1] into a full-blown industrial tool. This stuff took a really short time to go from research to production. Agisoft Photoscan turned up in 2009 or 2010, and we all went nuts! It is cheap, super effective, and cross-platform. And then along came a bunch of others.

Back to the topic – for my PhD research I was tinkering with the method for a long time, since I had a lot of airborne imagery to play with. I started by hand-rolling Bundler + PMVS, and then my university acquired a Photoscan Pro licence – which made my life a lot simpler!

My question at the time was: how can we apply this to sea ice? Or can we at all?

The answer is yes! Some early thoughts and experiments are outlined here, and below are some results from my doctoral thesis, using imagery captured on a 2012 research voyage (SIPEX II). Firstly, a scene overview, because it looks great:

Next, stacking up elevations against in situ measurements from a 100 m drill-hole line on the ice. The constant offset is a result of less-than-great heighting in the local survey – I focussed heavily on getting horizontal measurements right, at the expense of height. Lesson learned for next time!

And finally, checking that we’re looking good in 3D, using a distributed set of drill holes to validate the heights we get from photogrammetric modelling. All looks good except site 7 – which is likely a transcription error.
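The check itself is tiny to code up. Here’s a minimal sketch, with hypothetical file and column names, which computes the constant offset (bias) and the spread about it:

# Minimal sketch of the drill-hole comparison above. The file and column
# names are hypothetical, not the actual thesis data layout.
import numpy as np
import pandas as pd

df = pd.read_csv("drillhole_vs_model.csv")   # assumed columns: drill_z, model_z

residuals = df["model_z"] - df["drill_z"]
bias = residuals.mean()                      # the constant offset mentioned above
rmse = np.sqrt((residuals ** 2).mean())      # total error, including the bias
spread = residuals.std(ddof=1)               # scatter about the bias

print(f"bias: {bias:.3f} m, RMSE: {rmse:.3f} m, std about bias: {spread:.3f} m")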

How did we manage all this? In 2012 I deployed a robotic total station and a farm of GPS receivers on drifting ice, and used them to make up a Lagrangian reference frame (a fancy term for ‘a reference frame which moves with the ice’) – so we can measure everything in Cartesian (XYZ) coordinates relative to the ice floe, as well as use displacement and rotation observations to translate world coordinates to the local frame and vice versa. Here’s a snapshot:
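At its core, the world-to-floe translation is just a rotation plus an offset. Here’s a minimal 2D sketch with made-up coordinates and angles – the real implementation also carries heights, and uses time series of drift and rotation estimated from the total station and GPS array:

# Minimal 2D sketch of the ‘reference frame which moves with the ice’ idea.
# Coordinates and angles below are made up for illustration.
import numpy as np

def world_to_floe(p_world, floe_origin_world, floe_heading_rad):
    """Rotate and translate a world (projected) coordinate into floe-fixed XY."""
    c, s = np.cos(-floe_heading_rad), np.sin(-floe_heading_rad)
    R = np.array([[c, -s],
                  [s,  c]])                   # rotate by minus the floe heading
    return R @ (np.asarray(p_world) - np.asarray(floe_origin_world))

def floe_to_world(p_floe, floe_origin_world, floe_heading_rad):
    """Inverse transform: floe-fixed XY back to world coordinates."""
    c, s = np.cos(floe_heading_rad), np.sin(floe_heading_rad)
    R = np.array([[c, -s],
                  [s,  c]])
    return R @ np.asarray(p_floe) + np.asarray(floe_origin_world)

# Example: the floe origin has drifted to this world position and rotated 12 degrees
origin = (502340.0, 7652110.0)
heading = np.radians(12.0)
site_world = (502395.0, 7652180.0)
print(world_to_floe(site_world, origin, heading))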

I don’t know if this will ever make it to publication outside my thesis – I think the method should be applied to bigger science questions rather than just saying ‘the method works and we can publish because nobody put Antarctica in the title yet’ – because we know that from other works already (see [2] for just one example).

So what science questions would I ask? Here’s a shortlist:

  • can we use this method to extract ridge shapes and orientations in detail?
  • can we differentiate between a snow dune and a ridge using image + topographic characteristics?

These are hard to answer with lower-density LiDAR – and are really important for improving models of snow depth on sea ice (e.g. [3]).

For most effective deployment, this work really needs to be done alongside a raft of in situ observations – previous experience with big aircraft shows that it is really hard to accurately reference moving things from a ship. That’s a story for beers 🙂

References

[1] Snavely, N., Seitz, S. M., and Szeliski, R.: Photo tourism: exploring photo collections in 3D, ACM Transactions on Graphics (Proc. SIGGRAPH 2006). http://phototour.cs.washington.edu/Photo_Tourism.pdf

[2] Nolan, M., Larsen, C., and Sturm, M.: Mapping snow depth from manned aircraft on landscape scales at centimeter resolution using structure-from-motion photogrammetry, The Cryosphere, 9, 1445-1463, doi:10.5194/tc-9-1445-2015, 2015

[3] Steer, A., et al., Estimating small-scale snow depth and ice thickness from total freeboard for East Antarctic sea ice. Deep-Sea Res. II (2016), http://dx.doi.org/10.1016/j.dsr2.2016.04.025

Data sources

https://data.aad.gov.au/metadata/records/SIPEX_II_RAPPLS

Acknowledgements

Dr Jan Lieser (University of Tasmania) instigated the project which collected the imagery used here, let me propose all kinds of wild ideas for it, and was instrumental in getting my PhD done. Dr Christopher Watson (University of Tasmania) provided invaluable advice on surveying data collection, played a massive part in my education on geodesy and surveying, and was also instrumental in getting my PhD done. Dr Petra Heil and Dr Robert Massom (Australian Antarctic Division) trusted me to run logistics, operate a brand-new surveying operation (never done before in the AAD program) and collect the right data on a multi-million dollar investment. The AAD engineering team got all the instruments talking to each other and battled aircraft certification engineers to get it all in the air. Helicopter Resources provided safe and reliable air transport for instruments and operators; the management and ship’s crew aboard RSV Aurora Australis kept everyone safe, relatively happy, and didn’t get too grumpy when I pushed the operational boundaries too far on the ice; and Walch Optics (Hobart) worked hard to make sure the total station exercise went smoothly.


The LiDAR uncertainty budget II: computing uncertainties

In part 1, we looked at one way that a LiDAR point is created. Just to recap: we have 14 parameters (for the 2D scanner used in this example), each with its own uncertainty. Now we work out how to determine the geolocation uncertainty of our points.

First, let’s talk about what those uncertainties are. The source that comes to mind first is GPS positioning uncertainty. Using a dual-frequency GPS receiver gives sub-decimetre positioning after post-processing. Merging those positions with observations from the IMU constrains them even further.

For a quick mental picture of how this works, consider that the navigation unit collects a GPS fix every half a second. The inertial measurement unit – whose gyroscopes and accelerometers are used to derive pitch, roll and heading – samples 250 times a second. It also keeps track of how far it thinks it has moved since the last GPS fix, and the rate of motion puts a limit on how far it is possible to move between fixes. The navigation unit compares the two data sources; if a GPS fix is wildly different from what the inertial solution expects, a correction is applied and we carry on.
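As a cartoon of that blending step (not the actual navigation filter, which is a full Kalman filter over position, velocity and attitude), weighting two estimates by their uncertainties looks something like this, with made-up numbers:

# Cartoon of GPS + inertial blending in one dimension. Numbers are made up;
# the real system estimates position, velocity and attitude together.

def blend(predicted, predicted_var, gps, gps_var):
    """Weight the inertial prediction and the GPS fix by their variances."""
    k = predicted_var / (predicted_var + gps_var)   # gain: trust GPS more when it is tighter
    fused = predicted + k * (gps - predicted)
    fused_var = (1.0 - k) * predicted_var
    return fused, fused_var

# Dead reckoning says we are at 100.40 m (1-sigma 0.30 m);
# the new GPS fix says 100.10 m (1-sigma 0.05 m).
print(blend(100.40, 0.30 ** 2, 100.10, 0.05 ** 2))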

So we get pretty accurate positions (~5 cm).

But is that the uncertainty of our point positions? Nope, there’s more. The laser instrument comes with specifications about how accurate its laser ranging is, and how accurate its angular encoder is.

Even more, the navigation device has specifications about how accurate its accelerometers are – and all of these uncertainties contribute! How?

Variance-covariance propagation to the rescue

Glennie [1] and Schaer [2] used variance-covariance propagation to estimate the uncertainty in geolocation of LiDAR points. This sounds wildly complex, but at its root is a simple idea:

uncert(thing1 + thing2) = \sqrt{uncert(thing1)^2 + uncert(thing2)^2}

See that? We figured out the uncertainty of a thing made by adding thing1 and thing2, by knowing what the uncertainties of thing1 and thing2 are.
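In made-up numbers, for two independent error sources:

# Combining two (made-up) independent 1-sigma uncertainties in quadrature.
import math

gps_uncert = 0.05     # m, platform position
range_uncert = 0.02   # m, laser ranging
print(math.sqrt(gps_uncert ** 2 + range_uncert ** 2))   # ~0.054 m – the larger term dominates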

Going from simple addition to a few matrix multiplications was a quantum leap, handled neatly by symbolic math toolboxes – but only after I nearly quit my PhD trying to write out the uncertainty equation by hand.

Here is the uncertainty equation for LiDAR points, where U is the uncertainty:

U = F C_u F^T

What!! Is that all? I’m joking, right?

Nearly. F is a 14-element vector containing the partial derivatives of the georeferencing equation with respect to each of its parameters (the so-called Jacobians). C_u is a 14 x 14 matrix with the uncertainties associated with each parameter along its diagonal.

Writing this set of equations out by hand was understandably a nightmare, but more clever folks than myself have achieved it – and while I certainly learned a lot about linear algebra in this process, I took advice from an old astronomer and used a symbolic maths toolbox to derive the Jacobians.

…which meant that the work actually got done, and I could move on with research!
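If you want to see the mechanics without the full 14-parameter equation, here is a heavily simplified sketch: a toy 2D georeferencing function with made-up uncertainties, using SymPy to derive the Jacobian and NumPy for the F C_u F^T step. It is not the real system model – see the repository linked below for that.

# Toy illustration of U = F C_u F^T with a symbolically derived Jacobian.
# This is NOT the full 14-parameter LiDAR georeferencing equation.
import numpy as np
import sympy as sp

x0, y0, heading, rng, scan = sp.symbols("x0 y0 heading rng scan")

# Toy 2D georeferencing: platform position plus a range rotated by heading + scan angle
px = x0 + rng * sp.cos(heading + scan)
py = y0 + rng * sp.sin(heading + scan)

params = [x0, y0, heading, rng, scan]
F = sp.Matrix([px, py]).jacobian(params)      # 2 x 5 Jacobian, derived symbolically

# Made-up 1-sigma uncertainties: 5 cm positions, 0.05 deg heading,
# 2 cm ranging, 0.01 deg angular encoder
sigmas = np.array([0.05, 0.05, np.radians(0.05), 0.02, np.radians(0.01)])
C_u = np.diag(sigmas ** 2)                    # diagonal variance-covariance matrix

# Evaluate the Jacobian for one observation and propagate
values = {x0: 0.0, y0: 0.0, heading: 0.3, rng: 350.0, scan: np.radians(20.0)}
F_num = np.array(F.subs(values).evalf(), dtype=float)
U = F_num @ C_u @ F_num.T

print(np.sqrt(np.diag(U)))   # 1-sigma point uncertainty in x and y (metres)

Even in this toy, the heading uncertainty multiplied by a 350 m range dominates the result – which mirrors the behaviour described in the next section.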

Now my brain is fully exploded – what do uncertainties look like?

Glennie [1] and Schaer [2] both report that at some point, angular motion uncertainty overtakes GPS position uncertainty as the primary source of doubt about where a point is. Fortunately I found the same thing. Given the inherent noise of the system I was using, this occurred pretty quickly. In the Kalman filter which integrates GPS and IMU observations, a jittery IMU is assigned a higher uncertainty at each epoch. This makes sense, but also means that angular uncertainties need to be minimised (for example, by flying instruments in a very quiet aircraft or an electric-powered UAS).

I made the following map years ago to check that I was getting georeferencing right, and also getting uncertainty estimates working properly. 

It could be prettier, but you see how the components all behave – across-track and ‘up’ uncertainties are dominated by the angular component not far off nadir. Along-track uncertainties are more consistent across-track, because the angular measurement components (aircraft pitch and a bit of yaw) are less variable.

The sample below shows LiDAR point elevation uncertainties (relative to an ITRF08 ellipsoid) during level flight over sea ice. At instrument nadir, height uncertainty is more or less equivalent to instrument positioning uncertainty. Increasing uncertainties toward swath edges are a function of angular measurement uncertainty.

But normally LiDAR surveys come with an averaged accuracy level. Why bother?

i. why I bothered:

In a commercial survey the accuracy figure is determined mostly by comparing LiDAR points with ground control data – and the more ground control there is, the better (you have more comparison points and can make a stronger estimate of the survey accuracy).

Over sea ice this is impossible. I was also using these points as input for an empirical model which attempts to estimate sea-ice thickness. As such, I needed to also propagate uncertainties from my LiDAR points through the model to sea-ice thickness estimates.

In other words, I didn’t want to guess what the uncertainties in my thickness estimates were. As far as practicable, I needed to know the input uncertainty for each thickness estimate – so for every single point. It’s nonsensical to suggest that every observation, and therefore every thickness estimate, comes with the same level of uncertainty.

Here is another map from my thesis. It’s pretty dense, but shows the process in action:

The top panel is LiDAR ‘height above sea level’ for sea ice. Orange points are ‘sea level reference’ markers, and the grey patch highlights an intensive survey plot. The second panel is the uncertainty associated with each height measurement. In panel three we see modelled sea ice thickness (I’ll write another post about that later), and the final panel shows the uncertainty associated with each thickness estimate. Thickness uncertainties are greater than LiDAR height uncertainties because we’re also accounting for uncertainties in each of the other model parameters (LiDAR elevations are just one). So, when I get to publishing sea-ice thickness estimates, I can put really well-made error bars around them!

ii. why you, as a commercial surveyor, an agency contracting a LiDAR survey, or a LiDAR end user, should bother:

The computation of uncertainties is straightforward and quick once the initial figuring of the Jacobians is done – and these only need to be recomputed when you change your instrument configuration. HeliMap (Switzerland) do it on-the-fly and compute a survey quality metric (see Schaer et al, 2007 [2]) which allows them to repeat any ‘out of tolerance’ areas before they land. Getting an aircraft in the air is hard, and keeping it there for an extra half an hour is easy – so this capability is really useful in terms of minimising costs to both contracting agencies and surveyors. This analysis in conjunction with my earlier post on ‘where exactly to measure in LiDAR heights‘ shows you where you can assign greater confidence to a set of LiDAR points.

It’s also a great way to answer some questions – for example, are surveyors flying over GCPs at nadir, and therefore failing to meet accuracy requirements off-nadir? (This is paranoid ‘all the world is evil’ me speaking – I’m sure surveyors are aware of this stuff and collect off-nadir ground control matches as well.) Are critical parts of a survey being captured off-nadir, when it would be really useful to get the best possible data over them? (This has implications for flight planning.) As a surveyor, this type of thinking will give you fewer ‘go back and repeat’ jobs – and as a contracting agency, you might spend a bit more on modified flight plans, but not a lot more, to get really great data instead of signing off and then getting grief from end users.

As an end user of LiDAR products, if you’re looking for data quality thresholds – for example ‘points with noise < N m over a flat surface’ – this type of analysis will help you out. I’ve also talked to a number of end users who wonder about noisy flight overlaps, and why some data don’t appear to be well behaved. Again, having a quality metric around each point will help an end user determine which data are useful, and which should be left alone.

Summarising

I certainly stand on the shoulders of giants here, and still incur brain-melting when I try to come at these concepts from first principles (linear algebra is still hard for me!). However, the idea of being able to attach an a priori quality metric to every point is, in my mind, really useful.

I don’t have much contact with commercial vendors, so I can only say ‘this is a really great idea – do some maths and make your LiDAR life easier!’.

I implemented this work in MATLAB, and you can find it here:

https://github.com/adamsteer/LiDAR-georeference

With some good fortune it will be fully re-implemented in Python sometime this year. Feel free to clone the repository and go for it. Community efforts rock!

And again, refer to these works. I’ll see if I can find any more recent updates, and would really appreciate hearing about any recent work on this stuff:

[1] Glennie, C. (2007). Rigorous 3D error analysis of kinematic scanning LIDAR systems. Journal of Applied Geodesy, 1, 147–157. http://doi.org/10.1515/JAG.2007. (accessed 19 January 2017)

[2] Schaer, P., Skaloud, J., Landtwing, S., & Legat, K. (2007). Accuracy estimation for laser point cloud including scanning geometry. In Mobile Mapping Symposium 2007, Padova. (accessed 19 January 2017)

ACE CRC, Airborne LiDAR and Antarctic sea ice

Between late June and late August 2015 I worked with the Antarctic Climate and Ecosystems Co-operative Research Centre (ACE CRC) to tidy up some long running loose ends with an airborne LiDAR project. This project is close to home – my PhD revolves around cracking some of the larger nuts associated with getting a science result from survey flights undertaken between 2007 and 2012. However, I’ve worked on one small subset of data – and the CRC was in need of a way to unlock and use the rest.

Many technical documents exist for the airborne LiDAR system, but the ‘glue’ to tie them together was lacking. In six weeks I provided exactly that. The project now has a strong set of documentation covering the evolution of the system, how to navigate the myriad steps involved in turning raw logfiles from laser scanners, navigation instruments and GPS observations into meaningful data, and how to interpret the data that arise from the system. After providing a ‘priority list’ of flight data to work on, ACE CRC also took advantage of my experience to churn out post-processed GPS and combined GPS + inertial trajectories for those flights. The CRC also now has the tools to estimate point uncertainty and reprocess any flights from the ground up – should they wish to.

All of which means ACE CRC are in a position to make meaningful science from the current set of airborne LiDAR observations over East Antarctic sea ice.

Some of this – a part of my PhD work and a small part of the overall project – is shown here. A first-cut of sea ice thickness estimates using airborne LiDAR elevations, empirical models for snow depth, and a model for ice thickness based on the assumption that ice, snow and seawater all exist in hydrostatic equilibrium.

figure showing sea ice draft from LiDAR points
Sea ice draft estimated from airborne LiDAR elevations, an empirical model for snow, and a hydrostatic model for ice thickness from elevations and snow depth. This image shows a model of the underside of sea ice – blue and green points are elevations, purple to yellow points visible here are ice draft estimates. Why are some drafts ‘positive’? That’s a question currently being worked on…
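The hydrostatic step itself is compact. Below is a minimal sketch with placeholder densities and inputs (the real processing uses values appropriate to the region and season), turning a total freeboard and a snow depth into thickness and draft, and attaching an uncertainty to the result using the same quadrature idea as the uncertainty post above:

# Minimal sketch of the hydrostatic ‘elevation + snow -> thickness’ step.
# Densities and inputs are illustrative placeholders only.
import math

RHO_WATER = 1024.0   # kg/m^3, seawater (assumed)
RHO_ICE = 900.0      # kg/m^3, sea ice (assumed)
RHO_SNOW = 330.0     # kg/m^3, snow (assumed)

def ice_thickness(total_freeboard_m, snow_depth_m):
    """Ice thickness from total freeboard (snow surface above sea level) and snow
    depth, assuming ice, snow and seawater are in hydrostatic equilibrium."""
    return (RHO_WATER * total_freeboard_m
            - (RHO_WATER - RHO_SNOW) * snow_depth_m) / (RHO_WATER - RHO_ICE)

def ice_draft(total_freeboard_m, snow_depth_m):
    """Draft = thickness minus the ice freeboard (total freeboard minus snow depth)."""
    return ice_thickness(total_freeboard_m, snow_depth_m) - (total_freeboard_m - snow_depth_m)

def thickness_uncertainty(freeboard_uncert_m, snow_uncert_m):
    """First-order propagation of freeboard and snow depth uncertainty only
    (density uncertainties are ignored here for brevity)."""
    dT_dF = RHO_WATER / (RHO_WATER - RHO_ICE)
    dT_dS = -(RHO_WATER - RHO_SNOW) / (RHO_WATER - RHO_ICE)
    return math.sqrt((dT_dF * freeboard_uncert_m) ** 2 + (dT_dS * snow_uncert_m) ** 2)

# Example: 0.45 m total freeboard, 0.20 m of snow, with 0.05 m and 0.03 m uncertainties
print(ice_thickness(0.45, 0.20), ice_draft(0.45, 0.20), thickness_uncertainty(0.05, 0.03))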

A short visual history of shipping access to Australian Antarctic stations

In late 2014 I was contracted by the Antarctic Climate and Ecosystems Cooperative Research Centre to analyse Antarctic shipping patterns from 2000 to 2014. The aim was to extend a planning report first published in 2008, and provide deeper insights into shipping patterns in order to plan for future shipping seasons. Obvious shipping routes arise as a combination of crew experience, average sea conditions, sea ice conditions and logistical constraints. Shipping is expensive, so great effort goes into minimising ship time required for a given task. Days can be saved or lost by the choice of shipping route. Hugging the coast in transit between stations is clearly the shortest route – but seasonally the most risky due to the presence of sea ice.

So what routes are being used most often? And do they work?

Mining data from shipping reports and ship GPS traces, I was able to map where and when ships had difficulty accessing stations. While plenty of maps exist showing ship tracks, there has never been any analysis of where and why ships had difficulty getting to stations. The map presented below is one of the first.

It shows a ‘heatmap’ – a frequency count of hourly ship positions per 25 km square grid cell. Overlaid on the map are labelled round indicators of ship ‘stuckness’ due to sea ice conditions (as opposed to delay for operational purposes), and squares where ship-to-shore helicopter access was forced by ice conditions.

This map says nothing about seasonality, or the times of year which are most risky for ships. It does show a clear preference for routes to and from stations, in particular Casey and Davis. It also shows that generally ships transit between the two stations by heading north to skirt sea ice, or hugging the coast – which is clearly troublesome at times. For the most part, ships get to stations and back with few dramas.
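The gridding behind a map like this is simple enough to sketch. Here’s a hypothetical version – the file and column names are assumptions, and positions are taken to be in a projected (metre-based) polar coordinate system:

# Count hourly ship positions per 25 km cell. File and column names are
# hypothetical; the real analysis worked from voyage reports and GPS traces.
import numpy as np
import pandas as pd

CELL = 25000.0   # 25 km cells, in projected metres

positions = pd.read_csv("hourly_ship_positions.csv")   # assumed columns: x, y

x_edges = np.arange(positions["x"].min(), positions["x"].max() + CELL, CELL)
y_edges = np.arange(positions["y"].min(), positions["y"].max() + CELL, CELL)

counts, _, _ = np.histogram2d(positions["x"], positions["y"], bins=[x_edges, y_edges])
# ‘counts’ is the heatmap: hours of ship presence per 25 km x 25 km cell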

As a final note, a colleague pointed out that this is also a map of bias in our knowledge of the Southern Ocean. That’s a much longer story…