Business penguins

I’ve been to Antarctica four times, and seen a lot of penguins. I’ve also worked in various places for more than a couple of decades now, and seen a lot of penguins. I am also trained in behavioural psychology – so I observe things about behaviour and see patterns. I can’t not!

Anyway.

One of the most amusing types of Antarctic penguins is the Adélie.

They have some really interesting characteristics. They travel in packs, and when they pop out onto the sea ice they are the boss of everything – roaming around like they own the place; all dressed up and no place to go.

This struck me as exactly parallel to, say, walking around the centre of Sydney (or Melbourne) on a Friday night just after 5; or cruising an airport business lounge.

That isn’t the whole story either. As well as their besuited swagger, Adélie penguins have another really interesting characteristic. In 2008 and 2009 I went to Davis Station, and assisted with station resupply along with a bunch of other stuff. I got to watch a lot of penguins coming in from the ocean to breed.

On the undisturbed fast ice, the penguins ruled the place. However, when a mob of Adélies got to a new feature – for example a line of snow piled up along the sea ice after a snowplough had cleared a path – they all stopped.

Even though the penguins could easily have jumped or slid across over this new barrier and carried on, they stopped.

They walked up and down along the line.

They discussed the matter internally.

Eventually, after many meetings, one penguin would dramatically cross the barrier.

This led to a most fantastic observation. Every other penguin would take the cue and cross the barrier – but not where they stood. They would all go as close as possible to where the innovative penguin crossed, and cross the barrier there.

Even if it meant a much longer walk, or ending up someplace less than ideal for the penguin mob, every penguin stuck to the program.

Having observed the behaviour of people in work environments for a long time, it struck me that this same pattern plays out across the business world. We faff around and try as hard as we can to do anything but cross some menial but unusual barrier; then when one innovator makes the leap, we all follow!

Here are some real Adélie penguins, pretending to own my surveying equipment:

Penguins, trying to work out what on earth this strange three-legged creature is…

Visual language and penguin innovation patterns
I want to draw a parallel here to the work of foundational cognitive researchers who proposed that our cognitive world is constrained by the language we use to describe it. In other words, if we limit the number of ways we can express an idea in language, we limit the number of ideas we can have. I first came across this concept (linguistic relativity) studying linguistics a long time ago. This review seems like a good way to learn about the idea. It also specifically calls out perception – as if the way we frame language interacts with the way we are able to perceive things.

Without any reference to actual research, I want to extend this to visual language. In business, we still hear about ways to dress; ways to express a visual lexicon of what a ‘business-like’ appearance is in dress, the styling of offices, everything. Countless business articles talk about what to wear, nearly all ending with more or less the same pattern (here, here). We also make up science about it without considering the fact that, well, all our cultural messaging says we should be penguins. Even if we don’t impose ‘suits’, we make recommendations to homogenise our visual language around roles.

I’m proposing here that we limit how we can do business by restricting our visual language around what we think appears business-like. In particular the uniformity of business ‘dress’. In effect, we design a limit around how we can think about problems. So we end up solving our own issues in the same way that Adélie penguins do.

In the familiar, we strut around like the boss of everything.

Faced with a new obstacle, even though seemingly simple, we look it up and down, have endless meetings, and create a lot of extra work for ourselves.

When someone finally makes the leap, we all go to the same point and follow.

…which once more ties the analogy neatly to penguins! Even movie studios agree – if you’ve ever seen Happy Feet, it’s an entire film about this premise.

So diversify!
By designing a restrictive visual language, and a restrictive set of social mores around how to dress for business, we limit our ability to do business well. So we should do something different! Sure, the usual rules of social engagement apply – as in ‘turn up to a workplace in clothes that are clean and relatively fresh’ – but really, we’re all grown-ups. We can work out what works for us and what doesn’t.

I turn up usually in a t-shirt and my faithful cactus pants. Because my identity is strongly tied to outdoor activities, I generally work and feel best in stuff which works for me. I’m ready to work, I’m comfortable, confident and don’t feel like I need to get home to get this goddamn tie off!

…and then I have a supply of outdoor gear when my kit gets one too many coffee spills for work (which also has sustainability implications – I buy vastly fewer things and use them more).

But there’s more. Before this is dismissed as a smackdown on suits (which it is… c’mon) – it applies equally to anyplace where visual and cultural homogeneity in an organisation is dominant. It might not be suits; it might be ‘oh, you need to only drink red organic IPA made from hops fertilised specifically with cowshit which magic mushrooms grow on’. Or ‘you can wear what you like, but we’re gonna judge your selection if it’s not brand Y’ – which brings up the next section.

Wider implications – diversity of appearance, diversity of thinking, diversity of life
In the ten years I’ve been meaning to write out this janky anecdote, I’ve realised that a fun little dump on the concept of ‘dressing for business’ (I mean sheesh, who puts a noose around themselves then heads off to work. Symbolism much?) is just a side note to the main story.

Penguins are programmed by millennia of evolution to operate the way they do in order to survive. Their actions at a pile of snow reflect their actions at an ice floe edge – a test subject is sent into the water, if they’re not eaten by a leopard seal then everyone else goes in near the same (presumably still safe) spot.

Humans don’t need to act that way – but we have designed our society to do so. It seems completely crazy that we design a visual language which limits our ability to create; to innovate; and to feel comfortable and confident – in the name of a really limited view of appearance. Sure, you don’t want people turning up to meetings in their underpants; but it’s not a black and white scenario.

Even as an extremely privileged white man I’ve been judged on appearance for work over my lifetime. Now, as an extremely privileged white man with 20 years’ experience and a PhD, I don’t want to or need to work for you if you think my apparel is what makes me employable.

Women, people of colour, and non-binary folk all have a much harder time – and often don’t get that choice, so they spend entire lives trying to squish themselves into some boxed-up ideal of appearance. I can’t speak to how that affects people, since I don’t have that lived experience. It’s not something I would ever willingly submit to given a choice.

I can, however, fall back on science and say ‘hey, letting people express themselves by way of how they turn up to work will make your business buzz!’. And also ‘it’s totally nuts to correlate performance with a particular way of dressing’. It’s time, as a society, to drop the pretence of ‘business dress’. I mean, it’s just stuff you put on your body – not a metric for judgement, or a magic performance enhancer.

We don’t need to be penguins, we have an alternative.

Sugata Mitra expresses the idea well in this TED talk. To paraphrase – our entire system of education and business has not been updated since the Victorian era.

“…continuously producing identical people for a machine which no longer exists”

This is a problem – it leads down all kinds of roads. And as Sugata Mitra points out – what’s next? Why are we sticking with this model, now that it no longer applies?

But what about…
There is always a case for meaningful dress in meaningful circumstances. For example, a paramedic absolutely needs to be identifiable instantly as a paramedic from far away in a messy post-crash scene. A tree feller needs protective equipment, which comes only in certain styles and therefore restricts how they can look on the job. Hell, when I was observing penguins I was wearing standard-issue Antarctic field equipment – not my personal choice of awesome outdoor gear.

…but an office worker, a secretary, a brand manager, even a CEO – has no such practical need.

We make a lot of excuses up around why we should all look the same in a business context – ‘perceived risk’; ‘impression counts’; yadda yadda. This actually reinforces the point of this little tale – if we increasingly narrow the scope of how we can express ourselves visually and cognitively in a business context, we narrow the scope of how we are able to solve problems.

In reality, we all work better when we feel comfortable about how we appear, and we all work better when we have some agency about how we go about our work.

Wrapping up – the alternative
For most occupations, we can diversify our visual language around how we look. In this scenario we free something as simple as clothing from being a rigid bond to a particular way of looking and thinking. Instead, we can alter our visual language and open up new, unforeseen avenues to a diverse, fulfilling, relaxed and creative working life; where innovation happens freely because people feel valued and have agency over even one small thing – their favoured appearance.

We organise our labour along lines which benefit the organisation most. In the technical industry, we use prescribed processes and methods and ways of interacting. In customer service, we need predictable hours and have prescribed ways of going about our job. In science? The same.

In a place where required processes dominate, using how we dress as a tool for diversifying our visual language is a small but vital freedom of expression.

Try it. I think you’ll like it.

A final cheeky validation segue
Let me segue to another story here. Some time ago a senior scientist confided that they were not looking forward to visiting Canberra and having to talk to a roomful of people in suits. I said ‘visualise them all as penguins’ – which immediately turned a frown upside down. And offered a glimmer of validation for the wild idea being discussed here.

I hope that next time you walk into your boardroom, or staff meeting, or office cafeteria, you see something that breaks that analogy.

If not, I hope you have to catch a sly giggle as you take your seat.

Ice floe interactive visualisation, take 1

I recently spoke at POLAR2018 about using aerial photography for observing the properties of snow on sea ice. I’d really hoped to present some new work I’d been trying out on estimating local curvature, roughness and other properties from high resolution 3D models of sea ice topography.

Unfortunately I didn’t get all the way there. Firstly, I reprocessed a bunch of data and the results were worse than work I’d done in the past. So back to the drawing board, and the fallback position of explaining a bunch of work we’ve done over the past decade. A PDF of my slides is available via ResearchGate, but preferably wait for the interactive web version to be finished – it’ll be more up to date, and have better links and side notes!

I did, however, put together the beginning of a 3D visualisation for sea ice from the surface (using photogrammetric reconstruction) and below (from upward looking sonar). Click and drag below to move/zoom around; and expand the hamburger menu at top left to expose more navigation tools, measuring tools and styling options. Or, click here to open a full page view.

Many thanks to the Antarctic Climate and Ecosystems Cooperative Research Centre for funding the work behind this; and for getting me to Davos.

Software carpentry – achievement unlocked

Last week I became a certified Carpentries instructor. This means that I have done some training in, and demonstrated some skill at, teaching grown-ups how to write programs and wrangle data.

It also means I’ve joined a worldwide community of people who are dedicated to helping other people avoid mistakes they’ve made. Which is an excellent goal! If you’re a scientist, an analyst of any sort, or even someone needing to wrestle with Excel spreadsheets which are getting out of hand, I’d urge you to check the materials out. There are streams for Software Carpentry and Data Carpentry, more or less explained here.

I’m excited! Perhaps I’ll see you at a workshop someplace – or even organise one for you!

16 bit to 8 bit RGB colours with PDAL and Python

Sometimes LAS files have RGB values stored as 16 bit colours. This currently upsets the Potree viewer, and maybe some other things. The following recipe using Python and PDAL ‘eightbitifies’ colours. It also compresses incoming LAS files to LAZ.

PDAL has a neat filters.python option, which passes every incoming point into a python function. Flexible huh?

First, a Python function to scale colours:

import numpy as np

def eightbitify(colour):
    # a 16 bit channel spans 0-65535; integer division by 256
    # (equivalent to dropping the low byte) maps it onto the
    # 8 bit range 0-255
    return colour // 256

def scale_colour(ins, outs):
    outs['Red'] = eightbitify(ins['Red'])
    outs['Green'] = eightbitify(ins['Green'])
    outs['Blue'] = eightbitify(ins['Blue'])
    return True

ins is a numpy array of incoming points from PDAL’s reader. PDAL dimensions define what’s in there – so here I’ve asked filters.python to read Red, Green, and Blue into numpy arrays to work on. The entire set of data will be loaded up, making numpy array operations useful and fast (if you have the memory). PDAL’s --stream mode isn’t enabled for filters.python – if you are memory constrained, think about filters.splitter to make manageable chunks first. This actually makes sense – PDAL has no way of knowing what you’re going to write into your python function!
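If you want to sanity-check the scaling logic before pointing PDAL at real data, it can be exercised standalone on a few synthetic values. This is a minimal sketch – to_eight_bit is a hypothetical helper, not part of the pipeline, and the division by 256 assumes the usual convention that a 16 bit colour is an 8 bit value shifted up a byte:

```python
import numpy as np

# hypothetical standalone check of 16 bit -> 8 bit colour scaling
def to_eight_bit(colour):
    # a 16 bit channel spans 0-65535; integer division by 256
    # (dropping the low byte) maps it onto 0-255
    return colour // 256

channel = np.array([0, 256, 32768, 65535], dtype=np.uint16)
scaled = to_eight_bit(channel)  # now in the 0-255 range
```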

Next, construct a JSON pipeline to run the conversion:

{
    "pipeline": [
        {
            "type" : "readers.las",
            "filename" : "file.las"
        },
        {
            "type" : "filters.python",
            "script": "/opt/data/scalecolour.py",
            "function": "scale_colour",
            "module": "scalecolour"
        },
        {
            "type" : "writers.las",
            "filename" : "outfile.laz"
        }
    ]
}


…and finally invoke PDAL to make it go. Here it’s wrapped in a shell script to process a bunch of data. I’ll replace the shell loop with a python function once it’s written.

for f in lasfiles/*.las;
    do
    #echo $f
    fileout=$(basename $f ".las")
    #echo $fileout
    docker run -it -v /set/your/working/path:/opt/data \
                    pdal/pdal pdal \
                    pipeline /opt/data/scale_colour.json \
                    --readers.las.filename=/opt/data/${f} \
                    --writers.las.filename=/opt/data/lasfiles_8bit/${fileout}.laz

    done

I ran this using the pdal/pdal:latest docker image:

docker pull pdal/pdal

A geospatial tinkerer’s feast: FOSS4G 2017

A random Boston photo – city skyline with a public fireplace. Awesome!

Between 11 and 19 August 2017 I was in Boston (USA) for the OSGeo Foundation’s FOSS4G conference. A one-sentence summary of the event? It’s like a cornucopia of open source geospatial ideas, applications and inspirations! The event has variously been compared to Disneyland, summer camp, and a few other things. It’s my second round of FOSS4G and I have only this to add: most of my working life right now is defined by things I learned at FOSS4G 2016 – and my experience this year has compounded/grown/inspired even wilder ideas to implement. It’s going to absolutely consume my waking/working hours.
The event really is that good!

My attendance was funded by my employer (Australia’s National Computational Infrastructure), on the basis of a talk on point cloud data services using open standards. This makes my life hard, because NCI has fingers in many geospatial pies, and as one human I need to cover my own domain expertise plus things of wider organisational interest. However, it’s OK and I am very grateful to be able to travel across the world to talk shop. I hope I do a reasonable job of representing my workplace as well!

The beauty of going to the conference is that I can look at the program and try to find people to talk to face to face, or representatives from their organisations. I also get to survey the scene and figure out which talks I really need to watch later. Talks are all recorded, and made freely available later (I’m waiting eagerly for this year’s crop to arrive).

So let’s dive into what floated my boat. Please bear in mind that, in general, the standard of ideas presented at FOSS4G is really high! So don’t be disappointed if you didn’t make this list – it was really hard to filter what to see, and my list of catch-up videos to watch is long.

Things I got really excited about

Workshops.

I played with fundamental spatial algorithms in Python; I was comforted in my old age using GDAL on the command line, bash scripts and awk; my mind was blown using geonotebooks and GeoPySpark; and I got to play with DigitalGlobe’s GBDX tools for an afternoon. Unfortunately this conflicted with LiDAR processing in GRASS, and I will definitely step through that one when I can. I really also wanted to do one on cartography, but that’s too far out of my day job for work to pay for.

In short, the workshops were amazing, educational, and delivered by people with deep expertise. My own workshop proposal did not get up, but hey – I am an OSgeo minnow compared to the leviathans I got to go and learn from.

So workshops – yes – do. As many as you can! These are all awesome.

Talks and talks and more talks

Here’s where things got really hard – I made a list of every talk I went to, what I took away, and what I could equally have gone to for work.  It’s five pages long, I’m not going to repeat it. Here is the highlight reel – what sticks in my head a week after the fact, in no particular order:

Paul Ramsey’s keynote on the economies of free and open source software. I really liked it, because I could see some new ways of thinking about how I interact with the OSGeo ecosystem in a productive way. I’m a trader in the attention economy, and dive a bit into the giving economy – and I’m feeling a lot less guilty about not being able to write C++ or contribute financially. However, the cash part is still important – while software is given away, it is totally fair for its creators to expect not to be taken advantage of.

Maria Aria de Reyna on diversity in tech – this is really important and applies to all our lives. The more we talk about diversity the harder it is for bigotry to take root. Everywhere. In tech, walking down the street free of harassment, going shopping without raising eyebrows, even taking a piss in the toilet that is right for you – many of these things are taken for granted by some of us, but needlessly difficult for others. Even though the FOSS4G community could do better, I contrast it with an AWS summit I went to recently, and… well… that community has more work to do than the OSgeo ecosystem.

Connor Manning on majicking with point clouds. Seriously. Majicking. There is no other word for it. In this same ecosystem, talks on PDAL by Brad Chambers and Michael Smith were also very clear, digestible and useful in my daily life. I have a long list of things to try…

Planet Labs – I went to see two of their talks – amazing to see so much done with relatively simple tools: GDAL, Python, stuff literally glued together as they go. And it works! Their successful operational model is a massive, massive thumbs-up to the OSGeo ecosystem.

Matt Lammers on visualising weather in CesiumJS. This was really awesome – I tracked Matt down to talk shop, since NCI have similar data and we are always after ways to make it engaging. I have an even longer list of stuff to try…

Helena Mitasova on viewsheds, visual complexity and other higher-order products using the GRASS GIS ecosystem. I really liked how her group is working on the type of ideas that marketers or social scientists think about – I’m always way too far down in the weeds, and this talk filled me with ideas.

Claire Porter on the ArcticDEM project. I appreciated her really down to earth presentation style and obviously deep knowledge – and it’s also an amazing project!

To finish the week off, Maria Aria de Reyna on cats. Or was it metadata? Metacats? But yes – the point: making data infrastructure useful for everybody. I’m on this bandwagon a lot, but nowhere near as succinctly, expertly or cat-ly. Why do all this OSGeo stuff in the first place if we can’t find and use data, right?

…and more. Tom Holderness on PetaBencana, Dan Joseph on DIY drone mapping, Howard Butler on the epic and seminal Proj.4, and… well… almost everything. At high risk of repeating myself: I’m really waiting for the videos to turn up so I can go through my review list – there were many talks I could not get to in person!

Also, at some point I’ll go through and add links to all the talks. When all the videos turn up…

A comment on diversity

Since I am really not an expert on diversity, and it came up in the conference with a bathroom-queue tweet, I asked Claire Trenham for her two cents on diversity in tech. She was formerly my boss, and we talked a lot about, basically, the importance of not being an asshat. Her comments reflected a lot about existing efforts within the OSGeo community and how changing the world can be non-obvious. To sum up her thoughts:

Code of conduct, and code of conduct again. Her worst experiences were at conference dinners – which doesn’t mean we should ban alcohol, just reinforce the code of conduct, and again. She rated the FOSS4G code of conduct highly – and had the following thought to digest on its application (I quote):

But, realistically, could you be sure that if the proportion of gender and racial minorities rose, they would get equal speaking time during questions? Could you be sure that none of the guys that currently seem like great guys wouldn’t subconsciously switch to mansplaining if there were more women in the room, so it goes from being a room of equals to a room containing people who need to be ‘taught’ and ‘enabled’?

This is really important. From my point of view I didn’t see this happening at FOSS4G – and wonder – as an OSgeo community member and FOSS4G attendee would I be able to call any such behaviour out in a way which was respectful? Do I fall into those patterns myself? How can I do better next time? What are my internal biases, how do they manifest, how can I expose them and break them down?

Remote attendance – some amazing potential contributors skip the event just because they can’t get there, or don’t feel comfortable in a conference environment. FOSS4G does an amazing job of making content accessible. Is properly interactive remote attendance a possibility? (Conference organisers, please educate here! I’m sure it’s been investigated…)

Equity of access – for people who can make the conference, what barriers exist? Cost? (For example, is the travel grant enough to get people from, say, Africa to the US?) Childcare? Would offering *good* childcare be an enabler for parents who may or may not have immediate family to help out?

Stuff to think about going forward. Having never been a conference organiser, I suspect the last two points may be really hard to implement, and at some point a compromise is needed – but could we do it as a community? I’d love to hear your thoughts.

…and I really, really hope to make it to Dar Es Salaam – do I want to hear about solving African and global issues, African style? oh hell yeah I do!

Sea ice and beer

A while ago I mentioned that I would write something about sea ice. The context was a talk I gave at the 2017 Pint of Science festival in Canberra. It was really quite fun, despite being totally terrified and full of the ‘what ifs’… ten minutes before stepping in. Thanks to some great tips from a Canberra Innovation Network science communication workshop I managed to get it done.

Anyway, the aim of that talk was to give a brief sea ice overview and then show what it’s like to be a scientist working in the field – how we think, how we try to solve problems, and the potential new techniques we see which can help.

The audience was quite impressive – they grasped the material with both hands and had insightful, interesting questions. It was heartwarming.

…but none of that is about sea ice at all. That was all about a talk! So go and read it yourself here and learn more: https://adamsteer.github.io/talks/pintofscience2017/#/ – it uses Reveal.js, so you can use your arrow keys to move around it either linearly (down, then right, then down) or roam around how you please. Press ‘s’ to see speaker notes; I’ll finish those in the next week while I’m travelling to FOSS4G.

Most importantly, it explains the sea ice/beer cycle. This is really an under-studied earth system component and needs massive grant funding and more fieldwork. I mean it! Seriously! Without detailed knowledge of sea ice thickness gained by combining many instruments and rigorous field surveys, we will always be nervous that our beer is at risk.

You can also see a deeper dive ( a PhD thesis in 20 or so slides) here – also in Reveal.js: adamsteer.github.io/talks/phd.wrapup .

I hope those slide decks are digestible and leave you with something you didn’t know before – the sea ice/beer cycle for one; but also how sea ice gets measured and just a few of the issues surrounding how to make a realistic assessment about how much sea ice exists on the southern ocean. It’s super-complex! It’s also about the most mind blowing place I’ve ever been – we don’t need to go to Mars.. the pack ice zone is much closer and still a totally foreign world.

What a planet we live on…

Browsing MH370

Last week Geoscience Australia released a vast bathymetric survey dataset from phase one of an intensive search for the missing aircraft, flight MH370. Read the full story here.

I’ve been ruminating on the idea of treating bathymetric datasets in the same way I handle LiDAR surveys – as massive point clouds. So this dataset presented an opportunity to try some things out.

I used the Python library Siphon to extract data from NCI’s THREDDS catalogue – ending up with roughly 100 GB of ASCII files on my working machinery. It was easy to see what these files contained – but as lines of text they’re no good for my use case. I had in mind dumping them all into a postgres-pointcloud database – but then I got excited by the idea of visualising it all.

So I did.

The first step was to clean up the data. I needed to convert very nice descriptive headers into something that described the data in a less verbose way.

sed -i took care of that task. It also handled removing leading zeros from two-digit longitudes. I still had ASCII data, but now I could do something with it!
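The same cleanup could be sketched in Python – note that the header strings and the zero-padded longitude format below are invented stand-ins for illustration, not the actual survey file layout:

```python
import re

# sketch of the header and longitude cleanup; 'Longitude (degrees East)'
# and friends are hypothetical header names, not the real survey headers
def clean_line(line):
    # shorten verbose headers to bare, PDAL-friendly dimension names
    line = line.replace('Longitude (degrees East)', 'X')
    line = line.replace('Latitude (degrees North)', 'Y')
    # strip a leading zero from zero-padded two-digit longitudes,
    # e.g. 092.5 -> 92.5
    line = re.sub(r'\b0(\d\d\.\d+)', r'\1', line)
    return line

cleaned = clean_line('092.5, -38.2, -4431.0')  # '92.5, -38.2, -4431.0'
```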

Enter the Point Data Abstraction Library (PDAL). My new ASCII headers describe PDAL dimensions, and my cleaned-up numbers left no doubt about what an extra 0 means. With a quick pipeline I turned all my data into LAS files, reprojected from lat/long to metres in EPSG:3577 – GDA94 / Australian Albers. I used this projection because it was the only Cartesian projection which could feasibly swallow the whole region without any weirdness (for example, writing out things that are in UTM zone 44 as if they were in UTM zone 48).
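That conversion pipeline can be sketched as plain JSON built from Python – the filenames here are placeholders, and the EPSG:4326 input SRS is my assumption about the source lat/long coordinates:

```python
import json

# sketch of an ASCII -> LAS conversion pipeline; filenames are
# placeholders, and EPSG:4326 as the input SRS is an assumption.
# readers.text picks up dimension names from the cleaned header line;
# filters.reprojection moves points into GDA94 / Australian Albers.
pipeline = {
    "pipeline": [
        {"type": "readers.text", "filename": "cleaned_survey.txt"},
        {
            "type": "filters.reprojection",
            "in_srs": "EPSG:4326",
            "out_srs": "EPSG:3577"
        },
        {"type": "writers.las", "filename": "survey.las"}
    ]
}

with open("ascii_to_las.json", "w") as f:
    json.dump(pipeline, f, indent=4)
```

The resulting file can then be run with `pdal pipeline ascii_to_las.json`.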

But wait – what? Why LAS?

…because it can be read by PotreeConverter. You can likely see where I’m headed now. After some wrangling to get PotreeConverter working on CentOS, I burned a bunch of RAM and CPU time to generate octree indexes of all 4-ish billion bathymetry points. With some scaling fun, and experimenting with just how much data I could throw in at once, I eventually rendered out and glued together a visualisation. A screenshot is below. The interactive visualisation is no longer live; I’m working on a new hosting site and a better version – and will update this post when done!

It’s not perfect by any stretch, but you can browse the entire dataset in 3D in your (Chrome or Firefox) web browser. It is a Potree 1.5 viewer, and doesn’t like Safari (yet). Given the number of individual indexes being loaded, it also sometimes overwhelms the capacity of a browser to open connection sockets. Ideally I’d have a nice basemap and some more context in there as well, but as a proof of concept it isn’t bad!

The whole thing is built with a mix of pretty simple freely available tools – from linux built-ins to cutting-edge webGL.

Why did I do all this, and could it be better?

I’m grateful to my employer for the time I get to spend on side projects like this, but there is a real purpose to it. When data get massive, it makes sense to shift big processing off desktops and onto dedicated high performance compute hardware. Point cloud data are on the cusp of being workable this way – data as a service in the raster domain has been around for a while.

Point clouds have some special considerations, the primary one being a lack of data topology. Creating visualisations like this demonstrates one way of organising data, and makes light work of the traditionally difficult task of investigating entire surveys. It also makes us ask hard questions about how to store the data on disk, and how to generate products from the original datasets on demand – without storing derivatives.

For this project, it would have been great to skip the LAS creation part and render straight from an underlying binary data source to the octree used for visualisation; and then run on-demand product delivery (rasterisation/gridding, analytical products) from the same data store. In its current form this is not possible. As-is, the data are designed for users to make their own copies and then do stuff – which is limited by the size of local compute, or the size of your public cloud account budget.

What next?

Prepare for testing with a point cloud data delivery service I’m working on. Tune in to FOSS4G (August, Boston) to hear about that.

In the meantime, play with it yourself! You can obtain the data shown here for free – it is at: http://dapds00.nci.org.au/thredds/catalog/pw31/catalog.html. I used the data in ‘bathymetry processed -> clean point clouds’ (direct link). The data are also held on AWS (see the Geoscience Australia data description) if that’s easier for you. Tinker with it, have look at the viewer, see what you can come up with!

Oh, and let me know if you spot anything unusual. WebGL lets all kinds of things happen in the ocean deeps…

Thanks to Geoscience Australia for releasing the dataset to the public! And thanks to the National Computational Infrastructure (NCI) for my time and the hardware used to develop this technology demonstration.

The MH370 dataset is hosted on NCI hardware. However, I used the same methods for data access as anyone in the general public would (my download stats were a big blip for a while…).

EGU 2017

I’m going to Vienna next week for EGU17 – which is awesome, and I’m extraordinarily excited under my ‘I’m a badass science ninja’ veneer! I’ll be listening a lot, and talking a little bit about work I’m doing at the National Computational Infrastructure on some ideas for data services around massive, dense point data – standing in front of the poster pictured above.

Hope to see you there!

ps – I promise, I’ll write about sea ice real soon. I need to – for Pint of Science (Australia) next month…

Drifting sea ice and 3D photogrammetry

3D photogrammetry has been a hobby horse of mine for ages, and I’ve been really excited to watch it grow from an experimental idea [1] into a full-blown industrial tool. The trip from research to production was remarkably short. Agisoft Photoscan turned up in 2009 or 2010, and we all went nuts! It is cheap, super effective, and cross-platform. And then along came a bunch of others.

Back to the topic – for my PhD research I was tinkering with the method for a long time, since I had a lot of airborne imagery to play with. I started by handrolling Bundler + PMVS, and then my University acquired a Photoscan Pro license – which made my life a lot simpler!

My question at the time was: how can we apply this to sea ice? or can we at all?

The answer is yes! Some early thoughts and experiments are outlined here, and below are some results from my doctoral thesis, using imagery captured on a 2012 research voyage (SIPEX II). Firstly, a scene overview, because it looks great:

Next, stacking photogrammetric elevations against in situ measurements from a 100 m drill-hole line on the ice. The constant offset is a result of less-than-great heighting in the local survey – I focussed heavily on getting the horizontal measurements right, at the expense of height. Lesson learned for next time!
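
The comparison itself is simple: estimate the constant offset as the mean difference between the two height sets, then look at what remains. A sketch with made-up numbers (the real drill-hole values live in the thesis, not here):

```python
import numpy as np

# hypothetical paired heights (metres): drill-hole truth vs photogrammetry
drill = np.array([0.32, 0.45, 0.51, 0.40, 0.38])
photo = np.array([0.47, 0.61, 0.66, 0.55, 0.52])

offset = np.mean(photo - drill)           # the constant vertical bias
residuals = photo - drill - offset        # agreement after removing it
rmse = np.sqrt(np.mean(residuals ** 2))   # spread about the bias
```

If the residual RMSE is small after removing the bias, the surface shape is being recovered well even though the datum is off – which is exactly the pattern in the figure.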

And finally, checking that we’re looking good in 3D, using a distributed set of drill holes to validate the heights we get from photogrammetric modelling. All looks good except site 7 – which is likely a transcription error.

How did we manage all this? In 2012 I deployed a robotic total station and a farm of GPS receivers on drifting ice, and used them to establish a Lagrangian reference frame (a fancy term for ‘a reference frame which moves with the ice’) – so we could measure everything in Cartesian (XYZ) coordinates relative to the ice floe, and use displacement and rotation observations to translate world coordinates into the local frame and vice versa. Here’s a snapshot:
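
The frame translation itself is a rigid transform. A hedged sketch, assuming rotation about the vertical axis only (heading), which dominates for a drifting floe – the function names and conventions here are illustrative:

```python
import numpy as np

def world_to_floe(points_w, origin_w, heading):
    """World -> floe coordinates: remove the floe origin's translation,
    then rotate by minus its heading (rotation about the vertical axis)."""
    c, s = np.cos(heading), np.sin(heading)
    R = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])
    return (points_w - origin_w) @ R.T

def floe_to_world(points_f, origin_w, heading):
    """Inverse transform: rotate by the heading, then translate back."""
    c, s = np.cos(heading), np.sin(heading)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return points_f @ R.T + origin_w
```

In practice the origin and heading are time series estimated from the total station and GPS network, so every observation gets transformed with the floe state at its own timestamp.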

I don’t know if this will ever make it to publication outside my thesis – I think the method should be applied to bigger science questions rather than just saying ‘the method works and we can publish because nobody put Antarctica in the title yet’ – because we know that from other works already (see [2] for just one example).

So what science questions would I ask? Here’s a shortlist:

  • can we use this method to extract ridge shapes and orientations in detail?
  • can we differentiate between a snow dune and a ridge using image + topographic characteristics?

These are hard to answer with lower-density LiDAR – and are really important for improving models of snow depth on sea ice (eg [3]).

For most effective deployment, this work really needs to be done alongside a raft of in situ observations – previous experience with big aircraft shows that it is really hard to accurately reference moving things from a ship. That’s a story for beers 🙂

References

[1] Snavely, N., Seitz, S. M., and Szeliski, R.: Photo tourism: exploring photo collections in 3D, ACM Transactions on Graphics, 25(3), 835–846, 2006, http://phototour.cs.washington.edu/Photo_Tourism.pdf

[2] Nolan, M., Larsen, C., and Sturm, M.: Mapping snow depth from manned aircraft on landscape scales at centimeter resolution using structure-from-motion photogrammetry, The Cryosphere, 9, 1445-1463, doi:10.5194/tc-9-1445-2015, 2015

[3] Steer, A., et al., Estimating small-scale snow depth and ice thickness from total freeboard for East Antarctic sea ice. Deep-Sea Res. II (2016), http://dx.doi.org/10.1016/j.dsr2.2016.04.025

Data sources

https://data.aad.gov.au/metadata/records/SIPEX_II_RAPPLS

Acknowledgements

Dr Jan Lieser (University of Tasmania) instigated the project which collected the imagery used here, let me propose all kinds of wild ideas for it, and was instrumental in getting my PhD done. Dr Christopher Watson (University of Tasmania) provided invaluable advice on surveying data collection, played a massive part in my education on geodesy and surveying, and also was instrumental in getting my PhD done. Dr Petra Heil and Dr Robert Massom (Australian Antarctic Division) trusted me to run logistics, operate a brand new (never done before in the AAD program) surveying operation and collect the right data on a multi-million dollar investment.  The AAD engineering team got all the instruments talking to each other and battled aircraft certification engineers to get it all in the air. Helicopter Resources provided safe and reliable air transport for instruments and operators; the management and ship’s crew aboard RSV Aurora Australis kept everyone safe, relatively happy, and didn’t get too grumpy when I pushed the operational boundaries too far on the ice; and Walch Optics (Hobart) worked hard to make sure the total station exercise went smoothly.


The LiDAR uncertainty budget part III: data requirements

Part I of this series described the LiDAR georeferencing process (for one, 2D, single return scanner). Part II showed how point uncertainties are computed and why they are useful.

What hasn’t been covered so far is what data you need to do this stuff! Here is your shopping list:

Scanner data

  • Raw LiDAR ranges
  • Scanner mirror angles
  • Waveforms with times, if you are going to determine range-to-return for yourself
  • Intensity
  • anything else (RGB, extra bands, etc)

Essentially you need the full data package from the LiDAR, including any corrections for mirror wobble or other sensor oddness – and a way to decode it. You also need engineering drawings which show the LiDAR instrument’s reference point.

Aircraft trajectory data

  • GPS/GNSS based aircraft trajectory (positions in space)
  • Attitude data (the relationship between the aircraft’s orientation and some reference frame), normally collected by a ‘strapdown navigator’ – an inertial motion sensor (often called an IMU), often integrated with GNSS.

Hopefully you don’t have to do the work of combining these yourself, but you need a high-temporal-resolution data stream of position (XYZ) and attitude (heading/pitch/roll, or omega/phi/kappa, or a quaternion describing the rotation of the airframe relative to the earth). This needs to be as accurate as possible (post-processed dual-frequency GNSS, preferably with tightly-coupled processing of the IMU and GNSS observations).
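
At minimum, you need to sample that stream at each laser timestamp. A toy sketch with invented epochs, using plain linear interpolation for position (attitude deserves better treatment, e.g. quaternion slerp, especially near angle wrap-around):

```python
import numpy as np

# invented 200 Hz navigation epochs: time (s) and position (m)
traj_t = np.array([0.000, 0.005, 0.010, 0.015])
traj_xyz = np.array([[0.0, 0.0, 500.0],
                     [0.5, 0.1, 500.1],
                     [1.0, 0.2, 500.2],
                     [1.5, 0.3, 500.3]])

def position_at(t):
    """Linearly interpolate each position component at a laser timestamp."""
    return np.array([np.interp(t, traj_t, traj_xyz[:, i]) for i in range(3)])

p = position_at(0.0075)   # a return timed midway between two epochs
```

LiDAR returns arrive much faster than navigation epochs, so every single point georeferenced relies on an interpolation like this – one reason trajectory quality dominates the uncertainty budget.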

Navigation instrument and aircraft data

  • Engineering drawings which show the navigation instrument’s reference point
  • The lever arm between the navigation instrument reference point and the LiDAR reference point, in IMU-frame coordinates
  • The rotation matrix between the IMU and LiDAR coordinate systems
  • If necessary, the rotation matrix describing the difference between the aircraft’s body frame and the IMU’s internal coordinate system (quite often, this gets simplified by assuming that the IMU frame and the aircraft frame are equivalent – but many operators still account for this – which is awesome!)
  • Any boresight misalignment information you can get (tiny angles representing the difference between how we think instruments were mounted and how they actually were mounted)
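
Put together, the shopping list above feeds one georeferencing chain per return. A hedged sketch for a 2D single-return scanner – variable names and sign conventions here are illustrative, and real systems differ per vendor:

```python
import numpy as np

def georeference(rng_m, mirror_angle, p_nav_w, R_nav_w,
                 lever_arm_imu, R_lidar_imu):
    """Range + mirror angle -> scanner frame -> IMU frame (mounting
    rotation + lever arm) -> world frame via the navigation solution."""
    # laser vector in the scanner frame, scanning in the y-z plane
    r_scan = np.array([0.0,
                       rng_m * np.sin(mirror_angle),
                       -rng_m * np.cos(mirror_angle)])
    r_imu = R_lidar_imu @ r_scan + lever_arm_imu   # boresight + lever arm
    return R_nav_w @ r_imu + p_nav_w               # attitude + GNSS position

# a nadir shot from 500 m altitude with ideal mounting and level flight
p = georeference(100.0, 0.0, np.array([0.0, 0.0, 500.0]),
                 np.eye(3), np.zeros(3), np.eye(3))
```

Each input in that call is exactly one of the deliverables listed above – which is why a missing lever arm or boresight file makes the whole exercise impossible.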

Of course, you could also just push your contractors to deliver point-by-point uncertainty or some per-point quality factor as part of the end product.

This series might have one more post, on how to deal with these data – or, more accurately, a tutorial on how not to. And then I run out of expertise – brain = dumped, and over to people with newer, more up-to-date experience.