Mapping a small farm part 1: basemap data collection

Over the summer of 2019/2020 I was operating out of a small property tucked deep in the mountains of Victoria. Of course, I decided to make a map of it! I was inspired by Tim Sutton’s talk at FOSS4G 2019 about using QGIS to map small holdings, and by a need to understand how the water system worked.

The first part of the exercise was to create a basemap – using a small RPA to generate a high resolution orthophoto, along with an elevation map. It took a few iterations, driven by hardware and software changes plus a few new ideas. Because it wasn’t a client project, it was perfect for spending time collecting the same data in different ways…

Gathering base data – first attempts

In July 2019 I’d acquired a new ANAFI Thermal – and rapidly discovered that the usual flight planning suspect (Pix4Dcapture) didn’t talk to the ANAFI Thermal yet, and my regular ANAFI still had a GPS module in need of post-crash repair.

So, plan B became ‘why don’t we just fly around in GPS lapse mode and see what happens’. This has some great advantages: a pilot can follow terrain, and doesn’t get a sore trigger finger while acquiring data. That actually worked OK, and I ended up covering roughly 3 hectares in a couple of batteries.

This first image shows a contour map made from a DSM built in OpenDroneMap, and an orthophoto from the same process, overlaid on OpenStreetMap tiles, rendered in QGIS.
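
As an aside, contours like these are straightforward to generate from a DSM with GDAL. A minimal sketch, assuming the DSM landed in dsm.tif – the filenames and the 1 m interval here are illustrative:

    # write 1 m contours, with elevations in an 'elev' attribute,
    # to a GeoPackage that QGIS can style and label directly
    gdal_contour -f GPKG -a elev -i 1.0 dsm.tif contours.gpkg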

Here is the flight pattern which acquired the data – it’s pretty messy! Blue ellipses are camera centres. Ellipse length is given by GPS Z accuracy, width by GPS XY accuracy, and orientation by camera yaw angle (heading). Colour also comes from GPS Z accuracy in this map (smaller, whiter dots are more accurate), but could encode height or some other parameter of interest. The final image just shows the coverage area. I tried to manually stick to ‘good flight path’ practices, focussing especially on lots of overlap and a range of flying heights.

You can see, however, that I wasn’t one hundred percent consistent on flight line offset angles, and it was a pretty inefficient exercise from a battery usage perspective.

For quick and dirty data collection this works quite well! Especially if you want to mix up ‘pseudo-grid flights’ with ‘detail inspection’ – for example, a bunch of high altitude collection and then a low circle around specific objects of interest. Or if you’re in a hurry and just want to snap off a rough map of a small area.

However, this is hard graft for flying more than a few hectares. Sometime in November 2019 Pix4Dcapture became able to talk to the ANAFI Thermal, so I planned a bunch of doubled-grid flights, knowing that straight-up lawnmowing patterns were not going to give me the result I wanted. It looked like taking a dozen or more flights to cover the entire 8-ish hectares – so off I went. And never finished! It really is hard to find time to fly that many flights at the same time of day when they’re competing with paid work and other life stuff. A couple of intermediate results looked great, though. This one shows a broad scale flight at 80 m above the pilot, with an underlying 40 m-above-pilot collection for a subset of the area:

…and once more showing area covered:

The resulting dense (MVE) point cloud can be explored in Potree by clicking the image below:

Better flight planning…

Around January 2020, Pix4D split the RGB and thermal cameras into separate options for flight planning purposes. It looks a lot like the thermal camera’s narrower field of view had been used for all planning before this – because in RGB mode I could now cover a lot more space in far fewer flight lines!

So I dropped the ‘many double gridded flights’ plan and started over – this time testing some ‘planning for self-calibration’ concepts identified in the OpenDroneMap community.

The last attempt changed things a little again, using Pix4Dcapture’s polygon mode to plan all the flights. While Pix4Dcapture always complains about polygon flights over large areas, the flights went to plan, as shown below. Here, I planned and flew the entire farm in one morning – since I suddenly had a lot fewer commitments to other work.

I really liked using this mode: it let me cover exactly the area of the map, without expending battery and image processing time on stuff I would throw away in the end. I filled in a few details with manual (GPS lapse mode) flights at lower oblique angles, and ended up with 1297 images collected in just over 90 minutes. Each planned flight here consumed most of a single ANAFI battery, and I had a battery brick on hand to refill the first battery enough to take the manual shots.

These flights pushed the limits of ‘in line of sight’ flying for the ANAFI. Small aircraft are awesomely capable; however, most legal environments require a pilot to maintain unaided visual contact with the aircraft unless specific exceptions apply. In a commercial setting, or an urban environment with radio interference, I would make three or even four flight blocks for a site like this, and invest in extra batteries.

Processing

Pushing this image set through OpenDroneMap was quite the task – I wanted the output to be as detailed as possible so I set the following options:

--camera-lens brown
--min-num-features 30000
--force-gps
--dem-resolution 5
--dsm
--dtm
--pc-las
--orthophoto-resolution 2.5
--mesh-octree-depth 12
--depthmap-resolution 1024
--opensfm-depthmap-method PATCH_MATCH
--texturing-nadir-weight 24
--ignore-gsd
--build-overviews
--orthophoto-cutline
--use-3dmesh
--max-concurrency 12
--mve-confidence 0.7
--time
-v
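
For context, here’s what that looks like as a complete command. This is a sketch only – it assumes an ODM Docker build from around the time of writing (several of these options have changed across ODM releases), with images sitting in /path/to/datasets/farm/images; the dataset path and project name are placeholders:

    docker run -ti --rm \
        -v /path/to/datasets:/datasets \
        opendronemap/odm \
        --project-path /datasets farm \
        --camera-lens brown \
        --min-num-features 30000 \
        --force-gps \
        --dem-resolution 5 \
        --dsm \
        --dtm \
        --pc-las \
        --orthophoto-resolution 2.5 \
        --mesh-octree-depth 12 \
        --depthmap-resolution 1024 \
        --opensfm-depthmap-method PATCH_MATCH \
        --texturing-nadir-weight 24 \
        --ignore-gsd \
        --build-overviews \
        --orthophoto-cutline \
        --use-3dmesh \
        --max-concurrency 12 \
        --mve-confidence 0.7 \
        --time \
        -v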

…which consumed about 30 GB of RAM and 18 hours on a ten-year-old workstation (an HP Z600 maxed out in both CPU and RAM capacity), reading and writing to spinning disk storage (WD Red series, 7200 rpm). It did not help proceedings that I had an SSD firmware issue right at the time of processing, although that is another story.

The resulting imagery was again processed in OpenDroneMap to generate an orthophoto. However, I didn’t use a terrain model from ODM this time – I wanted to test a relatively new cloth simulation filter in PDAL (filters.csf), which had given me really promising results so far – so I made a DTM with a fairly standard ‘points to DTM’ pipeline:

[
    {
      "type":"readers.las",
      "filename":"input.laz"
    },
    {
      "type":"filters.assign",
      "assignment":"Classification[:]=0"
    },
    {
      "type":"filters.elm"
    },
    {
      "type":"filters.csf"
    },
    {
      "type":"filters.range",
      "limits":"Classification[2:2]"
    },
    {
      "type":"writers.gdal",
      "resolution": "0.05",
      "output_type": "idw",
      "window_size": 10,
      "filename":"dtm.tiff"
    }
]
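
In order, this pipeline reads the compressed point cloud, resets any existing classifications to zero, flags low outliers with the extended local minimum filter, labels ground returns as class 2 using the cloth simulation filter, keeps only those ground points, and interpolates them onto a 5 cm grid using inverse distance weighting. Assuming it’s saved as dtm-pipeline.json (a filename I’ve made up here), it runs with:

    pdal pipeline dtm-pipeline.json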

I also processed a point cloud in Agisoft Metashape. Its magic for aligning depthmaps into clean point clouds is incredible, and tree reconstruction was vastly better. Metashape took two days on the same machine to produce a high quality dense point cloud, and I stopped processing there – pushing on to an orthophoto was going to take 60+ hours, which I may get done at some point.

I’ve used the Metashape point cloud for eye candy purposes. All of the other downstream products (imagery, DTM, DSM) were made with OpenDroneMap.

Results!

Here is the final Metashape product in point cloud form, with ground classified using a cloth simulation filter (click the image to explore the point cloud in 3D):

…and here is an orthophoto made in OpenDroneMap – with 1 m contours and camera centre dots.

For the purpose of this project (playing with collection methods, making a farm map) this is pretty good! A 2–3 day flight-to-data round trip for 8 ha at high resolution is quite respectable – and I’m being slow here. It’s taken so long to write it all up because I’ve walked through many, many processing options instead of just choosing the defaults.

What about ground control?

At the start of summer I set out six ground control points around the property. I had high hopes that my new L1+L5 GNSS phone would achieve sub-metre position accuracy, but it didn’t – the ANAFI camera centres are far more accurate than the phone. With that in mind, I used the camera centres rather than the ground control points to reference the model (and set --force-gps in OpenDroneMap, to force OpenSfM to consider camera centre GPS data in its bundle adjustment). The impact of not using ground control is that my map is not precisely repeatable – even between processing runs I get a metre or so of offset between maps. This offset is also non-deterministic, so I can’t wave it away with a simple XY shift.

Long story short – making one-off descriptive maps, or not concerned about small shifts? Use camera centres. Need to repeat your work to sub-metre accuracy? Use ground control, or an aircraft capable of RTK/dual-frequency GNSS post-processing, or both.

Here are my lightweight, foldable corflute + cloth tape points after a few months out in the fields. They’ve survived sun, rain and cows pretty well (point 3 has a cow hoofprint for authenticity!). The faded drawn-on patterns are for identification – point 3 and point 4 (using a 9-square grid) are seen here. This part didn’t work so well; in future I’ll use tape (maybe a red-green-blue pattern) to mark out an ID grid in one quarter of each point.

I stepped through building a ground control file for OpenDroneMap, and discovered the wonderful geeqie image viewer for Ubuntu (sudo apt install geeqie). It lets you browse imagery fast and pick out pixel locations for GCPs. I found it a lot faster to pick points in geeqie and manually build a GCP file than to use WebODM’s GCP interface – which might only tell us I’m more familiar with geeqie-like applications!
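
For reference, ODM’s GCP file (gcp_list.txt) is plain text: a header line naming the coordinate system, then one line per GCP observation – easting, northing, elevation, pixel x, pixel y, image name, and an optional label. A sketch, with entirely made-up coordinates, pixel positions and image names:

    WGS84 UTM 55S
    412345.1 5812345.2 250.3 1024 768 IMG_0301.JPG gcp3
    412345.1 5812345.2 250.3 2210 1302 IMG_0317.JPG gcp3
    412399.8 5812301.7 248.9 800 1650 IMG_0342.JPG gcp4

Each point should be observed in several images for the adjustment to make good use of it.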

For future Metashape use, transferring Agisoft’s GCP target patterns (used for automatic marker detection) to this GCP design requires only a little creativity and a crafternoon. Make them with tape and leave them out in the field all season if you like!

…and if anyone wants to send a dual frequency GNSS system my way, I’d love to get down to centimetres and tell another story about processing GCP data without RTK, using kinematic PPP methods.

Summary, and next!

We’ve seen here that a small farm can be mapped a bunch of different ways using a tiny drone. The key planning issues here were an interesting block shape and terrain height differences, and both were workable with existing tools and a few different flight methods. The result I liked best in terms of flight efficiency and detail was the grid offset in height and by 20 degrees between flights, with the camera 10 degrees off nadir, filled in with a few detail flights around objects that needed it. My small ‘life hacks’ from this project are:

  • fly the camera off nadir (just to repeat that one more time)
  • in Pix4Dcapture, ignore warnings about polygon flight plans being too large – judge by flight time
  • use detail fill in flights, for example a few photos captured at lower altitudes and camera angles around objects you really want to model well
  • no set of settings is perfect for all your missions – aim to understand what you want and work back from that
  • buy more batteries than you think you need
  • legal requirements for control authority put aircraft around the ANAFI’s size at a practical maximum distance-to-pilot of only a few hundred metres, despite advertised control ranges. I have tracked mine visually to 600 m in very clear, unobstructed conditions, but that required constant visual tracking without interruption.

So now we have some great data – a really detailed model of a small farm. Which is nice – but what can we do with it? The next post deals with post-processing and making the data useful: mapping farms in QGIS…

The sales pitch

Spatialised is a fully independent consulting business. Everything you see here is open for you to use and reuse, without ads. WordPress sets a few cookies for statistics aggregation; we use those to see how many visitors we get and how popular things are.

If you find the content here useful to your business, research, or billion dollar startup idea, you can support the production of ideas and open source geo-recipes via PayPal, hire me to do stuff, or hire me to talk about stuff.