Measuring what’s on the ground

If your organisation needs to comply with CSRD, you may need to report on Land-Use Change under ESRS E4 – Biodiversity and Ecosystems.

If your business has multiple sites across different regions, gathering this information can be incredibly difficult and time-consuming. Even if you use publicly available satellite imagery, reliable assessment of the images requires a trained eye and in-depth knowledge of how ecosystems change on the ground.

And getting it wrong by over- or under-reporting changes in land cover could come back to haunt you as better data becomes available.

What is land-use change?

Land-use change is one of the five direct drivers of biodiversity and ecosystem loss globally. The loss and degradation of natural habitats means less space and fewer resources for wildlife, which leads to a reduction in species populations and diversity.

Land cover describes what's physically on the ground: trees, grass, crops, buildings. Land use tells us what that land is used for: natural land, agriculture, industrial or residential.

Changing land use from natural to non-natural is what drives ecosystem impacts.

Deforestation is a great example of this: changing from trees to crops or built-up land.

Why is measuring it so hard?

Measuring and monitoring change in land use and cover is fundamental to understanding and improving our relationship with nature, but it is surprisingly hard to do. Even more so if you are an international business with multiple sites of operation.

In an ideal world we would have ground-truth observations of the type of land cover on a site and what it's used for. But ground surveys are resource-heavy, and we often don't have the means to do them. So we turn to the more convenient alternative: remote sensing.

Remote sensing from satellites has some key advantages. Satellites observe every point on the Earth's surface several times a month, so you have up-to-date information on what's on the ground. This also lets you monitor multiple sites within the same short timeframe.

However, measuring fine details from space is very hard, for two main reasons.

Firstly, the resolution of satellite imagery varies, and it's not as detailed as we would like. For the Sentinel-2 satellites – which image every point on the ground roughly weekly – a single pixel covers 10 metres on the ground. Covering the whole world in a 10 × 10 metre grid gives us a lot of data, but a lot can change on the ground within 10 metres!
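To make that concrete, here's a quick back-of-the-envelope calculation (a sketch for intuition, not natcap's methodology): one 10 m pixel covers 100 m², so on a one-hectare site a single misclassified pixel shifts the result by a full percentage point.

```python
# Back-of-the-envelope: what does one 10 m pixel mean for a site?
PIXEL_SIZE_M = 10                       # Sentinel-2 ground sampling distance
pixel_area_m2 = PIXEL_SIZE_M ** 2       # 100 m² per pixel

site_area_ha = 1.0                      # a hypothetical one-hectare site
site_area_m2 = site_area_ha * 10_000    # 1 ha = 10,000 m²

pixels_on_site = site_area_m2 / pixel_area_m2
fraction_per_pixel = pixel_area_m2 / site_area_m2

print(f"{pixels_on_site:.0f} pixels cover the site")          # 100 pixels
print(f"each pixel is {fraction_per_pixel:.0%} of the site")  # 1%
```

For larger sites the per-pixel fraction shrinks, but so does the chance that every boundary pixel falls cleanly into one class.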

Secondly, when looking at satellite images it can be hard to define what you’re looking at. Crops can look like grassland, and trees can look like wetlands.

This is a satellite image of cropland in Spain. Although you can make out the edges of different types of land from the colour variations – blue, green, pale green, dark green – it's very hard to tell whether the green areas are grass, trees, wetlands or something else! And where exactly are the edges between forest and grassland, or grassland and wetland?

It's also possible to have an 8-metre strip of deforestation at an edge near built-up areas. This may or may not be visible in the satellite image, but it will have huge impacts on the wildlife living there.

As you can see, within a 10m pixel, you can have multiple types of land cover. This makes it very labour intensive when you have a lot of land to assess.

Now imagine trying to analyse this for every site in a portfolio, such as for every farm that supplies a major supermarket chain.

AI can help… to an extent

To save us time eyeballing every single satellite image, we can use machine learning (aka AI) to help us.

A machine learning classification algorithm can be trained to recognise different types of land cover and classify them automatically. These algorithms can do a great job at picking out distinct and different land covers, but like humans, they also get confused between land covers which look similar.
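As an illustration of the idea only – not the actual algorithms in production – here is a toy nearest-centroid classifier: each class gets a typical spectral signature, and a pixel is assigned to whichever is closest. All band values here are invented, and real classifiers use far richer features and models.

```python
# Toy nearest-centroid "land cover classifier" (illustrative only).
# Each class has a made-up (red, green, near-infrared) reflectance signature.
CENTROIDS = {
    "trees":    (0.05, 0.08, 0.45),   # low red/green, high near-infrared
    "grass":    (0.08, 0.14, 0.40),   # similar to crops -> easy to confuse
    "crops":    (0.09, 0.15, 0.38),
    "built-up": (0.25, 0.24, 0.22),   # bright, flat spectrum
}

def classify(pixel):
    """Assign a (red, green, nir) pixel to the nearest class centroid."""
    def dist(centroid):
        return sum((p - c) ** 2 for p, c in zip(pixel, centroid))
    return min(CENTROIDS, key=lambda name: dist(CENTROIDS[name]))

print(classify((0.24, 0.23, 0.21)))    # clearly built-up
print(classify((0.084, 0.144, 0.395))) # grass... but crops is a close second
```

Notice how the grass and crops signatures sit almost on top of each other: a pixel between them is classified with very little margin, which is exactly where these algorithms get confused.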

This leaves us with the same challenges: the edges between different land cover types, and land covers that look similar, like crops and grass.

The accuracy of these algorithms varies across the globe – they generally perform well in temperate regions such as Europe or the USA, but are much worse at distinguishing natural land from crops in tropical locations such as Brazil. This is partly due to the types of crops grown there – think of comparing palm trees to a rainforest.

How we’ve made AI more reliable

The best way to verify satellite imagery is by comparing it to ground truth data. But as we’ve already discussed, ground truth data is expensive to gather. So what should you do?

We've spent hours comparing available ground-truth data to satellite imagery to assess how good these algorithms are at classifying land cover. This gives us a scientifically robust system for helping organisations like yours gather and analyse data.

On average, we’ve found that Google’s Dynamic World and ESRI’s Land Cover classifiers correctly classify natural land (trees, grassland, wetlands, shrubland) about 86% of the time, and non-natural land (built-up areas, crops, bare ground) about 70% of the time.

The accuracy of the algorithm and the size of the satellite image pixels together determine the smallest changes in land cover we can meaningfully detect. Using publicly available satellite data, we can't detect changes on scales smaller than 10 metres (some private satellite companies can do better), and using machine learning algorithms, we expect a certain percentage of pixels to be misclassified.

To mitigate this, at natcap we calculate and disclose the expected accuracy of all land-cover-related metrics. For example, we might say that a site has 10% natural land with an expected error rate of 1% (±1%). Similarly, we only report a change when it is larger than the expected error rate and the resolution: if the error rate is 1%, a change must exceed 1% before we can say we have meaningfully detected it.
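That reporting rule is simple to state in code. A minimal sketch (hypothetical function name, percentages as plain numbers):

```python
# Sketch of the reporting rule above: only call a change "detected" when it
# exceeds the expected error rate of the classification.
def meaningful_change(before_pct, after_pct, error_rate_pct):
    """Return the detected change, or None if it is within the error bars."""
    change = after_pct - before_pct
    if abs(change) <= error_rate_pct:
        return None  # indistinguishable from classification noise
    return change

print(meaningful_change(10.0, 10.5, 1.0))  # None: 0.5% is within ±1%
print(meaningful_change(10.0, 13.0, 1.0))  # 3.0: a real, reportable change
```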

At present, this is the best level of detail we, or anybody, can meaningfully say about land cover and land use change.

Understanding the strengths and limitations of the data and science underlying metrics is vital for making informed decisions about action, and for reporting that is meaningful. Over- or under-reporting changes in land cover because you don't know the data's limitations could come back to haunt a business when better data becomes available.

If your organisation needs to monitor and report on land-use for CSRD, we could help save you time and money in the short-term, and highlight business risks and opportunities that you can act on for long-term resilience.

To find out more, get in touch.

natcap is the nature intelligence platform for your nature-positive reporting & action.

Identify where you should prioritise your efforts, understand your nature impacts and dependencies, disclose to stakeholders and take action.

See how it works...