Computer Vision Case Study

Artisanal and small mine detection


Starting Point

Artisanal and small-scale mining (ASM) is responsible for 10% of global gold production, with an estimated market value of US $14bn, and affects the lives of 10 to 20 million gold miners (plus an additional 80 million miners of other raw materials) and their families. ASM sites are located in more than 80 developing countries and are associated with lower environmental and work safety standards. In order to enforce concession rights and labour legislation, or to research migration patterns, governmental agencies, NGOs and researchers need timely data on the location and extent of ASM in their region of interest. As this requires vast areas to be monitored, satellite imagery is the best technology to map ASM. dida, together with MRE at RWTH Aachen, is developing ASMSpotter, a computer vision tool that automatically segments ASM sites in satellite images.

To gauge the extent of ASM activities in a given region, mining experts currently rely on manual examination of the area via Bing Maps and similar services. This approach has several issues: services like Bing Maps are not necessarily up to date, especially in the remote regions where ASM is conducted, and since the analysis is done manually by an expert, covering large areas is cost-intensive and inefficient.

Challenges

The main challenge of the development was to gather a sufficient set of training data. To this end we had access to Planet Scope, a high-resolution satellite system operated by Planet Labs. The training data consists of mostly cloud-free images of the area of interest (AOI). Depending on the AOI, cloud-free images can be scarce, but several images taken over time can be combined into a cloud-free composite.

Our partner at MRE labelled approximately 100 Planet Scope images; for this project, a single mining expert was in charge of the labelling. For the prototype we limited the AOI to the north-east of Suriname. This area was selected because it is representative of many regions where ASM sites are found: it is located in the Amazon rainforest, and hydraulic mining is the dominant method for gold mining. Planet Scope was the primary satellite constellation, as it provides one of the highest resolutions available (approx. 5 m/px). Sentinel-2 was also tested successfully, but was not the primary focus of this study.

Solution

We train a Deep Neural Network with a U-Net architecture. The model is available as a Python library. ASMSpotter can be run locally to analyze Planet Scope images provided by the user. In a future stage of the project, ASMSpotter can be developed into a cloud service, which analyzes satellite images upon request or can be configured to continuously monitor a given AOI.
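The actual ASMSpotter network is not reproduced in this case study; as an illustration only, a minimal two-level U-Net for six input channels (the four Planet Scope bands plus the two derived indices described below) could be sketched as follows. The class name, channel widths, and depth are assumptions for the sketch; the production model is a deeper variant of the same encoder-decoder idea with skip connections.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with ReLU: the basic U-Net building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    """A two-level U-Net: encoder, bottleneck, decoder with skip connections."""
    def __init__(self, in_channels=6, out_channels=1):
        super().__init__()
        self.enc1 = conv_block(in_channels, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)   # 128 = 64 upsampled + 64 skip
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)    # 64 = 32 upsampled + 32 skip
        self.head = nn.Conv2d(32, out_channels, 1)  # pixel-wise logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)
```

The skip connections concatenate high-resolution encoder features into the decoder, which is what lets a U-Net produce segmentation masks at the full input resolution.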

Technologies used: Python, PyTorch, Numpy, Matplotlib, Requests

The training of the Deep Neural Network was done on a Google Cloud Compute instance with an Nvidia Tesla P100 GPU. The labelling was conducted using LabelBox. We did not label on the full image, but rather on a gray-scale depiction of the normalized difference water index (NDWI). Planet Scope images have four channels (RGB + NIR). The NDWI is computed as

$$ NDWI = \frac{G - NIR}{G + NIR} $$

and it highlights areas of large water content. The NDWI is appended to the Planet Scope images as a fifth channel. The normalized difference vegetation index (NDVI) is computed as

$$ NDVI = \frac{NIR - R}{NIR + R} $$

and highlights vegetation in the images. It is appended as the sixth channel to the input data.
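The two formulas above translate directly into a few lines of NumPy. The helper below is a sketch, not the actual Preprocessor from the package; it assumes the channel order R, G, B, NIR and adds a small epsilon so fully dark pixels do not divide by zero.

```python
import numpy as np

def append_indices(image):
    """Append NDWI and NDVI as channels 5 and 6 to an (H, W, 4) image
    with assumed channel order R, G, B, NIR."""
    image = image.astype(np.float64)
    r, g, nir = image[..., 0], image[..., 1], image[..., 3]
    eps = 1e-12  # guard against division by zero
    ndwi = (g - nir) / (g + nir + eps)   # high values -> high water content
    ndvi = (nir - r) / (nir + r + eps)   # high values -> dense vegetation
    return np.dstack([image, ndwi, ndvi])
```

Both indices are bounded to [-1, 1] by construction, which also makes them convenient, roughly pre-normalized input features for the network.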

Product

First, we need to import some standard libraries to handle and display the data.

import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from PIL import Image

From the ASMSpotter package, we need to import the relevant functions. load_model, as the name suggests, loads the PyTorch model, while Preprocessor is a class that prepares the raw Planet Scope images for the analysis. In our case, we will run ASMSpotter on the GPU, which has only limited memory. Therefore, the image needs to be split into chunks and the predictions need to be stitched together. This is done by predict_full_image.

from ASMSpotter import load_model, predict_full_image
from ASMSpotter.preprocessor import Preprocessor

model = load_model()
preproc = Preprocessor()

Next, we will load the Planet Scope image. Planet Scope images can be obtained in various ways, e.g. via the browser GUI provided by Planet Labs or by interacting with the API. For now we will assume that the image has already been downloaded. Planet Scope images come in various forms: the analytic product is essentially the raw data and will be used for the analysis, while the visual product contains the same imagery prepared for visual representation. We have downloaded both so that we can present ASMSpotter's predictions nicely. The segmentation masks could also be overlaid on other backgrounds, e.g. maps.

analytic_product_fname = "/home/dida/20181016_133712_1038_3B_AnalyticMS_SR.tif"
visual_product_fname = "/home/dida/20181016_133712_1038_3B_Visual.tif"

analytic_image = np.asarray(Image.open(analytic_product_fname))
visual_image = np.asarray(Image.open(visual_product_fname))

# Normalize visual representation for display
visual_image = visual_image - visual_image.min()
visual_image = visual_image / visual_image.max()

Next, the analytic product needs to be preprocessed. This is done by calling the Preprocessor object on the analytic image. The preprocessor appends the NDWI and NDVI as additional channels to the image and normalizes channel-wise. The preprocessed image, in combination with the model, can be passed to predict_full_image, which handles the prediction in the case of limited GPU memory.

%%time
preprocessed_image = preproc(analytic_image)
segmentation_mask = predict_full_image(model, preprocessed_image)
CPU times: user 11.2 s, sys: 2.45 s, total: 13.7 s
Wall time: 12.1 s

And that is already it! segmentation_mask is an ndarray with the pixel-wise logits of the prediction. It can be converted to a binary segmentation mask by filtering for values > 0. The image covers roughly $360 \text{ km}^2$ and took $12 \text{ s}$ to analyze.
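The internals of predict_full_image are not shown in this walkthrough, but the tile-and-stitch idea it relies on can be sketched in a few lines. The function and argument names below are hypothetical; a non-overlapping tiling like this can produce seams at tile borders, which an overlap-and-blend scheme would avoid.

```python
import numpy as np

def predict_in_chunks(predict_fn, image, tile=512):
    """Tile a (C, H, W) image, run predict_fn on each tile, and stitch
    the per-tile logit maps into a full-size (H, W) array.

    predict_fn is assumed to map a (C, h, w) array to an (h, w) array
    and to accept the smaller tiles that occur at the image borders.
    """
    _, H, W = image.shape
    out = np.zeros((H, W), dtype=np.float32)
    for y in range(0, H, tile):
        for x in range(0, W, tile):
            patch = image[:, y:y + tile, x:x + tile]
            out[y:y + tile, x:x + tile] = predict_fn(patch)
    return out
```

Keeping each tile small enough to fit into GPU memory is what makes it possible to process a full Planet Scope scene on a single card.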

With a few lines of code using matplotlib we can represent the segmentation mask in combination with the visual product.

# Copy the colormap: modifying registered colormaps in place is deprecated
current_cmap = matplotlib.cm.get_cmap("autumn").copy()
current_cmap.set_bad(alpha=0.)

segmentation_mask[segmentation_mask > 0] = 255.
segmentation_mask[segmentation_mask <= 0] = np.nan

plt.figure(figsize=(20, 20))
plt.title("Detected ASM sites - highlighted in red", fontsize=20)
plt.imshow(visual_image)
plt.imshow(segmentation_mask, alpha=0.5, cmap=current_cmap)
plt.axis("off")

