Pix4D Series: Processing UAV Surveys for Agriculture

Step 1: Initial Processing

Pix4DMapper makes it simple to process your UAV imagery into high-quality orthomosaics and vegetative index maps. For the first-time user, however, the learning curve is often steeper than expected, and the variety of processing options can lead to extensive trial and error before the outputs are acceptable. How do you judge a successful processing job? That depends on the goals of each flight, from multispectral mapping to visual colour surveying; however, there are four priorities you should keep in mind:

  1. Completeness: Are there holes or defects in your outputs?
  2. Accuracy: Does your output reflect the true nature of the scene in geometric space? Do the index values fit a normal range?
  3. Speed: Did the time to process allow you to deliver outputs in a timely manner?
  4. Filesize: Are the outputs delivered at a file size acceptable for the end-use?

Producing great maps begins with acquiring great imagery, which Pix4D covers in this support forum post. For agricultural land surveys, overlap should be increased to 85% frontal and 70% side lap to provide a full view of the scene and adequate point-matching during processing. Lighting, wind, and moisture conditions greatly impact image quality; sunny weather with light wind is ideal, though not always possible. Blurry images or swaying plants make it difficult, if not impossible, for the software to match key points between neighbouring images, which means you will need to re-fly the field. If flying a multispectral sensor like the NIR camera or Sequoia, wait until no moisture is present on the surface of the leaves, whether from rain or morning dew. Water alters the reflectance spectra of the crop, which greatly reduces the accuracy of vegetative indices.
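As a back-of-the-envelope check, those overlap percentages translate directly into photo and flight-line spacing once you know the ground footprint of a single image. A minimal sketch of the arithmetic (the sensor resolution below is an illustrative assumption, not a value from this dataset):

```python
# Hypothetical figures for illustration; substitute your own sensor specs.
# Ground footprint of one image = image dimension (px) * GSD (m/px).
def spacing(footprint_m: float, overlap: float) -> float:
    """Distance between exposures (or flight lines) for a given overlap."""
    return footprint_m * (1.0 - overlap)

gsd = 0.023                  # 2.3 cm/pixel ground sampling distance
img_w, img_h = 4000, 3000    # assumed sensor resolution in pixels

along_track = spacing(img_h * gsd, 0.85)   # 85% frontal overlap
across_track = spacing(img_w * gsd, 0.70)  # 70% side overlap
print(along_track, across_track)           # metres between photos / lines
```

With these assumed numbers, exposures land roughly every 10 m along the track and flight lines sit about 28 m apart; tighter overlap always means more images and longer processing, which is the trade-off behind the speed priority above.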

Initial processing options to improve project results

Below is a processing tutorial and comparison of the processing options most relevant to agricultural mapping. The case study is a corn research plot site requiring high resolution (2.3 cm/pixel). The RGB and NIR Canon S110 cameras were flown within an hour of each other, and both image sets were processed in the same Pix4D project. Heavy winds during the flights mean the dataset is less than ideal for processing, but it makes for a great case study of the differences between Pix4D's many processing options. Additionally, use the TIFF format instead of JPEG when processing multispectral images. While geometric project accuracy is identical whether you process TIFF or JPEG images, valuable radiometric data can be lost to the JPEG compression algorithm, and the reflectance map generated in Step 3 processing needs to be as accurate as possible for index calculations. For the best quality reflectance map, use TIFF images despite their much larger file size. If your pictures were taken with the Canon S110, you can reprocess the raw images through eMotion into TIFF format. Both the RGB and NIR image sets detailed here are in TIFF format. Refer to the Pix4D support forum for details when needed.
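The radiometric cost of 8-bit storage is easy to demonstrate. The sketch below computes NDVI from floating-point reflectance values and again after quantizing them to 8 bits, which models only the bit-depth loss of a JPEG pipeline (JPEG's lossy DCT compression discards additional data on top of this). The reflectance values are made up for illustration:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red)

# Illustrative reflectance values (fraction of incident light), not real data.
nir = np.array([0.52, 0.48, 0.61])
red = np.array([0.08, 0.12, 0.05])

full = ndvi(nir, red)

# Simulate rounding each band to 8-bit levels, as an 8-bit format would.
nir_8 = np.round(nir * 255) / 255
red_8 = np.round(red * 255) / 255
quantized = ndvi(nir_8, red_8)

print(np.max(np.abs(full - quantized)))  # NDVI error from quantization alone
```

Even before any compression artifacts, the rounding alone shifts NDVI by a few thousandths of a unit; across a whole reflectance map, that noise is exactly what higher-bit-depth TIFF inputs avoid.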

Hardware configuration

The PC used for this comparison has the following specifications:

Your processing rig must be a powerful machine to deliver results in a timely manner. The CPU is the most important component and will be used fully at every step. The GPU does reduce processing time in Step 1, but high-end cards like the Nvidia GTX Titan are overkill unless you work extensively with 3D models: this PC was tested with both the GTX Titan and the GTX 980, and I found no noticeable improvement in processing time with the more expensive card. The website ca.pcpartpicker.com is an excellent resource for building a custom PC: part compatibility is checked against a database, and prices from the most common online stores are compared so that you can build to a given budget.

Importing your imagery

If you fly multiple variants of the same sensor, such as the three Canon S110s, you can save time and index-calculation headaches by importing all the images into the same project. After you add the first folder of images, simply add the corresponding folders for the other sensors and hit Next. You will be greeted with the following screen:

You will notice the two camera models are identified correctly by Pix4D, and each is given a name in the “Group” column. Image groups define the orthomosaics that will be generated. I recommend changing the group names to something easy to recognize, such as “RGB” and “NIR”, as these group names will be used during index calculations. You can also use group names for other purposes, such as when you fly at two different altitudes in a single project. In that case there is only one camera model, so both flights would fall under the same group; renaming them to “90m” and “120m” forces Pix4D to process them as separate orthomosaics. Alternatively, you can create two separate projects, run Step 1 for each, then create a third project to merge the two together.
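Outside of Pix4D, keeping sensor folders consistently named pays off when you later script anything against the outputs. A small sketch of the idea, where each group name is simply the image's parent folder (the folder layout and group names here are assumptions for illustration, not a Pix4D requirement):

```python
from pathlib import Path

# Hypothetical layout: one subfolder per sensor, named after its group.
#   project/images/RGB/IMG_0001.TIF
#   project/images/NIR/IMG_0001.TIF
def group_images(root: str) -> dict[str, list[str]]:
    """Map each group name (parent folder) to its image file names."""
    groups: dict[str, list[str]] = {}
    for tif in sorted(Path(root).glob("*/*.TIF")):
        groups.setdefault(tif.parent.name, []).append(tif.name)
    return groups
```

Matching the folder names to the group names you assign in the import screen ("RGB", "NIR") keeps the mapping between raw imagery and Pix4D groups unambiguous.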

Eventually, you will reach a window with different processing templates. These templates are pre-configured with different processing options and provide different outputs. While they are a good starting point, you may find that you are missing certain outputs, or that the Step 1 presets result in poor image calibration and keypoint matching. For this tutorial, I processed using the Ag Modified Camera template under both low- and high-resolution options.

Step 1: Initial Processing

This step is your quality check, so a rapid processing run should be conducted in the field before leaving the project site. The key feature of the summary is the five quality-check parameters, each of which should show a green checkmark. Yellow caution symbols may be acceptable depending on the category; for example, Georeferencing will remain yellow if no ground control points (GCPs) are added. Low image calibration may be due to the inclusion of poor-quality but unimportant images, such as those captured during takeoff or landing, but in most cases it is a symptom of poor image quality. In the example below, image calibration is low in the NIR flight due to high winds, which mostly affected the corn trial on the east side.

The default settings of both the rapid and full processing templates are usually sufficient for this step. The image scale largely determines processing speed, with the CPU and GPU drawing significant resources. In cases of poor image quality such as this dataset, changing the matching method to free flight or terrestrial (FFoT) can improve image calibration. The included quality report indicates 640 calibrated images under FFoT, compared to 635 and 613 for rapid and full processing, respectively. Though only marginally more images were calibrated using FFoT, those that were calibrated had many more key points than under the quarter image scale. The improvement comes at the cost of processing time, requiring nearly 8 hours! Given the poor image quality, the slight gain in image calibration still does not offset the need for a re-fly, but this is an excellent example of how to improve your processing if needed. The tables below contain the processing time for each step under the rapid, half, and full image scales, as well as FFoT.

Rapid Processing

Half Scale Processing

Full-Scale Processing

Free Flight or Terrestrial Full Scale