In Precision Agriculture applications, multiple camera and UAV system parameters must be determined and implemented in order to achieve the desired mission objectives. In the paragraphs below we describe several important system parameters and discuss how these parameters may be determined based on the stated mission objectives.
Rolling Shutter versus Global Shutter Cameras
One of the first decisions that must be made for a given application is the choice between a rolling shutter and a global shutter camera. A rolling shutter is a method of image capture in which a still picture (in a still camera) or each frame of a video (in a video camera) is captured not by taking a snapshot of the entire scene at a single instant in time, but by scanning across the scene rapidly, either vertically or horizontally. In other words, not all parts of the image are recorded at exactly the same instant. Rolling shutters are used in both mechanical and electronic cameras. Because the scene is not captured at a single instant in time, rolling shutter cameras can produce undesirable image artifacts such as wobble, skew, smear, and partial exposure. These artifacts increase in severity with increased vehicle speed and angular rate.
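The size of the skew artifact can be estimated from the sensor readout time and the platform motion. The sketch below is a rough first-order model; the readout time, UAV speed, and GSD values are illustrative assumptions, not figures from any particular camera.

```python
# Rough estimate of rolling-shutter skew: while the sensor reads out row by
# row, the platform keeps moving, so the bottom of the frame is captured
# from a different position than the top.  All values are assumptions.

readout_time_s = 0.020   # assumed full-frame rolling-shutter readout (20 ms)
uav_speed_mps  = 15.0    # assumed UAV ground speed (m/s)
gsd_m          = 0.05    # assumed ground sampling distance (5 cm/pixel)

# Ground distance traveled during readout, and the resulting skew in pixels.
ground_shift_m = uav_speed_mps * readout_time_s
skew_pixels    = ground_shift_m / gsd_m

print(f"Shift during readout: {ground_shift_m:.2f} m "
      f"({skew_pixels:.0f} pixels of skew)")
```

With these assumed numbers the scene shifts 0.30 m (6 pixels) between the first and last rows of the frame, which is why the artifact worsens with vehicle speed.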
Global shutters take a snapshot representing a single instant in time and therefore do not suffer from the motion artifacts caused by rolling shutters. Essentially, a global shutter sensor makes a very active, fast-moving scene appear frozen in time without motion blur. This is why global shutter imagers are usually preferred over rolling shutter imagers in applications involving motion, such as fast-moving UAVs or moving objects. The EZ Health system uses a global shutter camera.
Motion blur is caused by camera motion during the image exposure time. The footprint on the ground of a single pixel in the camera focal plane array is known as the ground sampling distance (GSD). If the camera moves a distance greater than 1 GSD during the exposure time, then the patch of ground that an individual pixel sees at the beginning of the exposure interval differs from the patch seen at the end of the interval, resulting in blurred imagery.
The frame rate is the inverse of the time increment between consecutive images taken by a camera. It is a very important factor in camera selection because it determines which missions can be flown while still providing adequate image overlap for mosaicking. For example, if your camera can only take images once every 5 seconds, then you cannot fly a mission below 400 feet and still maintain the 70 percent overlap required for robust stitching; this in turn limits the resolution of the images you can acquire. If, however, you can acquire images every second, you can fly faster and lower while maintaining higher overlap. This enables a variety of missions with higher resolution and higher coverage rates while still obtaining good image overlap for stitching.
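The overlap constraint on frame rate can be computed directly: successive frames may advance at most (1 − overlap) of one along-track image footprint. The pixel count, GSD, and speed below are illustrative assumptions.

```python
# Minimum frame rate needed to keep a given forward overlap between
# consecutive images.  The along-track footprint of one image is
# N_pixels * GSD; with overlap fraction O, successive frames may advance
# at most (1 - O) of that footprint.  All values are assumptions.

n_pixels_along_track = 960    # assumed pixels in the along-track direction
gsd_m                = 0.05   # assumed GSD (m/pixel)
overlap              = 0.70   # 70 percent forward overlap
uav_speed_mps        = 15.0   # assumed ground speed (m/s)

footprint_m   = n_pixels_along_track * gsd_m       # 48 m on the ground
max_advance_m = footprint_m * (1.0 - overlap)      # 14.4 m between frames
min_frame_rate_hz = uav_speed_mps / max_advance_m  # about 1 frame/s

print(f"Need at least {min_frame_rate_hz:.2f} frames per second")
```

Halving the GSD (flying lower) halves the footprint and doubles the required frame rate, which is the trade-off described above.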
Camera Field of View (FOV)
Another decision that must be made is the camera FOV. In agricultural applications, crops are often planted in rows with known spacing between the rows. It is often desirable to not only be able to see the crops in the images but also to be able to see the soil between the rows. This enhances the ability to detect and classify invasive species and weeds in the imagery which would otherwise be shadowed by the crop.
Early in the growing season, wider fields of view can be used (limited by lens constraints such as lens distortion).
Later in the growing season, however, when the crop height increases, narrower fields of view are needed to prevent the crop from shadowing the ground between rows.
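A simple occlusion model (an assumption for illustration, not taken from the text) shows how crop height limits the usable FOV: looking off-nadir at angle θ, a canopy of height h hides a strip of soil roughly h·tan(θ) wide behind it, so the soil gap g between rows stays fully visible only while θ ≤ atan(g/h).

```python
import math

# Assumed simple occlusion model: inter-row soil at off-nadir angle theta
# is hidden once h * tan(theta) exceeds the visible soil gap g, giving a
# maximum usable half-FOV of atan(g / h).  Values are late-season
# assumptions for illustration.

gap_m         = 0.20   # assumed visible soil gap between canopies (m)
crop_height_m = 1.00   # assumed late-season crop height (m)

half_fov_deg = math.degrees(math.atan(gap_m / crop_height_m))
print(f"Max full FOV before inter-row soil is occluded: "
      f"{2 * half_fov_deg:.1f} deg")
```

With a 1 m crop and a 20 cm gap this gives a full FOV of only about 23 degrees, consistent with the need for narrower fields of view late in the season.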
Ground Sampling Distance (GSD)
The ground sampling distances in the x and y directions, GSDx and GSDy, give the dimensions of the footprint, on the ground, of a pixel in the camera focal plane array. The choice of ground sampling distance is based on the size of the smallest object that the user wishes to image. One way of thinking about the required sampling distance is to envision a mesh of grid cells draped over the smallest object of interest in the image, with each cell in the mesh having dimensions GSDx by GSDy. Assuming that each mesh cell lying on top of the smallest object we wish to image can be assigned a single color, how many cells do we need on the object to detect, recognize, and form an acceptable image of it? A rule of thumb that has been used successfully is to require a minimum of 10-20 mesh cells on the smallest object that the user wishes to detect, recognize, and image. By covering the smallest object we wish to resolve with 10-20 cells, we can estimate the required GSDx and GSDy.
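The rule of thumb translates directly into a GSD requirement. For a roughly square object of side d, a grid of cell size GSD puts about (d/GSD)² cells on the object, so GSD ≈ d/√(n_cells). The 10 cm object size below is an illustrative assumption.

```python
import math

# Estimate the required GSD from the 10-20 mesh-cell rule of thumb.
# For a roughly square object of side d, the mesh places about
# (d / GSD)**2 cells on it, so GSD <= d / sqrt(n_cells).
# The object size is an illustrative assumption.

smallest_object_m = 0.10   # assumed smallest object of interest: a 10 cm weed

for n_cells in (10, 20):
    gsd_m = smallest_object_m / math.sqrt(n_cells)
    print(f"{n_cells} cells -> GSD <= {gsd_m * 100:.1f} cm")
```

So a 10 cm weed calls for a GSD of roughly 2-3 cm under this rule.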
UAV Height above the Ground (H)
The height that the UAV is flown above the ground depends on the mission objectives. For example, if the user wishes to achieve maximum crop coverage rates, the altitude (H) can be set to 400 ft. (i.e. the maximum altitude that a small unmanned airborne system is permitted to fly in the United States). Alternatively, if high resolution imaging is of paramount importance with little regard for coverage rate, then the altitude can be set to the lowest practical value (for example, the altitude at which fixed focal length lenses begin to defocus); in this case the altitude may be set at 10-20 feet. From these examples we see that the choice of UAV altitude is strongly dependent on the application and mission objectives.
In practice, the optics for a camera will be selected to achieve a specific field of view (FOV). Thus, it can be helpful to use the following equation to estimate GSD based on sensor resolution and field of view. In this equation, FOV should be set to the horizontal field of view for the camera and N should be the number of pixels in one row of the focal plane array (one can alternatively use vertical field of view and the number of pixels in a single column).
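The equation itself is not reproduced in this text. The standard pinhole-camera relationship consistent with the definitions above (and with the GSD table below), where H is the height above ground, FOV the horizontal field of view, and N the number of pixels across one row of the array, would be:

```latex
\mathrm{GSD} \;=\; \frac{2\,H\,\tan\!\left(\mathrm{FOV}/2\right)}{N}
```

That is, the ground swath 2H·tan(FOV/2) divided evenly among the N pixels that span it.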
During the camera exposure time, the patch of ground seen by an individual pixel changes due to camera motion. This causes the image to smear (sometimes called motion blur). To reduce smear, the distance the camera moves during the exposure time should be small relative to the GSD.
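This constraint gives a simple upper bound on exposure time: the camera must not travel more than one GSD while the shutter is open, i.e. t_exp ≤ GSD / V. The speed and GSD values below are illustrative assumptions.

```python
# Maximum exposure time that keeps motion smear under one GSD:
# the camera must not travel more than one ground-pixel footprint
# while the shutter is open, so t_exp <= GSD / V.
# Speed and GSD are illustrative assumptions.

gsd_m         = 0.05   # assumed GSD (5 cm/pixel)
uav_speed_mps = 15.0   # assumed ground speed (m/s)

max_exposure_s = gsd_m / uav_speed_mps
print(f"Keep exposure under {max_exposure_s * 1000:.2f} ms")
```

At 15 m/s and a 5 cm GSD the exposure must stay under about 3.3 ms, well within reach of most machine-vision cameras in daylight.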
As the field of view is increased, for example from 30 degrees to 60 degrees, the GSD increases. Depending on the mission objectives, there are benefits and drawbacks to increasing the FOV. A wider FOV may make it easier for some platforms (e.g. fixed wings with larger turn radii) to hold the flight pattern, ensuring adequate image overlap for robust stitching. The drawback is a decrease in image resolution. The main advantage for the precision agriculture industry is the increased coverage rate per mission, as discussed in more detail below.
| | GSD @ 400 feet | GSD @ 200 feet |
| --- | --- | --- |
| 30 degree FOV | 5.1 cm | 2.55 cm |
| 60 degree FOV | 11 cm | 5.5 cm |
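The tabulated values can be reproduced from the relationship GSD = 2H·tan(FOV/2)/N. The pixel count N = 1280 is an assumption chosen because it reproduces the table; the text does not state the sensor resolution.

```python
import math

# Reproduce the GSD table from GSD = 2 * H * tan(FOV/2) / N.
# N = 1280 pixels across the array is an assumption (it reproduces the
# tabulated values); it is not stated in the text.

N = 1280
FT_TO_M = 0.3048

for fov_deg in (30, 60):
    for h_ft in (400, 200):
        h_m = h_ft * FT_TO_M
        gsd_cm = 100 * 2 * h_m * math.tan(math.radians(fov_deg / 2)) / N
        print(f"FOV {fov_deg:2d} deg @ {h_ft} ft -> GSD {gsd_cm:.2f} cm")
```

The computed values (5.10, 2.55, 11.00, and 5.50 cm) match the table, and the halving of GSD from 400 ft to 200 ft follows directly from the linear dependence on H.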
Crop Coverage Rate (Cr)
Once the camera ground sampling distance (GSDx), camera pixel density (Nx), UAV velocity (V), and row-to-row overlap (Or) needed to reliably stitch the images into a mosaic have been determined, based on the considerations above, the crop coverage rate (Cr) may be computed using equation (4) below:
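Equation (4) is not reproduced in this text. A plausible form, treating Nx·GSDx as the cross-track swath width and discounting the row-to-row overlap, would be Cr = V·Nx·GSDx·(1 − Or); the sketch below uses that assumed form with illustrative parameter values.

```python
# Sketch of the coverage-rate computation under the assumed form
#     Cr = V * Nx * GSDx * (1 - Or)
# where Nx * GSDx is the cross-track swath width and Or is the
# row-to-row (sidelap) fraction.  All numeric values are assumptions.

uav_speed_mps = 15.0    # V:    assumed ground speed (m/s)
n_pixels_x    = 1280    # Nx:   assumed cross-track pixel count
gsd_x_m       = 0.05    # GSDx: assumed GSD (m/pixel)
overlap_rows  = 0.30    # Or:   assumed row-to-row overlap fraction

cr_m2_per_s  = uav_speed_mps * n_pixels_x * gsd_x_m * (1.0 - overlap_rows)
cr_ha_per_hr = cr_m2_per_s * 3600 / 10_000
print(f"Coverage rate: {cr_ha_per_hr:.0f} ha/hour")
```

With these assumptions the effective new swath per pass is 44.8 m, giving roughly 242 hectares per flight hour; note how the rate scales linearly with both speed and GSDx, which drives the resolution-versus-coverage trade-off discussed next.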
The required crop coverage rate is strongly dependent on the agricultural application. Some users wish to fly many acres per hour (for example, over wheat or corn) and estimate crop health; to these users coverage rate is a higher priority than resolution, and they will accept lower resolution (i.e. larger GSDx) if it achieves higher coverage rates. Other users wish to rate individual plant health in test plots; for them high resolution is a higher priority than coverage rate, and they will sacrifice coverage rate to achieve higher resolution (i.e. smaller GSDx).