
Off-Road Object Detection and the Sensors that Make this Possible

At Trimble, our products are known for their accuracy, reliability and efficiency in the environments that our customers care about. Our autonomy products are no exception. They need to operate at the high standard of performance and safety that customers have come to expect from our brand. 

Perception is a fundamental capability that autonomous machines need in order to function in environments where they may encounter objects to interact with or avoid. Perception functions include object detection, mapping and localization. For example, a piece of farming equipment may need to drive in formation with a harvester or lorry, and an autonomous compactor may need to avoid pylons and workers at a construction site. Perception functions rely on sensors to obtain real-time information about the environment around a vehicle. This blog post focuses on sensors in the context of object detection; however, the strengths and weaknesses of each sensor apply to their use for other perception functions as well.

Object detection makes machines aware of the objects around them so that they can make intelligent decisions about how to proceed with their current work order. Object detection systems are tasked with extracting three pieces of information: object location, object type and object extent. Figure 1 shows an example of 3D bounding boxes surrounding objects detected in a Light Detection and Ranging (LiDAR) scan. At Trimble, we use cameras, radio detection and ranging (radar) and LiDAR to extract this information.
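
These three outputs are commonly packaged together as a 3D bounding box. As a rough illustration (a hypothetical sketch, not Trimble's actual data model), a minimal representation might look like this:

```python
# Hypothetical sketch: the three outputs of object detection
# (location, type, extent) packaged as a 3D bounding box.
from dataclasses import dataclass


@dataclass
class BoundingBox3D:
    # Object location: centre of the box in vehicle coordinates (metres).
    x: float
    y: float
    z: float
    # Object extent: size of the box along each axis (metres).
    length: float
    width: float
    height: float
    # Object type, e.g. "person", "pylon", "harvester".
    label: str

    def volume(self) -> float:
        """Volume of the box in cubic metres."""
        return self.length * self.width * self.height


# A crouched person detected 12 m ahead and 3.5 m to the right:
box = BoundingBox3D(x=12.0, y=-3.5, z=0.9,
                    length=0.6, width=0.6, height=1.8,
                    label="person")
print(box.label, box.volume())
```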

Cameras provide a highly detailed image that allows us to extract this information. However, for objects that are far away, it can be challenging to estimate an object's metric scale and distance. In other words, we can figure out where an object is and how large it appears in an image, but it is difficult to figure out how big the object is in 3D. Cameras are also affected by lighting conditions and by heavy obscurants such as dust on a construction site.
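
This scale-distance ambiguity follows directly from the pinhole camera model: apparent size in pixels depends on the ratio of metric size to distance, so different objects can project to the same image size. A toy illustration (assuming a hypothetical focal length of 1000 pixels):

```python
# Pinhole camera sketch: an object's height in pixels depends only on the
# ratio of its metric height to its distance, so the image alone cannot
# separate "small and near" from "large and far".
FOCAL_PX = 1000.0  # assumed focal length in pixels (hypothetical value)


def apparent_height_px(height_m: float, distance_m: float) -> float:
    """Projected height in pixels under the pinhole model."""
    return FOCAL_PX * height_m / distance_m


# A 1.8 m person at 20 m and a 0.9 m pylon at 10 m look the same size:
person_px = apparent_height_px(1.8, 20.0)
pylon_px = apparent_height_px(0.9, 10.0)
print(person_px, pylon_px)  # both project to 90 pixels
```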

Radar sensors transmit and receive radio waves to obtain 3D position and radial velocity measurements from objects in the environment. Radar sensors deployed on autonomous systems are typically frequency-modulated continuous-wave (FMCW) radars. The use of radio waves makes radar robust to obscurants and to varying weather and lighting conditions. However, radar sensors do not provide the same level of detail as a camera image. This can make it difficult to tell what kind of object has been detected, even though we can tell that an object is there. For some applications, like the autonomous compactor shown in Figure 2 where the workspace of the compactor is typically clear, being able to robustly detect objects - even if we can't tell what they are without input from other sensors - is a great asset.
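
The radial velocity measurement comes from the Doppler shift of the reflected wave. As a minimal sketch (assuming a 77 GHz carrier, a common choice for automotive-style radar, though the specific frequency here is an assumption):

```python
# Doppler sketch: recovering radial velocity from the frequency shift of a
# radar return. The relation f_d = 2 * v_r * f_c / c gives
# v_r = f_d * c / (2 * f_c).
C = 299_792_458.0    # speed of light in m/s
F_CARRIER = 77e9     # assumed 77 GHz carrier frequency (hypothetical)


def radial_velocity_mps(doppler_shift_hz: float) -> float:
    """Radial velocity of a target from the measured Doppler shift."""
    return doppler_shift_hz * C / (2 * F_CARRIER)


# A Doppler shift of about 1540 Hz corresponds to roughly 3 m/s:
v = radial_velocity_mps(1540.0)
print(round(v, 2))
```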

Figure 2: Autonomous compactor stopping for crouched person at bauma in Munich.

LiDAR sits between the two. It provides highly accurate measurements of the scene surrounding the autonomous vehicle in the form of a point cloud. While not as detailed as a camera image, a LiDAR scan measures distances directly and provides much more detail than radar. Because of this, LiDAR is sufficient to estimate all three properties of most objects at short and medium distances.
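
Because LiDAR points carry metric 3D coordinates, object location and extent can be read off a point cluster directly. A toy illustration (a hypothetical cluster, with a simple axis-aligned box rather than the oriented boxes a production detector would fit):

```python
# Sketch: estimating location and extent from a (toy) LiDAR point cluster
# by fitting an axis-aligned bounding box. Coordinates are in metres.
points = [
    (10.2, 1.1, 0.0),
    (10.4, 1.3, 0.4),
    (10.3, 1.2, 1.6),
    (10.5, 1.0, 0.9),
]


def axis_aligned_box(pts):
    """Return (centre, extent) of the smallest axis-aligned box around pts."""
    xs, ys, zs = zip(*pts)
    centre = ((min(xs) + max(xs)) / 2,
              (min(ys) + max(ys)) / 2,
              (min(zs) + max(zs)) / 2)
    extent = (max(xs) - min(xs),
              max(ys) - min(ys),
              max(zs) - min(zs))
    return centre, extent


centre, extent = axis_aligned_box(points)
print(centre, extent)  # a ~1.6 m tall cluster about 10 m ahead
```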

Each sensor has different strengths and weaknesses. At Trimble, we benefit from the complementary strengths of all three sensors and utilize the best combination of tools depending on the particular needs of the customer's application. This enables us to support customers with a wide variety of machine types, implements, operational domains and more.
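
One simple way to combine complementary sensors is late fusion: run detection on each sensor independently and then associate the results. The sketch below is a deliberately simplified, hypothetical example (the field names and the bearing-based matching are assumptions, not a description of Trimble's pipeline) pairing radar detections, which carry good range and velocity, with camera detections, which carry good labels:

```python
# Toy late-fusion sketch: attach a camera label to each radar detection
# whose bearing is close enough, otherwise mark the object "unknown".
camera_dets = [{"bearing_deg": 5.0, "label": "person"}]
radar_dets = [{"bearing_deg": 4.6, "range_m": 18.2, "radial_vel_mps": -0.4}]


def fuse(camera_dets, radar_dets, max_sep_deg=2.0):
    """Pair each radar detection with the nearest camera detection by bearing."""
    fused = []
    for r in radar_dets:
        match = min(camera_dets,
                    key=lambda c: abs(c["bearing_deg"] - r["bearing_deg"]),
                    default=None)
        if match and abs(match["bearing_deg"] - r["bearing_deg"]) <= max_sep_deg:
            # Keep radar's range/velocity, borrow the camera's label.
            fused.append({**r, "label": match["label"]})
        else:
            # Radar still reports the object even without a camera label.
            fused.append({**r, "label": "unknown"})
    return fused


result = fuse(camera_dets, radar_dets)
print(result)
```

Note that even an unmatched radar detection is kept: as described above, robustly knowing *that* something is there matters even when we cannot yet say *what* it is.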

The final piece of the puzzle is data. Modern approaches for object detection rely on data to “learn” what different objects look like given the information available from each sensor. Since Trimble products are targeted at a wider range of applications than a typical self-driving car, we are committed to gathering data to both train our algorithms to perform well in these environments and validate that they achieve the required level of accuracy and safety to change the way the world works.