The time of flight sensor (ToF) has a peculiar name. It doesn’t necessarily mean it will calculate the time a flying object is in the air, nor does it measure the precise time an object takes off from the ground. Before understanding what a ToF sensor does, it’s essential to understand what ToF actually is. ToF measures the time it takes for an object, particle, or wave (such as light) to travel a given distance through a medium. Typically, this measurement is used to determine velocity or path length, but it can also reveal an object’s dimensions.
A Time of Flight sensor can put the information gained from ToF principles to work in applications such as robot movement, human-machine interfaces – like the second-generation Kinect sensor for the Xbox One – smartphone cameras, machine vision, and even Earth topography. While these uses differ, they all rely on the same underlying depth information. Now that we’ve established what a ToF sensor can do, it’s equally important to determine what one consists of, how it generates that information, and, finally, what specific purposes this information can serve in the world of robotics.
From a general perspective, a ToF sensor isn’t a device that requires decades of research to understand. It consists of a few parts, but none of them are particularly obscure or hard to piece together.
The first part is the lens, which, given that a ToF sensor is essentially a camera, is pretty easy to understand. The lens, like any other camera’s, gathers reflected light, since the sensor cannot produce light by itself nor acquire a depth signal from ambient light. According to a study by Subhash Chandra Sadhu for Texas Instruments, ToF cameras have “special requirements to be met with while selecting or designing the lenses.” While the rest of the study goes on to explain the specifics (which he explains very well), it’s important to understand these limitations if you want to fabricate your own ToF sensors in the future.
Also included in the ToF camera package is the integrated light source that keeps the scene, well, lit. Considering that all the light must come from the sensor itself, it’s equally important to make sure that no outside sources of light – like sunlight – disrupt the image intake.
Then there’s the image sensor, the centerpiece of the ToF camera. The sensor does the heavy lifting, storing all captured image information, including the time it takes for the light to travel from the integrated light source to the object and back.
Finally, there’s the interface, which shows the data captured. It’s the less showy aspect of the ToF, but, hey, it’s still essential!
The Time of Flight sensor is able to capture depth information for every pixel in the image it captures. It is mainly used for machine vision applications, and its advantages include compact construction, relative ease of use, accuracy of approximately 1 cm, and high frame rates.
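To get a feel for what roughly 1 cm accuracy demands of the electronics, here’s a quick back-of-the-envelope calculation (a sketch of the general physics, not tied to any particular sensor): since the light covers the emitter-to-object distance twice, a depth error maps to a round-trip timing error.

```python
# Round-trip timing resolution implied by a given depth accuracy.
# Light travels to the object and back, so a depth error of d
# corresponds to a timing error of t = 2 * d / c.

C = 299_792_458.0  # speed of light in m/s

def timing_resolution(depth_accuracy_m: float) -> float:
    """Round-trip timing precision (seconds) needed for a given depth accuracy."""
    return 2.0 * depth_accuracy_m / C

t = timing_resolution(0.01)  # 1 cm depth accuracy
print(f"{t * 1e12:.1f} ps")  # roughly 66.7 picoseconds
```

In other words, centimeter-level depth requires the sensor to resolve round-trip times on the order of tens of picoseconds, which is why the timing circuitry does so much of the heavy lifting.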
There are two principal ways in which a ToF sensor can determine distance and depth.
The first is a ToF sensor based on pulsed light sources. This form measures the time it takes for a light pulse to travel from the emitter to the scene and back. Once the round-trip time has been measured, through the magic of mathematics and algorithms, the distance and depth of all the objects captured by the sensor are calculated and determined.
At Seeed Studio, they mocked up a graphic that simply yet accurately depicts how the process works.
Easy enough, right?
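The math behind the pulsed approach really is that simple: the pulse covers the distance twice, so the distance is half the round-trip time multiplied by the speed of light. A minimal sketch (the function name and example timing are illustrative, not from any sensor API):

```python
C = 299_792_458.0  # speed of light in m/s

def pulsed_tof_distance(round_trip_s: float) -> float:
    """Distance to an object given the measured round-trip time of a light pulse."""
    # The pulse travels out and back, so divide the total path by two.
    return C * round_trip_s / 2.0

# A pulse returning after 20 nanoseconds puts the object about 3 m away.
print(pulsed_tof_distance(20e-9))
```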
The second is a ToF sensor based on continuous waves, which detects the phase shift of reflected light. Amplitude modulation creates a light source in a sinusoidal form with a known frequency. The detector then determines the phase shift (the amount by which the reflected sine wave is shifted left or right relative to the emitted one) of the reflected light.
From this phase shift, a bit more math recovers the distance and depth of all the objects captured by the sensor.
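The continuous-wave math is only slightly more involved: the distance follows from the phase shift as d = (c / (4·π·f)) · Δφ, where f is the modulation frequency. A sketch assuming a 20 MHz modulation frequency (a plausible value chosen purely for illustration); note that the result wraps around every c / (2f) metres, the sensor’s unambiguous range:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def cw_tof_distance(phase_shift_rad: float, mod_freq_hz: float) -> float:
    """Distance from the phase shift of amplitude-modulated light.

    The reflected wave lags the emitted one by phase_shift_rad; distances
    repeat every C / (2 * mod_freq_hz), the unambiguous range.
    """
    return (C / (4.0 * math.pi * mod_freq_hz)) * phase_shift_rad

f = 20e6  # 20 MHz modulation frequency (illustrative assumption)
print(C / (2 * f))                  # unambiguous range, about 7.5 m
print(cw_tof_distance(math.pi, f))  # a half-cycle shift, about 3.75 m
```

The wrap-around is the main trade-off of this approach: a higher modulation frequency improves precision but shrinks the range over which distances are unambiguous.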
Like any set of technological tools, there are upsides and downsides.
Some of the clear advantages of using ToF sensors for 3D measurement show up in practice: they are highly useful in numerous applications including logistics, factory automation, maritime navigation, and autonomous robotics and vehicles.
For logistics, ToF sensors can help guide robotic arms for packaging assistance, box filling, stacking, volume scanning, and labelling. A pick-and-place case study conducted by Lucid at Pensur, an engineering company, looked into how Lucid’s 3D vision systems allowed for a far more efficient process and freed up valuable time for employees who were stuck doing the menial job day in and day out.
In the context of factory automation, ToF sensors can guide robots to find and pick up objects and place them where they need to be. Think of a car assembly line: nothing changes from car to car, but the ToF sensors point out where everything is and where everything needs to go.
ToF sensors can also support maritime navigation by feeding AI-based object recognition. This increases safety while sailing by detecting objects that may conflict with the ship’s path, such as fishing boats, buoys, and debris, which can’t be detected by the ship’s radar alone.
IDenTV showed a brief example on their YouTube page of how these cameras can work and how quickly they can detect objects even at far distances.
Finally, for autonomous robots, a ToF sensor can help a robot plan and execute a task all on its own. Whether those processes consist of sanding, powder coating, or batch painting, the ToF sensor can help the robot understand each object’s specific dimensions and, with the help of the right software, execute each necessary task by knowing where to start and stop.
ToF sensors are at the core of AutonomyOS™. They are the key to the first step: 3D perception, helping autonomous robots figure out what they need to do in real-time.