
What’s the Reality of Robot Vision?

Robot vision, or vision-guided robotics, is a novel means by which many manufacturers avoid the jigging and positioning constraints that come with standard industrial robots.

These constraints exist because a robot must typically follow a bounded, precise program in order to conduct a repeatable process with a reliable output. Think of welding a car chassis together or assembling components on a mass assembly line – these are precise, highly repeatable operations that can justify significant setup costs while still delivering a return on investment.

In many scenarios, however, the variety of parts, or their size and volume, simply makes such precise jigging impractical. Robot vision offers an alternative way to fulfill what are still highly repetitive processes. With specialized technology layered on top of vision capabilities, robot vision can also open the door to “self-programming” solutions that don’t just overcome existing jigging constraints, but actually allow robots to respond in real process time to never-before-seen parts and positions – all with the added value of no manual programming.

Where Robots Are Used Today

Robots have commonly been used by automotive manufacturers and other mass-market firms. It’s estimated that nearly 40% of North American manufacturing robots are used in the automotive sector. These robots assemble vehicles and apply production processes to their parts or the final product.

This works for this type of manufacturer because cars 1) are expensive to buy, 2) are costly to make, 3) have large batch sizes (many car models sell tens or even hundreds of thousands of units per year), and 4) only see major design changes every 5 to 7 years.

Pick and place scenarios – a core use of new robot vision applications – are slowly changing the makeup of robots in manufacturing firms, but their benefits still generally skew toward high-volume applications with a limited product mix. Source: St. Louis Fed.

While industries like automotive and electronics have pushed robot operations forward, robots themselves have not become as responsive in other forms of operations. Robot or machine vision has tried to change that by giving robots the sensing capabilities to respond to their environment in real time, and to identify key objects and manipulate them in space.

As it stands today, most of these operations have been fairly repetitive – think of palletizing, pick and place, or automated operations where a robot must identify and assemble or conjoin small pieces and components in space. These applications still function within fixed, pre-programmed constraints. With new uses of artificial intelligence, however, industrial robots using machine vision have the ability to become as responsive as skilled human workers in industrial environments, benefitting high-mix manufacturers most of all.

How Robot Vision Has Been Tried So Far

Robot vision has been attempted with static images, radar, lidar, and other sensing approaches, while advances in computer vision have created further opportunities to develop more autonomous robots.

The most recognizable forms of robot vision that we see in the media today are self-driving cars, autonomous mobile robots, and “picker” robots. Self-driving cars, while still in development and slowly building a strong safety case, essentially function autonomously on roads and highways.

Autonomous driving effectively uses a combination of maps, GPS, and situational awareness to evaluate where the vehicle is going and what impediments are in the way, while also navigating traffic, lights, stop signs, pedestrians, speed limits, and objects in the road in order to reach its destination. This technology is already available in Tesla vehicles today, and from other brands in the form of lane assistance and other safety features that augment a driver’s own capabilities.

As for autonomous mobile robots, they are mostly used in warehouses and sometimes in package delivery or remote monitoring scenarios (for instance, drones). In these situations, AI is also used to process general visual information and identify specific objectives. DoorDash has already started deploying small delivery robots to carry food to its destination.

In the example above, machine vision is used to identify the position of small parts in space in order to conduct a machine-assisted robotic assembly process. Manual programming is required to set the range of assembly within what the robot can “see”, but ultimately the system doesn’t adapt to different part types or process requirements as the process is taking place. Source: Kinemetrix.

Various warehouse robots have also proven capable in materials handling and even packing, while “picker” robots based on technology like Covariant’s can distinguish between objects on a conveyor and package or sort them, whether for wholesale distribution or retail shipment. This takes robot vision a step further, because the robot must distinguish between different types of parts and sort them correctly – a big step forward, but still limited to warehouse and materials-handling applications, not value-added or craft-based processes.

For value-added processes, robot vision has been used in a variety of ways, but usually as part of still-programmatic machine frameworks that don’t function in real time with respect to programming or part mixes. At the same time, such systems are needed more than ever. Between social distancing, the baby boomer retirement crunch, and a shortage of skilled labor among younger workers, firms don’t have enough automation options ready to maintain production output without driving up costs – for themselves, their customers, and consumers.

Why Robot Vision Enables Self-Programming Breakthroughs

Combining robot vision with effective and process-specialized AI is the last step to giving factories and facilities real autonomy in the workplace. 

At Omnirobotic, we’ve created infrared sensor systems that allow robots to visualize and interpret shapes as they are placed in front of them. The system has sufficient depth perception and field of view to generate a digitized picture of the various parts, shapes, and positions in a manufacturing environment, to a similar degree that a skilled worker might hold one “in their own head.” Using AI, our system can then generate unique robot motions in real process time that both cut short the traditional programming process and allow robots to function autonomously regardless of part variety or many common manufacturer requirements.
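To make the perceive-then-plan idea concrete, here is a minimal, purely illustrative Python sketch of the general pipeline: a depth image is back-projected into 3D points, a crude part position is estimated, and a tool waypoint is generated at a standoff distance. All function names, the pinhole parameters, and the centroid-based “pose” are simplifying assumptions for illustration – they are not Omnirobotic’s actual (proprietary) algorithms.

```python
def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into camera-frame 3D points
    using a simple pinhole camera model. Zero depth = no return, skipped."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z > 0:
                points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
    return points

def estimate_part_position(points):
    """Crude stand-in for pose estimation: the centroid of the point cloud."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def plan_tool_waypoint(part_pos, standoff=0.25):
    """Place the tool a fixed standoff distance in front of the part,
    approaching along the camera's -Z axis."""
    x, y, z = part_pos
    return (x, y, z - standoff)

# A toy 3x3 depth image with a part roughly 0.5 m from the sensor.
depth = [[0.0, 0.5, 0.0],
         [0.5, 0.5, 0.5],
         [0.0, 0.5, 0.0]]
part = estimate_part_position(depth_to_points(depth, fx=1.0, fy=1.0, cx=1.0, cy=1.0))
waypoint = plan_tool_waypoint(part, standoff=0.25)
```

A real system would replace the centroid with full 6-DOF pose estimation and the single waypoint with a collision-checked motion plan, but the division into sensing, interpretation, and motion generation is the same.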

This technology takes a variety of process limitations into account. For instance, with a spray process: do you need a certain tool type? Is there a standoff distance needed between the tool and the part? Do only certain faces of the part need painting?

Once these requirements are specified, you get the benefit of machine vision used directly in the process: identifying parts, interpreting their orientation, and generating a unique robot motion to match. These requirements ultimately necessitate a clear “division of labor” within the AI that processes them. By separating part, process, and technique parameters, machines can finally interpret all the information a robot needs to achieve near real-time programming through robot vision.
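One way to picture that “division of labor” is as a structured specification the planner consumes. The sketch below is a hypothetical illustration only – the field names and `plan_passes` helper are invented for this post and do not reflect Omnirobotic’s real configuration format – but it shows how tool type, standoff distance, and face selection can be separated from part detection:

```python
from dataclasses import dataclass

@dataclass
class ProcessSpec:
    """Process-level constraints, kept separate from part detection."""
    tool_type: str        # e.g. a particular spray gun
    standoff_m: float     # required tool-to-part distance, in metres
    faces_to_coat: tuple  # faces the process applies to; others are skipped

def plan_passes(detected_faces, spec):
    """Keep only the faces the process spec says to coat, pairing each
    with the standoff distance the tool must maintain."""
    return [(face, spec.standoff_m)
            for face in detected_faces
            if face in spec.faces_to_coat]

spec = ProcessSpec(tool_type="spray_gun", standoff_m=0.2,
                   faces_to_coat=("front", "top"))
passes = plan_passes(["front", "back", "top"], spec)  # vision supplies the faces
```

The point of the separation is that the part parameters (what the camera sees) can change with every piece on the line, while the process and technique parameters stay fixed per job.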

How You Can Use Robot Vision to Transform your Productivity

So, what’s the difference between a robot that does one job compared to a robot that can process nearly any part you throw at it?

Well, basically, robot vision is a good start, but the robot also needs to be able to think for itself about how to execute an operation.

Of course, there are still specific instructions required and certain limitations to overcome, but the point is, industrial robots are closer to “set it and forget it” than they’ve ever been before.

What’s more, this gives your team the flexibility to focus on materials handling and other skilled or even creative tasks, and to achieve higher quality and productivity than ever before, while overcoming the barriers high-mix manufacturers have faced in recent years in finding skilled labor.

You might call it an “absolute win”, but we just like to call it AutonomyOS™. Reach out to us today to learn more.

With AutonomyOS™ and AutonomyStudio™, it’s never been easier to deploy an autonomous robotic system. Using 3D Perception with AI-based Task Planning and Motion Planning, manufacturing engineers and integrators can configure autonomous robotic systems for value-added processes that allow manufacturers to achieve more consistency and flexibility in production than ever before.
