Robot vision – or vision-guided robotics – is a novel means by which many manufacturers avoid the jigging and positioning constraints that come with standard industrial robots.
These constraints exist because a robot must typically follow a bounded, precise program in order to conduct a repeatable process with a reliable output. Think of welding a car chassis together or assembling components on a mass production line – precise, highly repeatable operations that can justify significant setup costs while still delivering a return on investment.
In many scenarios, however, the variety of parts or their size and volume simply make such precise jigging impractical. Robot vision offers an alternative means of fulfilling what are still highly repetitive processes. With specialized technology layered on top of vision capabilities, robot vision can also open the door to “self-programming” solutions that don’t just overcome the limits of existing jigging constraints, but actually allow robots to respond in real process time to never-before-seen parts and positions – all with the added value of no manual programming.
Robots have commonly been used by automotive manufacturers and other mass market firms. It’s estimated that nearly 40% of North American manufacturing robots are used in the automotive sector. These robots assemble vehicles and apply production processes to their parts or the final product.
This works for this type of manufacturer because cars 1) are expensive to buy, 2) are costly to make, 3) have large batch sizes (there are many models of cars which sell tens or even hundreds of thousands of units per year), 4) only see major design changes every 5 to 7 years.
Despite the way in which industries like automotive and electronics have powered different kinds of robot operations forward, robots themselves have not become responsive enough for other kinds of operations. Robot or machine vision has tried to change that by giving robots the sensing capabilities to respond to their environment in real time and to identify key objects and manipulate them in space.
As it stands today, most of these operations have been fairly repetitive – think of palletizing, pick and place, or automated operations where a robot must identify and assemble or conjoin small pieces and components in space. These applications still function within narrow, pre-defined constraints. With new uses of artificial intelligence, however, industrial robots using machine vision have the ability to become as responsive as skilled human workers in industrial environments, benefitting high-mix manufacturers most of all.
Attempts at robot vision have been made with static images, radar, lidar and other sensing approaches, while advancements in computer vision have also created new opportunities to develop more autonomous robots.
The most recognizable forms of robot vision that we see in the media today are self-driving cars, autonomous mobile robots and “picker” robots. Self-driving cars, while still in development and slowly achieving a strong safety case, essentially function autonomously on roads and highways.
Autonomous driving effectively uses a combination of maps, GPS and situational awareness to evaluate where the vehicle is going and what impediments are in the way, while navigating traffic, lights, stop signs, pedestrians, speed limits and objects in the road in order to reach its destination. This technology is already available in cars like Tesla’s today, and from other brands in the form of lane assistance and other safety features that augment a driver’s own capabilities.
As for autonomous mobile robots, they are mostly used in warehouses and sometimes in package delivery or remote monitoring scenarios (a drone, for instance). In these situations, AI is also used to process general visual information and identify specific objectives. DoorDash has already started deploying small delivery robots to carry food to its destination.
Various warehouse robots have also proven capable in materials handling and even packing, while “picker” robots based on technology like Covariant’s can distinguish between objects on a conveyor and package or sort them, whether for wholesale distribution or retail shipment. This takes robot vision a step further, because the robot must distinguish between different types of parts and sort them adequately – a big step forward, but still limited to warehouse and materials-handling applications rather than value-added or craft-based processes.
For value-added processes, robot vision has been used in a variety of ways, but these systems are often part of still-programmatic machine frameworks that don’t function in real time with respect to programming or part mixes. At the same time, they are more needed than ever. Between social distancing, the baby boomer retirement crunch and a shortage of skilled labor amongst younger workers, firms don’t have enough automation options ready to maintain production output without driving up costs – for themselves, their customers and consumers.
Combining robot vision with effective and process-specialized AI is the last step to giving factories and facilities real autonomy in the workplace.
At Omnirobotic, we’ve created infrared sensor systems that allow robots to visualize and interpret shapes as they are placed in front of them. This system has sufficient depth perception and field of view to generate a digitized picture of various parts, shapes and positions in a manufacturing environment, to a similar degree that a skilled worker might have “in their own head.” Using AI, our system can then generate unique robot motions in real process time that both cut short the traditional programming process and allow robots to function autonomously regardless of part variety or many common manufacturer requirements.
This technology takes a variety of process limitations into account. For instance, with a spray process: do you need a certain tool type? Is a standoff distance needed between the tool and the part? Do only certain faces of the part need painting?
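To make these questions concrete, the constraints above can be sketched as a simple data structure. This is a minimal illustration only – the field names (`tool_type`, `standoff_mm`, `faces_to_coat`) are hypothetical and do not reflect Omnirobotic’s actual API:

```python
from dataclasses import dataclass, field

@dataclass
class SprayProcessSpec:
    """Illustrative constraints for a spray/finishing task.
    All fields are hypothetical, for explanation only."""
    tool_type: str                 # e.g. which spray gun to use
    standoff_mm: float             # required tool-to-part distance
    faces_to_coat: list = field(default_factory=lambda: ["all"])

    def validate(self) -> bool:
        # A real system would enforce far richer constraints;
        # here we only sanity-check the standoff distance.
        return self.standoff_mm > 0

spec = SprayProcessSpec(tool_type="hvlp_gun", standoff_mm=200.0,
                        faces_to_coat=["front", "top"])
print(spec.validate())  # True
```

Capturing requirements like these explicitly is what lets a vision system decide, per part, how the process should actually be applied.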
Once these requirements are specified, you get the benefit of machine vision used directly in the process: identifying parts, interpreting their orientation and generating a unique robot motion to match. These requirements ultimately necessitate a clear “division of labor” within the AI that processes them. By separating part, process and technique parameters, machines can finally interpret all the information a robot needs to achieve near real-time programming through robot vision.
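That “division of labor” can be illustrated with a toy pipeline that keeps part identification, process lookup and motion generation as separate stages. Every name and value here is invented for illustration; a real system would involve 3D perception and motion planning far beyond this sketch:

```python
# Hypothetical three-stage pipeline: none of these names or values
# come from Omnirobotic; they only illustrate the separation of concerns.

def identify_part(scan):
    """Stage 1: interpret a (toy) sensor scan into a part ID and pose."""
    return {"part_id": scan["shape"], "pose": scan["position"]}

def lookup_process(part_id):
    """Stage 2: map the identified part to its process requirements."""
    table = {"bracket": {"standoff_mm": 200, "faces": ["front"]}}
    return table.get(part_id, {"standoff_mm": 150, "faces": ["all"]})

def generate_motion(pose, process):
    """Stage 3: produce a (toy) toolpath honoring the constraints."""
    return [{"target": pose, "offset_mm": process["standoff_mm"], "face": f}
            for f in process["faces"]]

scan = {"shape": "bracket", "position": (0.5, 0.2, 0.0)}
part = identify_part(scan)
path = generate_motion(part["pose"], lookup_process(part["part_id"]))
print(len(path))  # 1
```

Keeping the stages separate is what allows each to be improved or re-specified independently – new parts only touch stage 1, new process rules only touch stage 2.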
Of course, seeing is believing, and this kind of stuff sometimes sounds too good to be true! If you’d like to understand the entirety of how our Shape-to-Motion™ technology works, check out the video below.
So, what’s the difference between a robot that does one job compared to a robot that can process nearly any part you throw at it?
Well, basically, robot vision is a good start, but being able to think for itself about how to execute an operation is essential.
Of course, there are still specific instructions required and certain limitations to overcome, but the point is, industrial robots are closer to “set it and forget it” than they’ve ever been before.
What’s more, this also gives your team the flexibility to focus on materials handling and other skilled or even creative tasks, achieving higher quality and productivity than ever before, while overcoming the skilled-labor barriers that high-mix manufacturers have faced in recent years.
You might call it an “absolute win”, but we just like to call it Shape-to-Motion™ technology. Reach out to us today to learn more.
Omnirobotic provides Self-Programming Technology for Robots that allows them to see, plan and execute critical industrial spray and finishing processes. Omnirobotic’s team combines decades of experience with new AI capabilities to provide this through something called Shape-to-Motion™ Technology, which generates unique robot motions in real-time for each part and specific requirement. See what kind of payback you can get from it here.