What’s Holding Industrial Robot Integration Back?

The integration of industrial robots is a highly specialized and creative process, and the people who practice it are a credit to human ingenuity and resourcefulness. In many ways, robotics integrators are a corps of highly specialized, highly capable solution architects who help manufacturers rocket their productivity forward.

Unfortunately, a few things have been holding back the reach and scope of industrial robot integration. Worse, it's the tried-and-true craftspeople – the robotics integrators we trust – who suffer most for it. This isn't just because it's difficult to find people with the right skills for robotics integration; the job itself can be demanding, tedious, and offer limited payoff in many manufacturing scenarios.

So what exactly is holding integrators back? For one, programming remains such a demanding process that efforts to make it easier haven't had the material impact they should for many integrators. And even where programming is easier, today's robots aren't built to adapt to a high variety of parts or to unstructured environments, which effectively keeps them out of many factories. 3D vision, sensor fusion, and a range of related technologies promise to make robots more autonomous, but the right skills and software haven't yet come together to make them easy enough to deploy.

Fortunately, the right solutions can help robot integrators finally tackle all of these problems at once. When they do, they’ll open the doors to a variety of industries, processes and service opportunities that simply haven’t been possible in the past.

Making Programming Easier Hasn’t Made It Easy Enough

Relative to other practices in modern engineering, robotics still borders on the territory of science fiction. How can one reliably automate a process one hundred, one thousand, or even one million times without risking breakdown, shutdown, or catastrophic impairment?

The industrial robots provided today – whether from FANUC, KUKA, ABB, Kawasaki, Universal Robots, or more emerging and niche providers – do commonly meet these standards of performance, provided their process constraints and maintenance requirements are respected.

Advances in materials have further made robots from each of these high-profile vendors lighter, more agile, and more precise than we could ever expect from human workers. This has let each vendor develop its own jerk profiles, drive capabilities, and more.

In all of these cases, essential development and engineering choices were made to achieve these features and the necessary levels of usability. While this process is painstaking, it also creates divergence in the capabilities, programming, and suitability of different robots for different operations. Expert robot integrators know how to manage their preferred models and scenarios, but when skills are scarce and new deployment environments beckon – where they could otherwise grow and diversify their business – this mix of robot programming requirements stands in the way.

A Robot Operating System could – theoretically – make application development as simple for robots as it is for personal computers or smartphones. In reality, the multidimensional hardware variation between robots means that the most robust robots of today simply don't make use of it. Source: Wikipedia

Into this breach, some believed, a unifying middleware could step: a single layer through which every type and function of robot could be "harmonized" to simplify the planning and programming of robot operations.

ROS (or, creatively enough, the "Robot Operating System") was released in 2007 to achieve this. ROS did at least give academics a useful system for developing and sharing robot applications. Unfortunately, it did not solve the problem industrial integrators needed solved: the ability to skip over many of the limitations that come with planning, programming, and processing across varied parts and spatial constraints – the ability that would let them incorporate, integrate, and sell more robots into new processes and industries.

An operating system empowers humans to "teach" machines how to do things, but it doesn't empower the robots to "learn" any better. Ultimately, the burden of planning and programming has kept robot integration restricted to the same set of tasks – in the same industries – for decades. Source: ROS Industrial.
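To make that distinction concrete, here is a minimal sketch (assuming ROS 1 and its rospy client library; the topic name and joint names are placeholders that vary per robot) of what middleware actually standardizes: the plumbing that delivers a trajectory to the robot, not the trajectory itself. Every waypoint below is still hand-authored by a person.

```python
# Minimal ROS 1 sketch: middleware standardizes *how* commands reach the
# robot, but a human still hand-authors every waypoint. Topic and joint
# names below are illustrative placeholders, not a specific robot's API.
import rospy
from trajectory_msgs.msg import JointTrajectory, JointTrajectoryPoint

rospy.init_node("hand_taught_path")
pub = rospy.Publisher("/arm_controller/command", JointTrajectory, queue_size=1)

traj = JointTrajectory()
traj.joint_names = ["joint_1", "joint_2", "joint_3"]  # robot-specific names

# Every point below was chosen, tested, and validated by a person;
# ROS merely transports it.
for i, angles in enumerate([[0.0, -0.5, 1.0], [0.3, -0.4, 0.9]]):
    pt = JointTrajectoryPoint()
    pt.positions = angles
    pt.time_from_start = rospy.Duration(2.0 * (i + 1))
    traj.points.append(pt)

rospy.sleep(1.0)  # give the publisher time to connect
pub.publish(traj)
```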

The Need for Limited or Minimal Part Runs

Ultimately, a robot middleware doesn't fundamentally accelerate robot integration because the loop remains open: for each product, process, and program a robot takes on, every step must be programmed, tested, and validated against a 100% predictable set of scenarios. This means that no matter how incremental or assistive a task is, it takes the same amount of time to prepare as your most important robotic process. Even if you make programming easier, a process that requires constant repetition isn't easy enough.

Above is a diagram of the typical planning process in a human-robot collaboration model. This common process is ultimately limited by the fact that it must be defined for each and every part, meaning the work multiplies (to the point of being impractical) as more parts require processing. Source: ResearchGate

While certain industrial robot vendors advertise easier programming languages and methodologies – including the increased use of HMIs (Human-Machine Interfaces) – each process must still be manually programmed to some degree. That program, as generated by a human being, must then be validated. This means only a limited amount of time can ultimately be saved in the robotics integration process.

For instance, there is a growing trend among machine shops of using robots with limited in-person programming for repeatable processes. This allows a run of a few hundred metal parts, for instance, to be automated quickly and with a decent degree of accuracy, but it still requires a significant amount of human effort, oversight, and rework.

While this approach handles much of the legwork, it adds only a limited degree of productivity to a given shop, and it presents limited opportunities for robot integrators to grow by selling services into new customer environments. Nor does it solve the problem of robot changeover between parts, or of working in unstructured environments – those that don't involve extensive jigging.

Ultimately, this still falls victim to the law of diminishing marginal utility. For mass manufacturers, the marginal utility of robots is relatively high. For machine shops, it is higher than it used to be, but still too low to offer a reliable customer base for robotics integrators. For high-mix manufacturers – those with thousands of SKUs, who make up most of the manufacturing industry – the marginal utility of robots is so low that few, if any, use robotics regularly.
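The arithmetic behind that claim is simple enough to sketch. The hour figures below are hypothetical, chosen only to show how a fixed programming-and-validation cost amortizes across different batch sizes:

```python
# Illustrative arithmetic only (all hour figures are hypothetical):
# a fixed programming/validation effort amortized over shrinking batch
# sizes shows why marginal utility collapses for high-mix shops.
PROGRAMMING_HOURS = 40.0   # assumed one-time effort per new part program
CYCLE_SAVED_HOURS = 0.05   # assumed labor saved per part once automated

for batch_size in (100_000, 1_000, 50):
    overhead_per_part = PROGRAMMING_HOURS / batch_size
    net_per_part = CYCLE_SAVED_HOURS - overhead_per_part
    print(f"batch {batch_size:>7}: net saving {net_per_part:+.4f} h/part")

# batch  100000: +0.0496 h/part  (mass production: programming cost is noise)
# batch    1000: +0.0100 h/part  (machine shop: marginal)
# batch      50: -0.7500 h/part  (high mix: programming costs more than it saves)
```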

In place of robots that need programming for every part they work on, integrators need to find ways to shortcut or eliminate the programming process in order to make their robots applicable to a wide variety of parts, ultimately increasing the marginal utility of their solution and allowing them to sell more. Source: ScienceDirect

In these cases, robots need the ability to respond to parts and environments in as close to real time as possible. New advances in 3D vision give robotics integrators an opportunity to grant robots these senses and, ultimately, to overcome their programming challenges once and for all – if, of course, robots can be given the ability to program themselves.

Limited Sensing and Vision Capabilities

Around the same time ROS was in its infancy, a whole new way of approaching 3D vision was being developed. Sensor fusion – applied early on to 3D virtual environments, as with Microsoft's Xbox Kinect system – enabled relatively accurate rendering of objects and environments in ways a computer could understand.

Fast forward a few years, and sensor fusion is integrated into self-driving cars and autonomous mobile robots, while finding still more uses in the virtual reality systems that birthed it. The concept – which extends beyond purely visual sensors – has roots in the Global Positioning System (GPS) and has found traction in everything from HVAC monitoring to medical devices.

Just as sensor fusion is being applied in the development of self-driving cars, industrial integrators who want part and environment-agnostic robots to succeed need to use technology that can merge views from multiple sensors and even sensor types. Source: Edge AI Vision
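To give a sense of what "merging views" from multiple sensors means in practice, here is a minimal sketch of one textbook fusion rule – inverse-variance weighting – applied to hypothetical readings from a 3D camera and a laser scanner. Real systems typically extend this over time with Kalman-style filters:

```python
# A minimal sketch of one standard fusion rule (inverse-variance weighting):
# two sensors observe the same distance, and the fused estimate trusts the
# less noisy one more. Readings and variances below are hypothetical.
def fuse(z1, var1, z2, var2):
    """Fuse two noisy measurements of the same quantity."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    estimate = (w1 * z1 + w2 * z2) / (w1 + w2)
    variance = 1.0 / (w1 + w2)
    return estimate, variance

camera_depth, camera_var = 1.02, 0.010  # metres; the noisier sensor
laser_depth,  laser_var  = 0.99, 0.001  # metres; the more precise sensor

est, var = fuse(camera_depth, camera_var, laser_depth, laser_var)
print(f"fused depth: {est:.3f} m (variance {var:.4f})")
# -> fused depth: 0.993 m, pulled toward the more trustworthy laser reading
```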

For industrial manufacturers, however, sensor fusion as a useful application is only just getting started. While many are riding the Industry 4.0/IoT wave and integrating sensors in more places for remote monitoring, edge processing, or predictive maintenance, pairing machine vision with robotics is actually the best way to add some kind of responsiveness to their industrial processes. By giving robots the ability to identify and process objects in space, manufacturers can bring the same autonomy found in self-driving car technology to their factory's robots.

Some of these applications exist today, but only for highly refined scenarios or as out-of-the-box sensor dev kits from major industrial robot and peripheral equipment manufacturers. These offer a great starting point for integrating more robots into high-mix environments, but they are limited primarily to picking use cases rather than value-added processes, where production bottlenecks are most common. Ultimately, until a robot can generate a program itself from what it sees, it will only offer incremental improvements in high-mix environments.
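What would a robot that generates its own program from what it sees look like in code? Purely as a sketch – none of the functions below name a real API; they stand in for a 3D-vision pipeline, an AI task planner, and a motion planner – the closed loop this article argues for reduces to something like this:

```python
# A purely hypothetical sketch of the closed perceive-plan-execute loop:
# the robot derives its program from a scan instead of replaying a
# hand-taught one. Every function here is an illustrative placeholder.
def run_part(scanner, planner, robot):
    point_cloud = scanner.capture()             # 3D perception of the part
    tasks = planner.plan_tasks(point_cloud)     # e.g., surfaces to coat
    for task in tasks:
        trajectory = planner.plan_motion(task)  # collision-free path
        robot.execute(trajectory)               # no per-part programming step
```

The key difference from the middleware sketch earlier is that no waypoint is hand-authored: a new part means a new scan, not a new program.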

Making it Easier to Step Into New Industries

As a robotics integrator, you may not have the time to develop new solutions incorporating 3D vision, sensor fusion, or more elaborate kinds of high-mix offerings. Many integrators are quite happy with their business in automotive and similar industries. Many more are happy slowly entering machine shops and high-mix operations where batch sizes are large enough to justify a robotic solution.

However, for integrators looking to deploy robotics solutions in industries that haven't yet seen them – and who don't know where to start – Omnirobotic's AutonomyOS™ may offer a way. Using 3D perception with AI-based task planning and motion planning, manufacturing engineers and integrators can configure autonomous robotic systems for value-added processes that ease labor shortages, increase productivity, cut energy use, waste, and rework, and give manufacturers more consistency and flexibility in production than ever before.

This technology ultimately allows robotics integrators to handle implementation in high-mix scenarios, whether in aerospace, heavy equipment, or major furniture and appliances. These are just some – but not all – of the scenarios where coating applications can benefit from more precise robotic operations, yet have simply had too many types of parts to justify the programming time required by today's manual programming solutions.

With AutonomyOS™ and AutonomyStudio™, it’s never been easier to deploy an autonomous robotic system.
