What an Autonomous Robot Can and Can’t Do

Autonomous robots can perform varied tasks in a flexible, responsive manner with limited human oversight. New forms of walking, lifting and rolling robots as well as improvements to traditional industrial robots all make these systems more flexible and adaptable – particularly in unstructured environments. These new capabilities – whether it’s in remote inspection, materials handling, delivery services or manufacturing processes – all enable robots to limit the need for humans to execute dangerous, dull, and sometimes even deadly tasks.

This is particularly important because skilled labor is in ever shorter supply. The world is going through a first-of-its-kind “demographic inversion” that limits the availability of labor for a variety of traditionally human-led jobs. As people become more selective about the roles they choose to perform, robots become ever more necessary to execute the basic jobs upon which our economy and high standards of living depend.

If you’re thinking about how to contextualize autonomous robots in your own work environment, here are a few CANs and CAN’Ts to guide you. As in all things AI, autonomous robots can cause our imaginations to run away with us, but with the right reference points, we can bring things a little more down to earth (and explore the practical benefits)!

CAN DO: Help With Repeatable Tasks

Robots have always been best suited to helping with repeatable tasks. Autonomous robots serve the same function, but in a broader array of circumstances. Why is this the case? Traditional robots require extensive planning, programming and fixed environments to ensure that everything can be done as predictably as possible. 

This simply hasn’t been useful outside of mass manufacturing, logistics and research facilities. With autonomy, robots will begin appearing in more everyday environments – as well as in more places within those manufacturing, logistics and research facilities! This is possible because autonomous robots generally have the ability to perceive their environment and use pre-developed strategies to “program” themselves towards a goal, while taking into account obstacles, safety concerns and their own constraints (everything from joint rotations to battery power).
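
To make that concrete, here is a minimal Python sketch, with entirely hypothetical names, of how candidate plans might be screened against a robot’s own constraints (joint limits, remaining battery) before anything is executed:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Constraints:
    max_joint_angle_deg: float   # physical joint rotation limit
    min_battery_pct: float       # reserve the robot must keep after the job

@dataclass
class CandidatePlan:
    peak_joint_angle_deg: float
    estimated_battery_cost_pct: float
    estimated_duration_s: float

def select_plan(plans: List[CandidatePlan],
                constraints: Constraints,
                battery_pct: float) -> Optional[CandidatePlan]:
    """Keep only the plans the robot can physically and energetically
    execute, then pick the fastest one; None means nothing is feasible."""
    feasible = [
        p for p in plans
        if p.peak_joint_angle_deg <= constraints.max_joint_angle_deg
        and battery_pct - p.estimated_battery_cost_pct >= constraints.min_battery_pct
    ]
    return min(feasible, key=lambda p: p.estimated_duration_s) if feasible else None
```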

Because of this level of flexibility, manufacturers with highly varied applications, warehouses that serve a variety of consumer needs and service businesses that require deliveries or transportation out on busy and unpredictable streets will all benefit from a new wave of robotic autonomy. 

CAN’T DO: Set Their Own Goals

We’re not philosophers, but the ability of humans to dream up objectives, set goals or even invent ideologies is usually considered a consequence of evolution and our own sensory mechanisms. Whether a divine hand was involved is another story entirely, but the “evolution” of autonomous robots certainly hasn’t been guided by gods; more to the point, they simply haven’t been built to think abstractly about their own goals and motivations.

Ultimately, robots need goals set for them, which rules out the “AI super predator” dystopian future scenario almost entirely. While it’s conceivable that an engineer or scientist could attempt to build this sort of consciousness, we don’t even understand how consciousness works in humans – so how would it be possible to replicate it in robots?

While advances in neuroscience are coming fast and furious, you can rest assured that your autonomous robots will still need ongoing human guidance within your lifetime and far beyond.

CAN DO: Respond to Unstructured Stimuli

The process models and behavioral frameworks used by autonomous robots can account for various kinds of sensory data. Whether it’s visual, auditory or data fed to the robot from the environment around it (hello, IoT), unexpected surges of data can occur within whatever a robot can functionally interpret. The architecture of a robot’s processing capabilities must therefore be designed to account for the possibility of unexpected inputs, but that doesn’t mean responding to those inputs is impossible.

In fact, responding to unstructured stimuli is largely what autonomous robots must be used for. While a certain predictability is still required for them to function, we know that certain edge cases or tasks may be even more complex. In these circumstances, robots that can respond to different shapes, orientations, tasks and recognizable objects can do so in a hierarchical fashion, meaning they can address increased complexity without a concomitant increase in processing complexity. Ultimately, circumstances where the human eye may not be entirely reliable could prove exceedingly efficient with autonomous robots.
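
As a rough illustration of that hierarchical idea, here is a short Python sketch (with hypothetical names throughout; `scan` stands in for whatever perception output exists) in which a cheap, coarse classification picks a strategy, and only that strategy runs its more detailed, shape-specific logic:

```python
def coarse_shape(scan) -> str:
    # A cheap first pass: rough geometry decides which strategy applies.
    if scan.height / scan.width > 3:
        return "long_part"
    if scan.curvature > 0.5:
        return "curved_part"
    return "flat_part"

def handle_long_part(scan): ...    # fine-grained grasp/orientation logic lives here
def handle_curved_part(scan): ...
def handle_flat_part(scan): ...

HANDLERS = {
    "long_part": handle_long_part,
    "curved_part": handle_curved_part,
    "flat_part": handle_flat_part,
}

def respond(scan):
    # Complexity grows by adding handlers, not by making every handler
    # reason about every possible object.
    return HANDLERS[coarse_shape(scan)](scan)
```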

CAN’T DO: Work Outside Process Models

Ultimately, process models are there to keep robots on task and limit the need to optimize them whenever a new surprise comes up. If a robot does achieve a useful goal outside of a pre-defined process model, however, that success is simply a matter of coincidence – it’s no indication that the robot is thinking for itself. 

Autonomous robots function autonomously – that does not mean they’ll invent a new form of poetry and buy you a bagel every morning, unless of course that is what you design them to do. One day, robots may design themselves. Until then, take solace in the fact that they can offer 10 to 1, or even 100 to 1 payback on tasks that people simply don’t like doing. 

CAN DO: Create More and More Attractive Jobs

There are two problems in the world today: a changing environment, and human demographics that simply don’t allow us to adapt rapidly. We are older, a little weaker and less skilled than we used to be, yet maintaining a high standard of living without offloading work onto increasingly vulnerable populations demands that we meet large-scale challenges and essentially re-invent how our society works.

The usual rap on robots here is that they kill jobs. While this could be true if you think about assembly lines where humans are swapped out for robots, the truth is most robots will not be added there, but rather in more unstructured environments and on tasks that make relatively unproductive use of the ingenuity and creativity of most humans. Once cost-effective autonomous robot applications come online, most of them will work within bottlenecks, freeing up humans and actually creating new kinds of jobs, and even creating more of them, according to Statistics Canada.

CAN’T DO: Take Over the World

Taking over the world requires the motivation to do so. While that motivation can be vanity, greed, resentment or any number of emotional triggers at its core, an autonomous robot (or robot army, as it were) would only take over the world if it was modelled to do so, and ultimately would only do that at the “behest” of somebody. That’s why working groups like those on autonomous military robots are so important. However, the fact remains: robots will never take over the world of their own accord alone, which means that accountability always rests with the human actors who might use them to malevolent ends.


What’s the Real Difference Between an Autonomous Robot and an HMI?

An autonomous robot is any robot that performs a task or behavior with limited human oversight. While this is an emerging technology, robots already function in this way in many circumstances; unfortunately, nearly every one of those is a highly structured, mass-manufacturing operation. For manufacturers who have more variation in their parts, or businesses that simply don’t function in structured environments, autonomous robots truly are the next step in automation.

However, in the rush to proclaim autonomous “supremacy,” many companies have forgone the fundamental principle of limited oversight in unstructured environments and instead opted for HMIs – Human Machine Interfaces. These are interfaces that generally rely on touchscreens or simple UIs by which humans can set parameters for defined operations, allowing robots to adapt in limited circumstances (e.g., parts that all have the same shape) without the need for additional programming.

With that key difference in mind, it’s important to understand what is an autonomous robot and what is simply an HMI-based solution built on custom integration. A few of the key considerations:

  • Autonomous robots function in various unstructured environments, while HMIs rely on defined workflows with explicit commands
  • Autonomous robots allow in-house engineers to evolve systems over time, while HMIs only allow solution providers to deliver a one-off integration
  • Autonomous robots can provide value beyond what human oversight can anticipate, while HMIs are explicitly operator-driven and controlled

Autonomous Robots Function in Varied Unstructured Environments

Autonomous robots are primarily useful because they can function on the basis of parameters and process models, rather than requiring specific programming for each and every action they take. They can move and respond to their environment while respecting constraints and basic modelling, as well as while taking in information from integrated and related systems.

For example: a programmable logic controller on a factory floor might be able to indicate the position of parts hung on a conveyor and even tell the conveyor to start and stop when a part is positioned in front of an autonomous robot. Depending on the robot’s means of perceiving the product it operates on, that robot could then execute a process (whether it’s painting, assembly, welding or other critical value-adds) and subsequently adapt to different parts as they come along.

In this circumstance, the robot is fully integrated with other industrial control systems, allowing it to act in a coordinated and intelligent manner while being able to operate with broad general instructions, rather than specifics for each and every part. 
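
Here is a minimal sketch of that coordination in Python. The interfaces (plc, scanner, robot and their methods) are entirely hypothetical and stand in for whatever industrial control and perception systems are actually integrated:

```python
def run_station(plc, scanner, robot):
    """One work cell: the PLC owns the conveyor, the robot owns the process."""
    while True:
        part_id = plc.wait_for_part_in_position()   # blocks until the PLC signals a part
        plc.stop_conveyor()
        surface = scanner.capture_3d(part_id)       # perceive the actual part in front of the robot
        plan = robot.plan_process(surface)          # paint / weld / assembly path for this part
        robot.execute(plan)
        plc.start_conveyor()                        # hand control of the line back to the PLC
```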

Autonomous Robots allow even the most demanding manufacturers (like those in aerospace) to meet high quality goals with limited operator inputs.

An HMI Is Built Specific to Individual Workflows

An HMI is about as simple as a robot interface can be, but simplicity for its own sake is not always a good thing. For instance, if your HMI is too simple, you may be missing out on certain process optimization options or, worse yet, only be allowing for certain parts and workflows, a constraint that may not suit every context or type of manufacturer.

For instance, you may have an HMI installed to assemble or paint different large windows and doors. Ultimately, the HMI can only be scoped to a certain part window – let’s say doors up to 10 feet high and window frames up to 6 feet long. As long as parts never exceed or confound those specifications, the HMI may be a cost-effective solution (though integration can still be costly). However, once an 11-foot door or a 7-foot window frame comes along, all hell may break loose.
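
A toy illustration of that part window in Python (the limits and part names come straight from the example above; the function itself is hypothetical):

```python
# Limits taken from the example above, in feet.
MAX_DOOR_HEIGHT_FT = 10.0
MAX_FRAME_LENGTH_FT = 6.0

def hmi_accepts(part_type: str, size_ft: float) -> bool:
    """The HMI only accepts parts inside the envelope it was integrated for."""
    if part_type == "door":
        return size_ft <= MAX_DOOR_HEIGHT_FT
    if part_type == "window_frame":
        return size_ft <= MAX_FRAME_LENGTH_FT
    return False  # anything the original integration never anticipated is rejected

print(hmi_accepts("door", 9.5))    # True  -- inside the scoped part window
print(hmi_accepts("door", 11.0))   # False -- the 11-foot door breaks the workflow
```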

HMIs are convenient for predictable operations, but as soon as workflows evolve, they can get left behind. Source: Packaging Strategies.

An Autonomous Robot Can Give Engineers New Process Models and Capabilities

Ultimately, the ability of an autonomous robot to function depends on the durability of its process models and its ability to integrate with peripheral sensors and hardware. Where this is possible, integrated development environments could give autonomous robots the ability to learn new applications according to the specification of engineers, without the need to be completely reengineered by their original provider for each and every application. 

For instance, Omnirobotic’s AutonomyOS™ is built to be both robot and process agnostic, meaning whatever hardware or layout is required could one day be addressed by the same motion planning logic, customized to your individual facility. With an HMI, the limits are palpable: each and every integration may require its own motion planning and process strategy, built on the same traditional programming models that currently limit integrators from deploying robots in high-mix environments.

AutonomyOS™ has no technical limit when it comes to autonomous robotic operations, meaning any process and hardware can be adapted to in the long term.

An HMI Is Built Solely by the Original Provider

An HMI is effectively a one-stop shop for very specific parts and applications. It can rarely be adapted or improved upon without being replaced entirely. It may not always be the right investment because, ultimately, manufacturing is moving faster than ever, meaning that the way in which you process parts could be forced to change in order to meet exacting customer demands. The longer you fail to adapt, the more likely it becomes that someone else will.

As such, most HMIs can only ever be built and modified by their original integrators. Usually, trying to adapt those systems to new layouts will render them almost entirely useless. In a world where flexible manufacturing is ever more essential to everyday success, is it really possible for HMIs to measure up?

HMIs require many of the same inputs that manually programmed robots do. Even where some systems are adaptive, they may not actually adapt to every aspect of your production. Source: Siemens.

An Autonomous Robot Provides Value-Adds Beyond What Human Oversight Can Validate

The core of autonomous manufacturing robots is autonomous motion generation. For certain processes, this means that greater precision can be achieved no matter how the instructions for the robot are actually specified.

Why is this the case? Let’s consider a curve, for example. While a traditional programming tool might allow a programmer to set different points and articulate a radius, most cases require point-to-point programming that does not always hold the tool at a fixed angle relative to the surface being processed. An autonomous robot, by contrast, can break down a 3D reconstruction of a curved surface and generate a highly accurate robot motion in seconds.
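
As a sketch of that idea (not Omnirobotic’s actual algorithm), the snippet below samples points and surface normals from a reconstructed curve and derives tool poses at a fixed standoff and a fixed angle to the surface, using NumPy:

```python
import numpy as np

def tool_poses_along_curve(points: np.ndarray,
                           normals: np.ndarray,
                           standoff_m: float = 0.2):
    """points, normals: (N, 3) arrays sampled from the 3D reconstruction.
    Returns (position, approach_direction) pairs for the tool."""
    normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    positions = points + standoff_m * normals   # hover at a fixed distance from the surface
    approach = -normals                         # always face the surface squarely
    return list(zip(positions, approach))

# Example: a quarter-circle arc of radius 1 m in the XY plane.
theta = np.linspace(0.0, np.pi / 2, 20)
pts = np.stack([np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
nrm = pts.copy()   # for a circular arc, the outward normal points along the radius
poses = tool_poses_along_curve(pts, nrm)
```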

The fact remains, with an HMI, you may simply not be getting high-quality programs – and that’s ok. Human beings were not put on this earth to program robots – at least not down to the mundane levels of detail that very often lead to quality issues, rework, and rejections that manufacturers struggle with in traditional deployments. 

An HMI Only Automates Operations, but Every Value-Add Must Be Operator Driven

HMIs request instructions and operate on a specific sequence of commands each and every time. While there may be exceptions, HMIs clearly don’t function “autonomously,” and while many “autonomy providers” may claim otherwise, what they’re often providing is a sophisticated HMI that actually limits the workflows and means of engineering that a manufacturer can use to achieve their process goals.

Are you a manufacturer that values throughput? Do you expect high quality outputs no matter what process you use? Do you want to adapt to market changes without having to completely recapitalize your facilities? In all of these circumstances, HMIs may be an answer, but autonomous robots are DEFINITELY one.


How Does a Robot Reach Full Autonomy?

When most people think of fully autonomous robots, they worry about fanciful AI scenarios that have little basis in the real facts – and mysteries – we know about consciousness. Fixating on those scenarios makes it harder to see the practical value of robots that can function independently and without substantial oversight, when a variety of jobs, processes and industries NEED much more help to improve both their profitability and positive impact.

At the same time, robot autonomy has fallen victim to some runaway definitions – and expectations – that are not necessarily helpful to understanding what a robot needs to become autonomous and where autonomy can be most rapidly achieved.

For a robot to reach full autonomy, multiple criteria must be met:

  1. The robot must be able to gain meaningful information about its environment on its own
  2. The robot must be able to process that information in a structured and usable way
  3. The robot must be able to plan its actions in response to that information
  4. The robot must be able to execute the plan it generates in a timely manner

In all of these circumstances, it’s important to understand a few more things in terms of how these parameters are structured:

  1. A robot must have a goal. While we traditionally associate human autonomy with the ability to set one’s own goals, there is no expectation that the robots of today will become self-aware and set goals for themselves.
  2. The autonomy of a robot must be use case-specific. Fully autonomous cars can function at different levels – some only on the highway, others in all terrains – yet within its own use case, each can exercise the same degree of autonomy.
  3. The design works best when it minimizes the need for human input – while certain autonomous “cobot” applications can help optimize productivity or the achievement of a desired goal, in most circumstances, autonomy would not be an adequate qualifier where direct human engagement is required. 

So, how do we get from point A to B? How does a traditional “programmable” robot become fully autonomous? Well, that’s where the fun begins!
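
Putting the four criteria together, the whole thing can be thought of as a sense-process-plan-act loop. Here is a minimal Python sketch, with entirely hypothetical interfaces, just to show the shape of that loop:

```python
def autonomy_loop(sensors, world_model, planner, controller, goal):
    """One pass through the loop covers all four criteria listed above."""
    while not goal.is_reached(world_model):
        raw = sensors.read()                      # 1. gain information about the environment
        world_model.update(raw)                   # 2. process it into a structured, usable form
        plan = planner.plan(world_model, goal)    # 3. plan actions in response to that information
        controller.execute(plan)                  # 4. execute the plan in a timely manner
```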

Gaining Information About the Environment

There is a broad range of solutions for letting a robot know what’s going on around it: lidar, radar, sonar, tactile sensors, all different kinds of vision systems, and an endless number of communications mechanisms fed by extra-robotic sensors, cameras or local information systems that might be audio or video in nature. All of these are simply an attempt to give robots the same kinds of senses that a human has, and they have long been the standard in fields like automatic machine control for highly specialized automation processes.

It’s important to consider that sensing is not the only limitation here. For instance, in industrial systems that are now IoT (Internet of Things)-enabled – or at least networked – different process and programmable logic controllers can be connected together in different strings used to execute different processes. What remains to be done, however, is actually incorporating that information into robotic processes to understand the position and orientation of an object, and the manipulation that needs to take place on it.

Outside industrial circumstances, the same models can be applied, but they need to be contextualized with the right types of connected information. For instance, if someone is looking to create a robot for medical or elder care, external sensing capabilities may be useful for managing a patient’s health, but without proper security and anonymization they could also pose privacy or agency risks when it comes to who is being cared for, by what robot, and under what circumstances.

Much the same way, 5G is seen as a major opportunity to coordinate between future self-driving cars and generate efficiencies in automating every aspect of driving and transportation – all while optimizing around things like road conditions, bottlenecks, blocked roads and more. While these are very powerful applications, the way in which information is translated to a robot must be handled delicately. With great power comes great responsibility, after all. 

Processing the Information in a Structured Way

How is one supposed to digest information in a usable way? When we think of how we do so as humans, it’s second nature or often instinctive in terms of how we make decisions. What we often fail to realize is that so many of our decisions are based on evolutionarily, socially or behaviorally acquired traits – as well as characteristics of our personalities – that make the way we operate sometimes seem deterministic, although not without any surprises or fun!

When it comes to robots, however, we don’t like surprises! Happy surprises, sure – finding out a robot is better or more efficient than we could hope for is great. But because we can’t anticipate every possible function of an autonomous robot – whether it’s in delivery, transportation, care provision, materials handling, inspection, predictive maintenance, industrial processes or simply a robot dog that does backflips – managing our expectations is far preferable to a “failure to function”.

Because of this, it’s important to realize that creating an autonomous robot will rarely mean that things work out of the box. While some companies and academics are working on ways to simplify the fundamental sensing and processing models of autonomous robots, those models must ultimately contribute to process models that simplify the ability of a robot to use information in an actionable way.

Sensor fusion is a necessary step toward giving robots the real-time perception capabilities that allow autonomy to become a reality.
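
As one small example of what “fusion” can mean in practice, here is a classic inverse-variance weighting of two noisy estimates of the same quantity; the sensors and numbers are illustrative only:

```python
def fuse(est_a: float, var_a: float, est_b: float, var_b: float):
    """Inverse-variance weighting of two estimates of the same quantity."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)   # the fused estimate is more certain than either input
    return fused, fused_var

# Camera says the part is 1.02 m away (noisy), lidar says 0.98 m (tighter):
# the fused value leans toward the lidar but uses both readings.
print(fuse(1.02, 0.01, 0.98, 0.002))
```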

Planning Actions from Data

The data generated by a robot’s sensing mechanisms can be digested in a variety of ways. Ultimately, for 3D visual data, the simplest possible way is to break a shape down and re-integrate the data into a whole object. This is a parallel process that requires an accumulation of many small operations – while the human brain functions very flexibly in this way, generating reliable models for this type of processing requires significant repetition and validation.

Here, industrial parts are broken down into tiny triangles, making it easier for a machine to interpret the part.
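
To give a feel for why triangles are convenient, here is a short NumPy sketch (the function name is illustrative) that computes a normal and an area for every triangle of a reconstructed surface; quantities like these are exactly what downstream process planning can consume:

```python
import numpy as np

def triangle_normals_and_areas(vertices: np.ndarray, faces: np.ndarray):
    """vertices: (V, 3) points; faces: (F, 3) vertex indices, one row per triangle."""
    a = vertices[faces[:, 0]]
    b = vertices[faces[:, 1]]
    c = vertices[faces[:, 2]]
    cross = np.cross(b - a, c - a)                 # vector perpendicular to each triangle
    areas = 0.5 * np.linalg.norm(cross, axis=1)    # triangle area from the cross product
    normals = cross / np.linalg.norm(cross, axis=1, keepdims=True)
    return normals, areas
```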

The most useful aspect of this process is injecting the data into an overall process model for whatever the robot needs to do. Lift and carry something? Drop mail in a letterbox? Paint or weld something together? Each of these actions requires a holistic understanding of the nature, location and position of a goal, as well as its non-compliant outcomes (e.g. the mail goes in the box, but there’s a hole in the bottom and it falls into a bush).

At the same time, to expand on this mail example, fallback goals must be established in order to reach a still acceptable outcome in an automated way without overtaxing the main priorities of the robot’s autonomous function. More simply: autonomous robots must be able to improvise, but planning that improvisation takes a lot of work.
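
A toy sketch of that fallback idea, continuing the mail example (the robot and its methods are hypothetical): the primary goal and its fallbacks are defined ahead of time, so the run-time “improvisation” is just walking down a list.

```python
def deliver_mail(robot, mailbox):
    """Primary goal first, then pre-planned fallbacks; each action reports
    whether it reached an acceptable outcome."""
    attempts = [
        ("drop_in_slot",    lambda: robot.drop(mailbox.slot)),
        ("place_on_porch",  lambda: robot.place(mailbox.porch)),    # fallback goal
        ("return_to_depot", lambda: robot.return_with_mail()),      # last resort
    ]
    for name, action in attempts:
        if action():
            return name
    raise RuntimeError("no acceptable outcome reached")
```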

As Mark Twain once said, “It usually takes me more than three weeks to prepare a good impromptu speech.” This isn’t simply a glib bon mot, but actually very instructive as to how humans work. While we may often be focused on the tasks at hand, we have a sophisticated set of subconscious habits, along with added talents like proprioception, that are not well recognized.

These are perfect skills for a creature that needed to do everything from swinging through trees to hunting on the savannah, and perhaps inventing the wheel and fire a little bit afterward. It took millions of years to develop these capabilities, so don’t be surprised that getting a robot to function autonomously might take more than an afternoon.

So, in this context, what is the answer for robots? Well, much in the same way that humans visualize their actions before doing them, generating simulations or digital twins of an autonomous robot’s function, and using them to inject process model expectations (like where a mailbox usually is or what it looks like), is the biggest step you can take toward creating a useful autonomous robot.

AutonomyOS™ uses 3D Perception with AI-based Task Planning and Motion Planning so manufacturing engineers and integrators can configure autonomous robotic systems for value-added processes.

Executing the Plan

Once you have a process model, enough simulations and correctly calibrated sensing mechanisms, execution is simply a matter of observation and optimization. If the autonomous robot you’re building meets your standards right out of the box, there’s no need to even do that. The fundamental value of robots is their consistency, and modern industrial robots are exceedingly reliable. Adding layers of perception and intelligence to make them responsive to different parts and positions creates that whole new world of autonomy we’re all looking to explore.

What’s next for that autonomy? Simplifying the way in which applications are built, as mentioned above, but also increasing the breadth of sensors, robot arrangements, tasks, environments and more that can be handled. The irony is that automation creates productivity, which ultimately increases growth, incomes and the demand for labor. Viewing automation as a threat to the workforce is the real threat, while bringing automation to more of the spaces workers dislike will make jobs more creative, innovative and fun than ever before – if people even need to work at all. Enjoy!

Autonomous manufacturing robots for paint and spray processes are key to eliminating rework and enhancing the quality and productivity of existing finishing operations.

With AutonomyOS™ and AutonomyStudio™, it’s never been easier to deploy an autonomous robotic system. Using 3D Perception with AI-based Task Planning and Motion Planning, manufacturing engineers and integrators can configure autonomous robotic systems for value-added processes that allow manufacturers to achieve more consistency and flexibility in production than ever before.