When most people think of fully autonomous robots, they worry about fanciful AI scenarios that have little basis in the real facts – and mysteries – we know about consciousness. Fixating on those scenarios makes it harder to see the practical value of robots that can function independently and without substantial oversight, when a variety of jobs, processes and industries need much more help to improve both their profitability and their positive impact.

At the same time, robot autonomy has fallen victim to some runaway definitions – and expectations – that are not necessarily helpful for understanding what a robot needs to become autonomous and where autonomy can be most rapidly achieved.

For a robot to reach full autonomy, multiple criteria must be met:

  1. The robot must be able to gain meaningful information about its environment on its own
  2. The robot must be able to process that information in a structured and usable way
  3. The robot must be able to plan its actions in response to that information
  4. The robot must be able to execute the plan it generates in a timely manner
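
These four criteria map naturally onto a sense-process-plan-act loop. Purely as an illustration – none of the class or method names below refer to a real library or API – a minimal sketch of that loop might look like this:

```python
# Minimal sketch of the four criteria above as a sense-process-plan-act loop.
# All class and method names are illustrative placeholders, not a real API.

class AutonomousRobot:
    def __init__(self, sensors, world_model, planner, controller):
        self.sensors = sensors          # 1. gain information about the environment
        self.world_model = world_model  # 2. process it in a structured, usable way
        self.planner = planner          # 3. plan actions in response
        self.controller = controller    # 4. execute the plan in a timely manner

    def step(self, goal):
        readings = [s.read() for s in self.sensors]   # sense
        state = self.world_model.update(readings)     # process
        plan = self.planner.plan(state, goal)         # plan
        self.controller.execute(plan)                 # act
```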

Beyond these criteria, it’s important to understand a few more things about how autonomy is structured:

  1. A robot must have a goal. While we traditionally associate human autonomy with the ability to set one’s own goals, there is no expectation that the robots of today will become self-aware and set goals for themselves.
  2. The autonomy of a robot must be use case-specific. Fully autonomous cars can function at different levels – some only on the highway, others across all terrains – yet within their respective use cases, both exercise the same degree of autonomy.
  3. The design works best when it minimizes the need for human input – while certain autonomous “cobot” applications can help optimize productivity or the achievement of a desired goal, in most circumstances a robot that requires direct human engagement cannot properly be called autonomous.

So, how do we get from point A to B? How does a traditional “programmable” robot become fully autonomous? Well, that’s where the fun begins!

Gaining Information About the Environment

There is a broad range of solutions for letting a robot know what’s going on around it: lidar, radar, sonar, tactile sensors, all kinds of vision systems, and an endless number of communications mechanisms that feed in data from external sensors, cameras or local information systems, whether audio or video in nature. All of these are simply attempts to give robots the same kinds of senses a human has, and they have long been standard in fields like automatic machine control for highly specialized automation processes.

It’s important to consider that sensing is not the only limitation here. For instance, in industrial systems that are now IoT (Internet of Things)-enabled – or at least networked – different process and programmable logic controllers can be connected together in strings used to execute different processes. The real opportunity, however, lies in actually incorporating that information into robotic processes so the robot understands an object’s position and orientation and the manipulation that needs to take place on it.
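
As a rough sketch of what incorporating networked process data might look like, the snippet below merges a hypothetical PLC message with a vision-derived pose estimate into a single record a robot could act on; the field, class and key names are assumptions made for the example, not a real protocol.

```python
from dataclasses import dataclass

@dataclass
class PartObservation:
    part_id: str         # identifier reported by the PLC or line controller
    position: tuple      # (x, y, z) from the vision system, in metres
    orientation: tuple   # (roll, pitch, yaw) in radians
    operation: str       # the manipulation the process calls for

def merge_process_data(plc_message: dict, vision_pose: dict) -> PartObservation:
    """Combine networked process information with on-robot sensing
    into one actionable record. All field names are illustrative."""
    return PartObservation(
        part_id=plc_message["part_id"],
        position=tuple(vision_pose["position"]),
        orientation=tuple(vision_pose["orientation"]),
        operation=plc_message["next_operation"],
    )
```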

Outside industrial circumstances, the same models can be applied, but they need to be contextualized with the right types of connected information. For instance, if someone is looking to create a robot for medical or elder care, external sensing capabilities may be useful for managing a patient’s health, but without proper security and anonymization they can also create privacy and agency risks around who is being cared for, by which robot, and under what circumstances.

In much the same way, 5G is seen as a major opportunity to coordinate between future self-driving cars and generate efficiencies in automating every aspect of driving and transportation – all while optimizing around things like road conditions, bottlenecks, blocked roads and more. While these are very powerful applications, the way in which information is translated to a robot must be handled delicately. With great power comes great responsibility, after all.

Processing the Information in a Structured Way

How is one supposed to digest information in a usable way? When we think of how we do so as humans, it seems like second nature or instinct. What we often fail to realize is that so many of our decisions are based on evolutionarily, socially or behaviorally acquired traits – as well as characteristics of our personalities – that make the way we operate seem almost deterministic, though never entirely without surprises or fun!

When it comes to robots, however, we don’t like surprises! Happy surprises, sure – finding out a robot is better or more efficient than we could have hoped is great. But even if it means anticipating every possible function of an autonomous robot – whether it’s in delivery, transportation, care provision, materials handling, inspection, predictive maintenance, industrial processes or simply a robot dog that does backflips – managing our expectations is far preferable to a “failure to function”.

Because of this, it’s important to realize that creating an autonomous robot will rarely mean that things work out of the box. While some companies and academics are working on ways to simplify the fundamental sensing and processing models of autonomous robots, those models must ultimately feed into process models that make it simpler for a robot to use information in an actionable way.

Sensor fusion is a necessary step toward providing robots with the real-time perception capabilities that allow autonomy to become a reality.
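
As one simplified example of sensor fusion, the sketch below blends two noisy range measurements – say, one from a lidar and one from a stereo camera – using inverse-variance weighting, one of the most basic fusion rules; the sensor choices and noise figures are assumptions made purely for illustration.

```python
def fuse_estimates(z1, var1, z2, var2):
    """Fuse two noisy measurements of the same quantity by inverse-variance
    weighting: the less noisy sensor gets proportionally more weight."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Example: the lidar reads 1.52 m with ~1 cm noise, the camera 1.60 m with ~5 cm noise.
distance, variance = fuse_estimates(1.52, 0.01 ** 2, 1.60, 0.05 ** 2)
print(f"fused distance: {distance:.3f} m (std dev {variance ** 0.5:.3f} m)")
```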

Planning Actions from Data

The data generated by a robot’s sensing mechanisms can be digested in a variety of ways. Ultimately, for 3D visual data, the simplest possible approach is to break the data describing a shape down into pieces and re-integrate them into a whole object. This is a parallel process that requires accumulating many small operations – and while the human brain handles this kind of work very flexibly, generating reliable models for this type of processing requires significant repetition and validation.

Here, industrial parts are broken down into tiny triangles, making it easier for a machine to interpret the part.
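
To make the triangle idea concrete, here is a minimal numpy sketch (using a hard-coded toy mesh rather than real scan data) that computes the area and unit normal of each triangle – the kind of small, repeatable operation that accumulates into a machine-usable model of a whole part.

```python
import numpy as np

# Toy mesh: vertex coordinates and triangles given as vertex indices.
vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]])
triangles = np.array([[0, 1, 2],
                      [0, 1, 3],
                      [0, 2, 3]])

# Edge vectors for every triangle.
a = vertices[triangles[:, 1]] - vertices[triangles[:, 0]]
b = vertices[triangles[:, 2]] - vertices[triangles[:, 0]]

cross = np.cross(a, b)
areas = 0.5 * np.linalg.norm(cross, axis=1)                     # triangle areas
normals = cross / np.linalg.norm(cross, axis=1, keepdims=True)  # unit surface normals

print(areas)    # one area per triangle
print(normals)  # one normal per triangle, e.g. used to orient a spray or weld pass
```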

The most useful aspect of this process is injecting the data into an overall process model for whatever the robot needs to do. Lift and carry something? Drop mail in a letterbox? Paint or weld something together? Each of these actions requires a holistic understanding of the nature, location and position of a goal, as well as its non-compliant outcomes (e.g. the mail goes in the box, but there’s a hole in the bottom and it falls into a bush).

At the same time, to expand on this mail example, fallback goals must be established so the robot can still reach an acceptable outcome in an automated way, without overtaxing the main priorities of its autonomous function. More simply: autonomous robots must be able to improvise, but planning that improvisation takes a lot of work.
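
In the spirit of the mail example, the sketch below simply works down a priority-ordered list of goals until one succeeds – a deliberately naive stand-in for real fallback planning, with entirely hypothetical function names and data.

```python
def attempt(goal):
    """Try to achieve one goal; placeholder logic standing in for real execution."""
    return goal["feasible"]

def execute_with_fallbacks(goals):
    """Work down a priority-ordered list of goals until one can be achieved."""
    for goal in goals:
        if attempt(goal):
            return goal["name"]
    return "escalate to a human operator"   # last resort when every fallback fails

goals = [
    {"name": "drop the mail in the letterbox", "feasible": False},  # hole in the bottom
    {"name": "place the mail on the doorstep", "feasible": True},   # acceptable fallback
]
print(execute_with_fallbacks(goals))  # -> "place the mail on the doorstep"
```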

As Mark Twain once said, “It usually takes me more than three weeks to prepare a good impromptu speech.” This isn’t simply a glib bon mot, but actually very instructive as to how humans work. While we may often be focused on the task at hand, we also have a sophisticated set of subconscious habits and added talents like “proprioception” that are not well recognized.

These would be perfect skills for a robot that needed to do everything from swinging from trees to hunting on the savannah – and perhaps inventing the wheel and fire a little bit afterward. But it took humans millions of years to develop these capabilities, so don’t be surprised that getting a robot to function autonomously might take more than an afternoon.

So, in this context, what is the answer for robots? Well, much in the same way that humans visualize their actions before doing them, generating simulations or digital twins of an autonomous robot’s function – and using them to inject process model expectations (like where a mailbox usually is or what it looks like) – is the biggest step you can take toward creating a useful autonomous robot.
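
The general shape of that idea can be sketched roughly as follows: validate a plan against a digital twin and a set of process-model expectations before letting the physical robot run it. Every interface here is hypothetical; it is not a description of any particular vendor’s pipeline.

```python
def validate_in_simulation(plan, digital_twin, expectations):
    """Run the plan in the digital twin and check it against process-model
    expectations (e.g. the mailbox is where we expect and the mail ends up inside)."""
    outcome = digital_twin.run(plan)
    return all(check(outcome) for check in expectations)

def deploy(plan, digital_twin, robot, expectations, replan, max_attempts=3):
    """Only send plans to the physical robot once they pass in simulation."""
    for _ in range(max_attempts):
        if validate_in_simulation(plan, digital_twin, expectations):
            robot.execute(plan)
            return True
        plan = replan(plan)          # fall back to an alternative plan and retry
    return False                     # give up and escalate to a human operator
```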

Shape-to-Motion™ Technology uses process models and a hierarchical approach to prioritize actions for robots, making autonomous function accessible in industrial environments.

Executing the Plan

Once you have a process model, enough simulations and correctly calibrated sensing mechanisms, execution is simply a matter of observation and optimization. If the autonomous robot you’re building meets your standards right out of the box, there’s no need to even do that. The fundamental value of robots is their consistency, and modern industrial robots are exceedingly reliable. Adding layers of perception and intelligence to make them responsive to different parts and positions creates that whole new world of autonomy we’re all looking to explore.
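
One way to picture “observation and optimization” is a loop that executes the plan, measures the outcome and only adjusts the process parameters when quality drifts; all of the interfaces and thresholds below are illustrative assumptions, not a real robot API.

```python
def run_and_tune(robot, plan, measure_quality, adjust, target=0.95, cycles=100):
    """Execute, observe and optimize: leave a consistent process alone,
    and only adjust the plan when measured quality falls below target."""
    for _ in range(cycles):
        robot.execute(plan)
        score = measure_quality()       # e.g. coating coverage or finish quality
        if score < target:
            plan = adjust(plan, score)  # optimization step on the process parameters
    return plan
```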

What’s next for that autonomy? Simplifying the way in which applications are built, as mentioned above, but also increasing the breadth of sensors, robot arrangements, tasks and environments that can be supported. The irony is that automation creates productivity, which ultimately increases growth, incomes and the demand for labor. Viewing automation as a threat to the workforce is the real threat, while bringing automation to more of the spaces workers dislike will make jobs more creative, innovative and fun than ever before – if people even need to work at all. Enjoy!

Autonomous manufacturing robots for paint and spray processes are key to eliminating rework and enhancing the quality and productivity of existing finishing operations.

Omnirobotic provides Autonomous Robotics Technology for Spray Processes, allowing industrial robots to see parts, plan their own motion program and execute critical industrial coating and finishing processes. See what kind of payback you can get from it here, or learn more about how you can benefit from autonomous manufacturing systems.