
ROS: How Well Does it Address Manufacturers’ Needs?

The first time you see a robot perform a specific action, it can be quite awe-inspiring. Seeing robots like the Personal Robot 2 (PR2) clean tables and fetch drinks is certainly a sign that the future is now. Though the concept of having a robot understand what it needs to do is fascinating, how does it actually know what to do and how to do it?

There isn’t a universal answer to this. For the longest time, developers have been able to simplify some elements of robot programming thanks to robotics middleware such as Urbi, OpenRDK, and ROS. Though these platforms all offer different advantages and limitations, ROS stands out from the crowd thanks to one thing: its open-source nature. The ROS repository is free to access, meaning anyone interested in programming robots can start with this middleware for free.

How ROS Came To Life

The Robot Operating System, more commonly known as ROS, started as a project at Stanford University by Keenan Wyrobek and Eric Berger. During their time in grad school, the duo noticed their peers were wasting far too much time trying to program robots – Wyrobek even heard people say they had spent four years trying to make a robot work, with no success – so they decided to create a universal, open-source platform that would allow developers to share their knowledge.

“People who are good at one part of the robotics stack are usually crippled by another[…]” said Berger in an interview for IEEE Spectrum. “Your task planning is good, but you don’t know anything about vision; your hardware is decent, but you don’t know anything about software. So we set out to make something that didn’t suck, in all of those different dimensions. Something that was a decent place to build on top of.”

Since 2018, robot installation numbers have fluctuated, largely due to the pandemic causing significant changes to the labor market. Graph via IFR.

In a separate guest editorial for IEEE Spectrum, Wyrobek specified that he had seen developers spend 90% of their time rewriting other people’s code, with only the remaining 10% left for innovating. Wyrobek later found donors to fund the building of 10 robots and shipped them to 10 different universities, where teams of software engineers built developer tools that would let other developers innovate and build on the software. Essentially, Wyrobek was tired of seeing developers attempt to reinvent the wheel each time, so he and Berger set out to simplify everyone’s lives.

How You Can and Can’t Use ROS

On its own, ROS can’t really do much. There are vast libraries of packages included in the ROS repositories, but ROS itself only provides the canvas on which developers can program and execute their desired tasks.

Using ROS, developers can tie together the three main components of a robot: its actuators, sensors, and control systems. These components are unified with ROS tools, namely topics and messages. Messages are used to plan the robot’s movement and, using a digital twin, developers can ensure their code works without having to test it on a real robot.

These messages travel through ROS via nodes, each of which is essentially an executable file within a ROS package. Every node registers with the ROS Master, which sets up node-to-node communication. The upshot of all this is that programming is an essential part of ROS: developers and programmers have to code each action they want the robot to perform. Without a shared platform, this would be a daunting task, with every team reinventing the wheel. With ROS and its open-source nature, it becomes a much simpler one.
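The topic-based message flow described above can be sketched without ROS installed at all. The following is a minimal, ROS-free Python sketch of the publish/subscribe pattern that ROS topics implement – the class and topic names here are purely illustrative, not the actual rospy API:

```python
# Toy publish/subscribe sketch of how ROS topics connect nodes.
# A "master" registry maps topic names to subscriber callbacks;
# in real ROS, the Master brokers the connection and the nodes
# then exchange messages with each other directly.
from collections import defaultdict
from typing import Callable


class ToyMaster:
    """Stand-in for the ROS Master: routes messages by topic name."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable) -> None:
        # A node registers interest in a topic.
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message) -> None:
        # A node publishes; every subscriber's callback fires.
        for callback in self._subscribers[topic]:
            callback(message)


master = ToyMaster()
received = []

# A "sensor node" publishes wheel-speed readings;
# a "control node" subscribes and collects them.
master.subscribe("/wheel_speed", received.append)
master.publish("/wheel_speed", {"rpm": 120})
master.publish("/wheel_speed", {"rpm": 125})

print(received)  # both messages reached the subscriber
```

In real ROS the same shape appears as `rospy.Publisher` and `rospy.Subscriber` objects tied to typed message definitions, but the routing idea is the same.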

ROS allows developers to simplify the job by using nodes to register requests to the robot and define exactly how it will respond to them.

ROS succeeds in providing this canvas largely because of its community size. Other robotics middleware, like Urbi, aimed to solve the same problems, but with one key difference: Urbi was expensive to license, and while developers still used it, it never built a community like ROS’s. A large community means more shared tools, and consequently more projects pushed to completion in record time.

In fact, the robotics middleware has become so widespread that, as per Bloomberg’s reporting in 2019, 55% of robots shipped in 2024 – over 915,000 units – “will have at least one ROS package installed, creating a large installed base of ROS-enabled robots.”

Additionally, Lian Jye Su, Principal Analyst at ABI Research, claimed that “the success of ROS is due to its wide range of interoperability and compatibility with other open-source projects.” The more ROS expands through community-based packages, the more its adoption rates will climb.

ROS’s free entry point lets developers anywhere in the world start tinkering with different projects and upload them to the ROS repositories once they’re comfortable with their status, leaving them open for other developers to take a chance and improve upon.

A look at ROS’s user interface running on an Ubuntu system (Image via ROS-Industrial)

The Limitations of ROS

When things are free, they tend to come with serious trade-offs. For a project with the breadth and depth of ROS, it’s understandable that it has limitations. Developers aren’t paid when they upload packages to the ROS repositories, nor are they compensated for keeping them updated. The ROS platform itself is updated regularly, but those updates rarely, if ever, expand the range of tasks it can accomplish. As stated earlier, open-source middleware like ROS is built so developers don’t have to reinvent the wheel – not to reinvent it for them.

While ROS can do a lot, its limitations can severely affect a company trying to think outside the box or simply trying to refine its product. One of the main downsides of ROS is the potential lack of updates for certain packages. If a company has been working on a package and the project for which it was made is nearing its conclusion, updates afterward tend to become scarce or non-existent. Such packages are left to die and can become obsolete quickly. If other developers depend on them, their own products may suffer when bugs arise with no one to patch them.

Another area where ROS suffers is operating system compatibility – it only officially supports Ubuntu. (Its successor, ROS 2, runs on Windows and macOS as well, but it is far from a finished product and doesn’t yet offer the same consistency as ROS.) Ubuntu is not a hard real-time operating system, which means ROS cannot guarantee deterministic, real-time control – a serious constraint for many industrial robotics needs, especially as the middleware consumes more processing power and memory.

Finally, ROS lacks support for microcontrollers and embedded chips – it has to run on a full computer. The only real workaround is to run ROS on a Raspberry Pi or similar single-board computer.

Though ROS’s flaws and limitations aren’t numerous, they are impactful, and a company with a narrow, focused idea of what needs to be done should be mindful of these caveats. A platform like ROS was never meant to please everybody, but for a company with simple goals, or for a student trying to get acclimated to the world of robotics, ROS can provide a solution.

Who Uses ROS?

According to ROS’s website, hundreds of companies, from startups to Fortune 500 enterprises, have downloaded over 500,000 different ROS packages for use in their projects. One Canadian company in particular has succeeded in building on ROS: Clearpath Robotics, founded in 2009, develops several robots based on ROS that are programmable with ROS right out of the box.

One of their most popular ROS-powered robots is the Jackal, an unmanned ground vehicle that can autonomously drive itself across a multitude of different terrains. It’s an entry-level robot, but one of the most widely used ROS-powered vehicles at the moment. After over a decade of success with ROS, Clearpath Robotics is even making the switch to its successor, ROS 2, which aims to address many of ROS’s limitations.

Clearpath Robotics uses ROS and ROS2 to ensure that their deployed robots continue to develop and execute complex processes.

But it’s not just Clearpath Robotics using ROS: companies like Fetch Robotics build on the middleware, as does the TurtleBot line of robots. Where Fetch focuses on robots designed for warehousing, TurtleBot is a family of inexpensive personal robot kits made for enthusiasts and researchers rather than as whole solutions for a given industry.

The versatility of ROS can benefit a myriad of different companies in different industries, but it’s not quite the world-changing plug-and-play solution it aims to be.

No matter how a robotic system is configured, an HMI will most often be required to make it easy for operators to manage – ROS doesn’t necessarily make that process easy, however.

Enter the World of AutonomyOS

In contrast to open-source middleware like ROS, there exists a plethora of proprietary platforms designed for more specific uses. Omnirobotic’s AutonomyOS™ is a middleware meant to simplify and broaden how robots are used. While both aim to achieve similar results, AutonomyOS™ flips the script by removing the need to code – the very thing that still drives ROS.

By removing the lengthy coding process, AutonomyOS™ allows for better resource allocation. Gone are the days of spending countless hours trying to find the perfect code to make a robot execute the desired tasks. The logical question to ask is: how does it work if no one is required to program it?

Before the robot executes its actions, it first needs to know what object it will be working on. To analyze the object, the part passes under a set of 3D perception cameras that digitally reconstruct it and make it visible in AutonomyStudio™, the integrated development environment that allows a system to be configured in a virtual space. Though 3D perception can be costly, Omnirobotic enables integrators to deploy 3D cameras using HDR-enhanced sensor fusion, effectively eliminating the need to adjust camera parameters.

With AutonomyOS™, setting up the behaviors of the robot is essential to executing a task. That means no more wasted time on programming movements.

Once the reconstruction is complete, that’s when AutonomyOS™ shines the most. AutonomyOS™ includes a built-in task planner that can interpret any process model and plan the motion required to execute the tasks at hand. Using Hierarchical Task Network (HTN) planning, scenario exploration, and behavioral patterns that end-users can design themselves, AutonomyOS™ converts the specifics of a part’s positions and overall geometry into usable toolpaths.
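AutonomyOS™’s internals aren’t public, so as a generic illustration only: HTN planning works by recursively expanding a compound task into the primitive actions that accomplish it, using a library of decomposition methods. The task names below are hypothetical, chosen to echo the painting workflow described in this article, not taken from the product:

```python
# Toy Hierarchical Task Network (HTN) decomposition.
# Compound tasks are expanded via "methods" into subtasks;
# anything without a method is treated as a primitive action.
# Task names are illustrative only, not AutonomyOS internals.
METHODS = {
    "paint_part": ["scan_part", "plan_toolpath", "spray"],
    "plan_toolpath": ["extract_surfaces", "order_passes"],
}


def decompose(task: str) -> list[str]:
    """Recursively expand a task into an ordered list of primitives."""
    if task not in METHODS:
        return [task]  # primitive action: execute as-is
    plan = []
    for subtask in METHODS[task]:
        plan.extend(decompose(subtask))
    return plan


print(decompose("paint_part"))
# -> ['scan_part', 'extract_surfaces', 'order_passes', 'spray']
```

A real HTN planner also checks preconditions and explores alternative methods when one decomposition fails, which is where the “scenario exploration” mentioned above would come in.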

When the toolpaths are ready, AutonomyOS™ generates the proper motion for the robot to execute the necessary actions. Several elements are considered when planning that motion, such as managing collidable spaces, avoiding singularities and excessive joint stress, and streaming motion through a robot controller for real-time production workflows.

Where ROS Can't Compete

AutonomyOS™ is primarily aimed at high-mix manufacturers, for a variety of applications like spray painting, welding, and sanding. What is “high-mix” manufacturing? It is generally defined as any manufacturing operation that processes more than 100 different SKUs in batches of fewer than 1,000 units each year – basically, far more variation than mass manufacturing.

For AutonomyOS™ to analyze and understand the task it needs to execute, it simply goes through the steps listed in the previous section. ROS, on the other hand, would have to be explicitly programmed to understand the shapes and technicalities of each piece it needs to work on.

With a good set of behaviors, AutonomyOS™ can execute a large number of functions.

Let’s say a factory needs to paint a batch of items – say stools, desks, and drawers – and let’s presume there are at least five different models of each item. If you use ROS to get a robot to paint them, you’d be required to program the robot to understand the shapes and sizes of each item, and to reach odd forms and intricate spaces to maximize the surface area the robot covers with paint.

AutonomyOS™, though, will execute these tasks after having analyzed the items with its 3D perception cameras. Then, using AutonomyStudio™, the end-user can set up the appropriate behaviors to ensure the program will be properly executed – all before the robot has even begun moving.

All Good Things Have A Cost

ROS has its fair share of uses. Without repeating what was listed above, it’s clear that, up to a certain point, ROS can help develop automated systems. Who it helps, however, is more important than how it helps. Its limitations are explained above, but ROS can be especially useful for a company with limited resources and funding that wants to get its feet wet with automation. Given its free entry point, ROS is more of a learning platform than software that can solve a plethora of manufacturing problems.

AutonomyOS™ doesn’t share that low-cost entry point, but its uses far exceed those of ROS. Unlike ROS packages, AutonomyOS™ won’t become obsolete because a developer has stopped working on their project. AutonomyOS™ carries a monthly subscription fee, but with that comes a platform that continues to grow, enabling support for more machines and robotic systems far into the future.

ROS vs. AutonomyOS™: An Uneven Battle

AutonomyOS™ expands the scope of what robotics software can do for manufacturers. That doesn’t mean ROS is bad; it just means that, as free middleware, it is limited by having no developers paid to create new packages. It’s a community-driven project that, even with constant updates, can’t revolutionize the robotics industry. AutonomyOS™ is more advanced in nature, but it’s also for those who are ready for full robot automation in their factories.

With AutonomyOS™ and AutonomyStudio™, it’s never been easier to deploy an autonomous robotic system. Using 3D Perception with AI-based Task Planning and Motion Planning, manufacturing engineers and integrators can configure autonomous robotic systems for value-added processes that allow manufacturers to achieve more consistency and flexibility in production than ever before. Contact us to learn more!
