
'Freeing' robots: From 3D mapping to autonomous systems

The CSIRO's Michael Bruenig looks forward to a future where robotic systems will be able to easily navigate complex environments, whether a disaster site or a factory floor
The CSIRO's Zebedee. Image credit: CSIRO.


There isn't a radio-control handset in sight as a nimble robot briskly weaves in and out of the confined tunnels of an underground mine. Powered by intelligent sensors, the robot moves intuitively and reacts to the changing conditions of the terrain, entering areas unsafe for people. As it does so, it transmits a detailed 3D map of the entire location to the other side of the world. While this might read like a scenario from a George Orwell novel, it is actually a glimpse of the next generation of robots.

Disruptive technology

Although earlier research prototypes have shown that robotics can in principle tackle the challenges posed by remote locations or harsh environments, we are only just beginning to see the final pieces of the technology puzzle coming together.

According to a recent report by the McKinsey Global Institute, disruptive technologies such as advanced robotics, mobile internet and 3D printing could have an economic impact of between $14 trillion and $33 trillion a year globally by 2025. Many companies already incorporate autonomous technologies to offer better and safer customer experiences.

For service robots, this started decades ago with simple stationary devices like garage door openers, and has extended to autonomous vacuum cleaners and self-driving lawn mowers that are now able to map our gardens and cut the lawn in beautiful transects.

The automotive industry is discovering a market for driver assistance systems that now include parking assistance, autonomous driving in ‘stop and go’ traffic and emergency braking. In a recent demonstration of their ‘self-driving S Class’, Mercedes-Benz drove the same 60-mile route from Mannheim to Pforzheim that Bertha Benz had driven 125 years earlier in the first ever automobile.

The car used for the experiment looks entirely like a production car and relied largely on its standard on-board sensors, using vision and radar to complete the task. However, like other autonomous cars, it also used a crucial extra piece of information to make the task feasible: access to a detailed 3D digital map with which to accurately localise itself in its environment.
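As a rough illustration of that last point, the sketch below (in Python, with entirely made-up map and scan data, and not a description of Mercedes-Benz's actual system) scores candidate vehicle poses by how well a laser scan lines up with a prior occupancy-grid map; the best-scoring pose becomes the localisation estimate.

```python
"""Minimal sketch of localising against a prior map, assuming a 2D occupancy
grid and a lidar scan. All data here is hypothetical and for illustration only."""
import numpy as np

def score_pose(grid, resolution, scan_xy, pose):
    """Count how many scan points land on occupied cells for a candidate pose."""
    x, y, theta = pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    pts = scan_xy @ R.T + np.array([x, y])           # scan into map frame
    cells = np.floor(pts / resolution).astype(int)   # metres -> grid indices
    inside = ((cells[:, 0] >= 0) & (cells[:, 0] < grid.shape[0]) &
              (cells[:, 1] >= 0) & (cells[:, 1] < grid.shape[1]))
    cells = cells[inside]
    return grid[cells[:, 0], cells[:, 1]].sum()

def localise(grid, resolution, scan_xy, candidates):
    """Brute-force search over candidate poses; return the best-scoring one."""
    scores = [score_pose(grid, resolution, scan_xy, p) for p in candidates]
    return candidates[int(np.argmax(scores))]

# Toy example: a map with one occupied wall, a scan that sees the wall 3 m ahead,
# and three candidate poses; the correct pose scores highest.
grid = np.zeros((40, 40)); grid[10, :] = 1.0                         # wall at x = 5 m
resolution = 0.5                                                     # 0.5 m cells
scan = np.column_stack([np.full(50, 3.0), np.linspace(0.0, 9.0, 50)])
candidates = [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (3.0, 0.0, 0.0)]
print(localise(grid, resolution, scan, candidates))                  # -> (2.0, 0.0, 0.0)
```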

Navigating complex and dynamic environments

In these examples, the task (localisation, navigation, obstacle avoidance) is either constrained enough to be solvable or can be solved with the provision of extra information.

There is a third category, in which humans and autonomous systems augment each other to solve tasks. This can be highly effective but requires a human remote operator or, depending on real-time constraints, a human on stand-by.

The question that arises is how to build a robot that can navigate complex and dynamic environments without 3D maps as prior information, while keeping the cost and complexity of the device to a minimum. Using as few sensors as possible, it needs to form a consistent picture of its surroundings so that it can respond to changing and unknown conditions.

This is, of course, the same question that faced researchers at the dawn of robotics and was addressed in the 1980s and 1990s in work on spatial uncertainty. However, the decreasing cost of sensors, the increasing computing power of embedded systems and the ready availability of 3D maps have reduced the importance of answering this key research question.

Combining 3D laser mapping with autonomous robotics systems

In an attempt to refocus on this central question, we tried to stretch the limits of what’s possible with a single sensor, in our case a laser scanner. In 2007, we took a vehicle equipped with laser scanners facing to the left and to the right.

We asked whether it was possible to create a 2D map of the surroundings and to localise the vehicle within that same map without using GPS, inertial systems or digital maps. We achieved the goal, including correcting the map based on loop closures after driving about 100 miles and re-identifying environments we had already “seen”.
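The following toy sketch illustrates the loop-closure idea with hypothetical odometry data; a simple linear error distribution stands in for the full optimisation a real mapping system would run.

```python
"""Toy sketch of loop closure: odometry drift accumulates around a driven loop,
and recognising a previously 'seen' place lets us correct the whole trajectory.
Simulated data only; a linear correction stands in for pose-graph optimisation."""
import numpy as np

rng = np.random.default_rng(0)

# Simulate driving a square loop with slightly biased, noisy odometry.
true_steps = ([( 1.0, 0.0)] * 25 + [(0.0,  1.0)] * 25 +
              [(-1.0, 0.0)] * 25 + [(0.0, -1.0)] * 25)
odom_steps = [(dx + rng.normal(0.02, 0.05), dy + rng.normal(0.02, 0.05))
              for dx, dy in true_steps]

poses = np.cumsum(np.array([(0.0, 0.0)] + odom_steps), axis=0)
drift = poses[-1] - poses[0]            # loop closure: we know we are back at the start,
print("drift before closure:", drift)   # but the odometry estimate says otherwise

# Distribute the accumulated error linearly along the trajectory.
weights = np.linspace(0.0, 1.0, len(poses))[:, None]
corrected = poses - weights * drift
print("drift after closure:", corrected[-1] - corrected[0])   # back to (0, 0)
```

In practice the correction is solved as a non-linear optimisation over the whole trajectory rather than a linear spread, but the principle is the same: recognising a previously visited place and propagating the correction back through the map.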

With this encouraging result, we went a step further and developed “Zebedee”. This is a handheld 3D mapping system incorporating a laser scanner that sways on a spring to capture millions of detailed measurements of a site as fast as an operator can walk through it.

While the system does add a simple inertial measurement unit, it still maximises the information drawn from a very simple and low-cost setup. It achieves this by moving the smarts out of the sensor and into the software, which computes a continuous trajectory of the sensor, specifying its position and orientation at any point in time and taking the actual acquisition speed into account to precisely compute a 3D point cloud.
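To illustrate the idea of a continuous sensor trajectory, here is a simplified sketch with invented poses and measurements (not Zebedee's actual algorithm): each laser return carries a timestamp, the sensor pose is interpolated for that instant, and the range measurement is projected into the world frame accordingly.

```python
"""Sketch of projecting timestamped laser returns through an interpolated sensor
trajectory. Poses and measurements are hypothetical; a real system estimates the
trajectory from the IMU and the scans themselves."""
import numpy as np

def interp_pose(t, t0, pose0, t1, pose1):
    """Linearly interpolate (x, y, yaw) between two timestamped poses."""
    a = (t - t0) / (t1 - t0)
    return (1 - a) * np.asarray(pose0) + a * np.asarray(pose1)

def deskew(ranges, bearings, stamps, t0, pose0, t1, pose1):
    """Turn timestamped range/bearing returns into world-frame points (z = 0 here)."""
    points = []
    for r, b, t in zip(ranges, bearings, stamps):
        x, y, yaw = interp_pose(t, t0, pose0, t1, pose1)
        points.append([x + r * np.cos(yaw + b), y + r * np.sin(yaw + b), 0.0])
    return np.array(points)

# Example: the sensor moves 1 m forward during a single sweep; without the
# interpolation, every return would be projected from the same stale pose.
ranges   = np.full(5, 2.0)
bearings = np.linspace(-0.5, 0.5, 5)      # radians
stamps   = np.linspace(0.0, 0.1, 5)       # seconds within the sweep
cloud = deskew(ranges, bearings, stamps, 0.0, (0, 0, 0), 0.1, (1, 0, 0))
print(cloud)
```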


We’ve recently used this breakthrough technology to create the first 3D map of the interior of Italy’s landmark Leaning Tower of Pisa. Previously, tight spaces and the repetitive nature of the internal structure had prevented it from being captured. Our detailed record may one day be critical to reconstructing the site if it were to suffer catastrophic damage from a disaster such as a fire or an earthquake.

In 2012, CSIRO worked with 3D Laser Mapping, a global developer of laser scanning solutions, to commercialise the research as the ZEB1 product. It is being used to increase efficiency and improve productivity in a number of industries. For example, the technology is helping mining companies better manage their operations and helping security forces quickly scan crime scenes. It has also been used for cave mapping and for mapping cultural heritage sites.

However, the crucial step of bringing the technology back onto the robot still has to be completed. Imagine what becomes possible when equipping robots with such mobile 3D mapping technologies removes the barriers to sending autonomous vehicles into unknown environments, or to having them actively collaborate with humans.

Because of this simplicity, such robots can be significantly smaller and cheaper while remaining robust in terms of localisation and mapping accuracy. Imagine how small a robot could be with just a laser scanner and an inertial unit. If needed, the available 3D information can easily be augmented with vision or hyperspectral imaging to provide a comprehensive understanding of the surroundings, for example real-time 3D heat maps.
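As a rough sketch of how such a 3D heat map could be assembled (the pinhole camera model, intrinsics and data below are all assumptions for illustration), each 3D point is projected into a thermal image and tagged with the temperature sampled there.

```python
"""Sketch of augmenting a point cloud into a 'heat map': project each point into
a thermal camera image (simple pinhole model) and attach the sampled temperature.
The intrinsics, image and points are made up for illustration."""
import numpy as np

def colour_by_temperature(points_cam, thermal_image, fx, fy, cx, cy):
    """Return (x, y, z, temperature) rows for points visible in the thermal image."""
    out = []
    h, w = thermal_image.shape
    for x, y, z in points_cam:
        if z <= 0:                        # point is behind the camera
            continue
        u = int(fx * x / z + cx)          # pinhole projection to pixel coordinates
        v = int(fy * y / z + cy)
        if 0 <= u < w and 0 <= v < h:
            out.append([x, y, z, thermal_image[v, u]])
    return np.array(out)

# Toy data: a 16x16 thermal image with a warm patch, and a few points 2 m away.
thermal = np.full((16, 16), 20.0); thermal[6:10, 6:10] = 65.0     # degrees C
points = np.array([[0.0, 0.0, 2.0], [0.5, 0.0, 2.0], [-2.0, 0.0, 2.0]])
print(colour_by_temperature(points, thermal, fx=8, fy=8, cx=8, cy=8))
```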

Driving innovation in agile manufacturing

A specific area of interest for this robust mapping and localisation is the manufacturing sector, where non-static environments are becoming more and more common and where the cost and complexity of each device have to be kept to a minimum. With the trend towards more agile manufacturing setups, the technology enables lightweight robots that can navigate safely and quickly through unstructured and dynamic environments such as conventional manufacturing workplaces.

The steps towards simpler and more robust robotic systems almost certainly involve maximising the information that can be derived from individual sensors. While using multiple sensors and fusing their information is a necessary step in exploiting the information available to a robotic system, one might wonder whether this step is often taken too early in the design phase.

Currently, the resulting systems carry more sensors than necessary, are difficult to optimise and are less robust than leaner robotic systems built for the same task. The increasing computing power available to small systems further exacerbates the problem, as robotic systems can be designed without much consideration of how much information individual sensors actually provide.

However, it is worth pushing the boundaries of what information can be extracted from very simple systems. As with the Zebedee system, it may be the feasibility of innovative system designs that opens the door to entirely new applications and markets.

Michael Bruenig is deputy chief of CSIRO’s Computational Informatics Division