Meet the engineer teaching robots how to get a grip

Doug Morrison, PhD researcher at the Australian Centre for Robotic Vision (ACRV).

Robots might be adept at working on a production line, but ask them to load a dishwasher and you’re likely to end up with a few broken plates.

Australian research into ‘active perception’ could change this – and eventually see high-tech assistants installed in our homes.

The key lies in teaching robots to behave more like humans, said Doug Morrison, PhD researcher at the Australian Centre for Robotic Vision (ACRV).

“You can have a robot doing the same thing over and over very quickly, but as soon as anything changes, you need to reprogram it,” he told create.

“What we want are robots that can work in unstructured environments like your home, workplace, a hospital – you name it.”

Morrison studied electrical engineering and worked in mining and automation before landing at the ACRV, which is headquartered at the Queensland University of Technology in Brisbane. The centre also spans labs at the University of Adelaide, Monash University and the Australian National University.

He has been creating algorithms to help robots respond to their surroundings for the past three years, focusing on teaching them to grasp objects in real time. The ultimate goal is creating robots with the ability to move and think at the same time.

“My research is looking at ways to let the robot be an active participant in its world,” Morrison said.

“We want robots to be able to pick up objects that they’ve never seen before in environments they’ve never seen before.”


Engineering active perception

A key pillar of this is the Generative Grasping Convolutional Neural Network (GG-CNN) Morrison created in 2018, which lets robots more accurately and quickly grasp moving objects in cluttered spaces.

Before this, a robot would look at an object, take up to a minute to think, and then attempt to pick it up. Morrison’s approach speeds up the process.
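For readers curious about what a network like this computes, here is a minimal sketch of the idea in PyTorch. It is an illustrative assumption rather than the published GG-CNN architecture: a small fully convolutional network maps a single depth image to per-pixel maps of grasp quality, grasp angle (encoded as the sine and cosine of twice the angle, since a two-fingered grasp rotated by 180 degrees is the same grasp) and gripper width. All layer sizes are made up for the example.

```python
# Sketch of a GG-CNN-style fully convolutional grasp predictor (PyTorch).
# Layer sizes are illustrative, not the published architecture.
import torch
import torch.nn as nn

class GraspNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder-decoder over a single-channel depth image.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 9, stride=2, padding=4), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 5, stride=2, padding=2,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 16, 9, stride=2, padding=4,
                               output_padding=1), nn.ReLU(),
        )
        # Four per-pixel output maps: grasp quality, angle (as cos/sin of
        # twice the angle, to handle grasp symmetry) and gripper width.
        self.quality = nn.Conv2d(16, 1, 1)
        self.cos2t = nn.Conv2d(16, 1, 1)
        self.sin2t = nn.Conv2d(16, 1, 1)
        self.width = nn.Conv2d(16, 1, 1)

    def forward(self, depth):
        x = self.decoder(self.encoder(depth))
        return self.quality(x), self.cos2t(x), self.sin2t(x), self.width(x)

net = GraspNet()
depth = torch.randn(1, 1, 300, 300)   # stand-in for a camera depth frame
q, c, s, w = net(depth)
angle = 0.5 * torch.atan2(s, c)       # recover the grasp angle per pixel
best = q.flatten().argmax()           # pixel with the highest-quality grasp
```

Because a network this small can run on every camera frame, the grasp prediction can keep up with a moving scene instead of stopping to think.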

He has since built on this by adding another layer that allows a robot to look around as it works, a capability robots have largely lacked until now.

Morrison’s main innovation is the development of a multi-view picking controller, which selects informative viewpoints for an eye-in-hand camera while reaching to grasp. This reveals high-quality grasps that would be hidden in a static view.

According to the ACRV, this active perception approach is the first in the world to focus on real-time grasping by stepping away from static camera positions and fixed data-collection routines.

It is also unique in the way it builds a ‘map’ of grasps in a pile of objects, which updates as the robot moves. This real-time mapping predicts the quality and pose of grasps at every pixel in a depth image.
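A rough sketch of how such a map might be maintained is below. The grid size, the fusion rule (a simple running average of per-cell quality across viewpoints) and the function names are assumptions for illustration; the published controller is more sophisticated.

```python
# Illustrative sketch (not the published controller): fuse per-cell grasp
# quality estimates from successive camera viewpoints into one map, then
# pick the best grasp cell from the fused map.
import numpy as np

GRID = (100, 100)              # world-aligned grasp-map grid (assumed size)
quality_sum = np.zeros(GRID)   # accumulated quality evidence per cell
view_count = np.zeros(GRID)    # number of views that observed each cell

def integrate_view(cell_quality: np.ndarray, observed: np.ndarray) -> None:
    """Fold one viewpoint's per-cell grasp quality predictions into the map.

    cell_quality: the network's quality estimate for each cell, this view.
    observed: boolean mask of cells visible from this viewpoint.
    """
    quality_sum[observed] += cell_quality[observed]
    view_count[observed] += 1

def best_grasp_cell():
    """Return the grid cell with the highest average predicted quality."""
    avg = np.divide(quality_sum, view_count,
                    out=np.zeros(GRID), where=view_count > 0)
    return np.unravel_index(np.argmax(avg), GRID)

# As the arm reaches in, each new frame refines the map, so the controller
# can re-target a better grasp mid-reach rather than committing early.
```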

“The beauty of our active perception approach is that it’s smarter and quicker than static, single viewpoint grasp detection methods thanks to our GG-CNN, which is 10 times faster than other systems,” Morrison said.

“We strip out lost time by making the act of reaching towards an object a meaningful part of the grasping pipeline rather than just a mechanical necessity.

“Like humans, this allows the robot to change its mind on-the-go in order to select the best object to grasp and remove from a messy pile of others.”

Morrison validated his approach by having a robotic arm remove 20 objects, one at a time, from a pile. He achieved an 80 per cent success rate, about 12 percentage points higher than traditional single-viewpoint grasp detection methods.

“[The approach] allows the robot to act very fluidly and gain a better understanding of its world,” he said.

“It makes them much more useful and practical. It means they’re better at grasping in unstructured environments such as a cluttered home or warehouse, where a robot needs to walk around an object, understand what’s there and be able to compute the best way to pick it up.”

Grasping with intent

Currently, one of the main limitations of robotic learning is how data-hungry the algorithms are. To teach robots to grasp, researchers rely either on huge precompiled data sets of objects or on extensive trial and error.

“Neither of these are practical if you want a robot that can learn to adapt to its environment very quickly,” Morrison said.

“We’re taking a step back and looking at automatically developing data sets that allow robots to efficiently learn to grasp effectively any object they encounter, even if they’ve never seen it – or anything like it – before.”

The next step is to teach robots to “grasp with intent”.

“It’s all well and good to be able to pick things up, but there are a lot of tasks where we need to pick up objects in a specific way,” Morrison said.

“We’re looking at everything from household tasks to stacking shelves and doing warehouse tasks like packing things into a box.”

He is also exploring how to use evolutionary algorithms to create new, diverse shapes that can be tested in simulation and 3D printed for robots to practise on.

“We’re looking at coming up with ways of automatically developing shapes that fill gaps in a robot’s knowledge,” Morrison said.

“If you’ve learnt to grasp one object with a handle, seeing more objects with a handle might not help you very much … Seeing something completely different will probably fill a gap in your knowledge and accelerate your training a lot faster than seeing 10,000 more handles.”
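As a toy illustration of that idea, the sketch below runs a novelty-seeking evolutionary loop. Everything in it is a stand-in: shapes are reduced to parameter vectors, and “fitness” simply rewards distance from shapes already in the training archive, standing in for filling a gap in the robot’s knowledge. It is not Morrison’s actual method.

```python
# Toy sketch of a novelty-seeking evolutionary loop for generating shapes.
# The shape encoding, mutation and fitness are stand-ins for illustration.
import random

def random_shape(n=8):
    # A "shape" here is just a parameter vector (e.g. radii of a deformed
    # sphere); a real system would evolve printable 3D meshes.
    return [random.uniform(0.5, 1.5) for _ in range(n)]

def mutate(shape, rate=0.2):
    return [g + random.gauss(0, rate) for g in shape]

def novelty(shape, archive):
    # Reward distance from shapes already seen, so each generation
    # proposes objects unlike those the robot has trained on.
    if not archive:
        return float("inf")
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(dist(shape, seen) for seen in archive)

archive = []                          # shapes already used for training
population = [random_shape() for _ in range(20)]
for generation in range(50):
    ranked = sorted(population, key=lambda s: novelty(s, archive),
                    reverse=True)
    archive.append(ranked[0])         # most novel shape: simulate or print it
    parents = ranked[:5]
    population = [mutate(random.choice(parents)) for _ in range(20)]
```

In a loop like this, 10,000 more handles would score near-zero novelty and never make the archive, while a genuinely unfamiliar shape would be selected straight away.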
