How robots can pick unknown objects

AI enables robots to see, perceive and interact with their environment to solve real-world problems. Find out how it works in this blog article.

Many customers at Sereact ask us how robots can reliably handle unknown objects, especially when those objects lie unstructured and chaotically arranged in a container.

So can the robot see? In a sense, yes. With the help of artificial intelligence, it is indeed possible for robots to perceive their environment. To illustrate this process, I have summarized the main steps the AI goes through.

1 Perception

Robotic perception is a robot's capability to sense the environment in which it operates. Color cameras or 3D sensors are often used for this purpose. Deep learning methods process the high-dimensional sensor data into compact environmental representations: the current sensor data is passed through neural networks in order to, for example, recognize and localize individual objects.
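To make this concrete, here is a minimal toy sketch of one perception step: extracting object candidates from a depth image by labeling connected pixel regions that rise above the bin floor. This is not Sereact's actual system (which uses neural networks on real sensor data); the function name `segment_objects`, the threshold logic, and the depth values are illustrative assumptions.

```python
import numpy as np

def segment_objects(depth, floor_depth, tol=0.01):
    """Flood-fill labeling of pixels closer to the camera than the bin floor."""
    mask = depth < (floor_depth - tol)          # anything above the floor
    labels = np.zeros(depth.shape, dtype=int)
    n = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue
        n += 1                                  # new object candidate found
        stack = [seed]
        while stack:                            # 4-connected flood fill
            r, c = stack.pop()
            if not (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]):
                continue
            if not mask[r, c] or labels[r, c]:
                continue
            labels[r, c] = n
            stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return labels, n

# Toy 6x6 depth image (meters): two blocks resting on a 0.50 m bin floor
depth = np.full((6, 6), 0.50)
depth[1:3, 1:3] = 0.45   # first object, 5 cm tall
depth[4:6, 4:6] = 0.47   # second object, 3 cm tall
labels, n = segment_objects(depth, floor_depth=0.50)
print(n)  # 2 object candidates
```

Real perception pipelines replace the hand-written threshold with learned models, but the output is the same kind of compact representation: which pixels belong to which object.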

2 Reasoning

Reasoning aims to make intelligent decisions and perform high-level task planning to solve the robot's task, e.g. picking an object. You can think of this as a to-do list with multiple tasks in a defined order. Reinforcement learning (RL) approaches are often used here: robots learn to make decisions through trial and error. They interact with the environment and try out actions and behaviors. After each execution, the robot receives a feedback signal, which it uses to optimize its behavior.
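The trial-and-error idea can be sketched as a tiny bandit-style RL loop. The setting is hypothetical: the robot repeatedly chooses among a few grasp candidates and observes only success or failure, and the success probabilities below are made up for illustration.

```python
import random

def learn_grasp_policy(success_probs, episodes=5000, eps=0.1, seed=0):
    """Epsilon-greedy trial and error over a few grasp candidates."""
    rng = random.Random(seed)
    q = [0.0] * len(success_probs)   # estimated success rate per grasp
    n = [0] * len(success_probs)     # times each grasp was tried
    for _ in range(episodes):
        if rng.random() < eps:                        # explore a random grasp
            a = rng.randrange(len(q))
        else:                                         # exploit the current best
            a = max(range(len(q)), key=q.__getitem__)
        reward = 1.0 if rng.random() < success_probs[a] else 0.0
        n[a] += 1
        q[a] += (reward - q[a]) / n[a]                # incremental mean update
    return q

# Made-up true success rates for three grasps; the robot does not know them
q = learn_grasp_policy([0.2, 0.8, 0.5])
best = max(range(len(q)), key=q.__getitem__)
print(best)  # index of the grasp the robot learned to prefer
```

Feedback after each attempt is the only training signal, yet the value estimates converge toward the true success rates, so the robot ends up preferring the most reliable grasp.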

3 Motion Planning

After the robot has formed a plan to solve the task (think of the to-do list), the next step is motion planning. This involves calculating how the robot moves dexterously past obstacles. Collision avoidance in particular is essential, which is why the perception results are taken into account. Different objectives for the movement can be considered, e.g. performing the task as quickly or as energy-efficiently as possible.
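Collision-aware planning with an optimization objective can be illustrated, in heavily simplified form, with A* search on a 2D occupancy grid; real robots plan in higher-dimensional joint space with more elaborate algorithms. The grid, obstacle layout, and cost (path length as a stand-in for "as quickly as possible") are invented for this example.

```python
import heapq

def plan(grid, start, goal):
    """A* on an occupancy grid: shortest collision-free path (1 = obstacle)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, [start])]
    visited = set()
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in visited:
            continue
        visited.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not grid[nr][nc]:
                heapq.heappush(frontier, (cost + 1 + h((nr, nc)), cost + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None  # no collision-free path exists

# 4x4 workspace with a blocked 2x2 region (from perception) in the middle
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
path = plan(grid, (0, 0), (3, 3))
print(len(path))  # 7 waypoints: the shortest route skirts the obstacle
```

The obstacle map comes straight from the perception stage; swapping the cost function (e.g. penalizing joint acceleration instead of step count) would optimize for energy efficiency instead of speed.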

4 Execution

The last stage is to execute the planned motion. Sensors and perception algorithms constantly monitor whether an unforeseen situation occurs – for instance, the picked object being lost during handling. In such a case, the current execution is interrupted to find a new solution for the sub-problem. Since the conditions of the problem have changed, the entire pipeline restarts at stage 1, calling up the perception, reasoning, and motion planning stages again. If the robot finally completes its task successfully, it is rewarded with a cool drink (or rather, a positive feedback signal).
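The monitor-and-restart behavior can be sketched as a loop over the whole pipeline. The callables below are stand-ins for the real stages, and `execute_flaky` simulates an execution that loses the object once before succeeding; all names here are hypothetical.

```python
def run_pipeline(perceive, plan, execute, max_restarts=3):
    """Perception -> reasoning/planning -> execution, restarted on failure."""
    for attempt in range(1, max_restarts + 1):
        scene = perceive()            # stage 1: fresh look at the world
        motion = plan(scene)          # stages 2-3: decide and plan anew
        if execute(motion):           # stage 4: monitored execution
            return attempt            # number of full pipeline runs needed
    raise RuntimeError("task failed after all restarts")

# Stand-ins: execution loses the object once, then succeeds after a restart
calls = {"n": 0}
def execute_flaky(motion):
    calls["n"] += 1
    return calls["n"] > 1    # first run: object lost mid-handling -> failure

attempts = run_pipeline(lambda: "scene", lambda scene: ["waypoints"],
                        execute_flaky)
print(attempts)  # 2
```

The key point is that a failure does not abort the task: it simply triggers another pass through perception, reasoning, and motion planning with the changed state of the world.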