The crucial role of A.I.

All of the hardware solutions developed in the project require artificial intelligence (A.I.) components in order to operate.

For a humanoid robot to collaborate with a human in reducing biomechanical risk, the robot's cognitive and "physical" intelligence must be enhanced. A.I. underpins the robot's movements, its self-localization within the environment, the recognition of objects and the estimation of their shape and position, and the perception of humans who intend to collaborate with it. Incorporating A.I. into the sensorized suits and shoes makes it possible to process the data needed for effective risk-prevention strategies.




Collaborative Lifting Tasks for Risk Reduction

Control, planning, estimation, and A.I. techniques allow the ergoCub robot to perform lifting tasks in collaboration with a sensorized human. Lifting is particularly relevant: performed without robot support it carries a high biomechanical risk, whereas collaboration with the robot reduces the overall risk of the task.


Another essential aspect the project has focused on is robot locomotion, for example walking. The ergoCub robot can now walk at a speed similar to a human's, which was not possible for its predecessor, iCub3.

Load Transportation

The ergoCub robot can walk while carrying heavy loads. Currently, the robot can transport several kilograms of load, but the goal is to work on the robot's artificial intelligence to significantly increase this carrying capacity.

Autonomous Navigation in a Simulated Warehouse

Based on the robot's sensors, A.I. components have been developed to enable the robot to locate itself in a warehouse and plan a path necessary for its movement. For example, the robot can plan a path between two warehouse shelves while avoiding unexpected obstacles, potentially optimizing the arrangement of objects inside the warehouse for workers' ergonomics.
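To illustrate the kind of path planning described above, here is a minimal sketch of A* search on a 2D occupancy grid, where two obstacle columns stand in for warehouse shelves. This is an illustrative assumption, not the project's actual navigation stack, and the map, cell layout, and function names are invented for the example.

```python
import heapq

def plan_path(grid, start, goal):
    """A* search on an occupancy grid (0 = free cell, 1 = obstacle).

    Returns a list of (row, col) cells from start to goal, or None
    if the goal is unreachable.
    """
    rows, cols = len(grid), len(grid[0])

    def h(cell):  # Manhattan-distance heuristic (admissible on a 4-connected grid)
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), start)]
    came_from = {start: None}
    cost = {start: 0}

    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            # Reconstruct the path by walking the parent links backwards.
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_cost = cost[current] + 1
                if (nr, nc) not in cost or new_cost < cost[(nr, nc)]:
                    cost[(nr, nc)] = new_cost
                    came_from[(nr, nc)] = current
                    heapq.heappush(frontier, (new_cost + h((nr, nc)), (nr, nc)))
    return None

# Two "shelf" columns with gaps the robot must route through.
warehouse = [
    [0, 1, 0, 1, 0],
    [0, 1, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 1, 0, 0, 0],
    [0, 1, 0, 1, 0],
]
path = plan_path(warehouse, (0, 0), (4, 4))
```

A real planner would run on a metric map built from the robot's sensors and replan as unexpected obstacles appear; the grid-and-replan structure, however, is the same.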

Recognition of Workers' Intentions

To collaborate with workers, the ergoCub robot must be able to recognize their intentions and the shape and position of objects they interact with. For this purpose, A.I. vision modules have been developed for object recognition, localization, shape estimation, grip control, and manipulation.

Automatic Manipulation

To interact with people, algorithms have been developed to recognize human intentions, such as whether the person is delivering or receiving a load or simply wants to shake hands or greet the robot. The vision techniques developed in this context can be extended or complemented to integrate risk prevention algorithms using information from wearable sensors.



Postural analysis

In real time, A.I. assesses the posture of the human wearing the sensorized suit. Specifically, the implemented A.I. can evaluate the position, velocity, and acceleration of each sensorized limb of the worker.
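The velocity and acceleration of a tracked limb can be estimated from its sampled positions by numerical differentiation. The sketch below assumes fixed-rate 3D position samples and uses central finite differences; the sampling rate and the constant-velocity example data are assumptions for illustration, not the project's actual pipeline.

```python
import numpy as np

def differentiate(positions, dt):
    """Estimate velocity and acceleration of a tracked limb segment
    from position samples using central finite differences.

    positions: (T, 3) array of x, y, z samples taken at fixed period dt.
    Returns (velocity, acceleration), each of shape (T, 3).
    """
    velocity = np.gradient(positions, dt, axis=0)      # first derivative
    acceleration = np.gradient(velocity, dt, axis=0)   # second derivative
    return velocity, acceleration

# Example: a limb point moving at a constant 0.5 m/s along x, sampled at 100 Hz.
dt = 0.01
t = np.arange(0.0, 1.0, dt)
positions = np.stack([0.5 * t, np.zeros_like(t), np.zeros_like(t)], axis=1)
vel, acc = differentiate(positions, dt)
```

In practice, noisy suit measurements would be low-pass filtered (or passed through a state estimator) before differentiation, since finite differences amplify sensor noise.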

Effort analysis

In real time, A.I. evaluates joint efforts and the compression and shear forces (e.g., between the L5 and S1 vertebrae) based on information from the sensorized suit and shoes, given the subject's height and weight.
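To give a sense of how suit data and anthropometrics translate into spinal loads, the sketch below implements a textbook single-equivalent-muscle static model of L5/S1 loading in the sagittal plane. This is a deliberately simplified stand-in, not the project's algorithm: the ~5 cm erector spinae moment arm, the lever arms, and the example masses are all common illustrative assumptions.

```python
import math

def l5s1_static_forces(torso_mass, load_mass, torso_lever, load_lever,
                       trunk_angle_deg, muscle_lever=0.05, g=9.81):
    """Very simplified static sagittal-plane estimate of L5/S1 loading.

    Single-equivalent-muscle model: one extensor muscle (moment arm
    muscle_lever, ~5 cm in textbooks) balances the moments of the
    upper-body weight and the hand-held load about the L5/S1 disc.
    Lever arms are horizontal distances from the disc to each mass's
    center of gravity. Returns (compression, shear) in newtons.
    """
    torso_w = torso_mass * g
    load_w = load_mass * g
    # Net extensor moment required about L5/S1.
    moment = torso_w * torso_lever + load_w * load_lever
    muscle_force = moment / muscle_lever
    theta = math.radians(trunk_angle_deg)  # trunk flexion from vertical
    gravity = torso_w + load_w
    # Muscle force acts roughly parallel to the spine; gravity is resolved
    # into components along (compression) and across (shear) the disc plane.
    compression = muscle_force + gravity * math.cos(theta)
    shear = gravity * math.sin(theta)
    return compression, shear
```

Because the muscle moment arm is so short, even modest hand loads produce compression forces of several kilonewtons, which is why lifting posture matters so much for biomechanical risk.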

Prediction of risk with early warning

In real time and for predefined tasks, A.I. predicts future risk, alerting the user through vibrations when they are about to engage in a task with potentially high biomechanical risk.
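A minimal way to issue such an early warning is to extrapolate a running risk score a short horizon ahead and fire the alert before the score itself crosses the danger threshold. The sketch below is an illustrative assumption (class name, linear extrapolation, threshold and window values all invented), not the project's predictive model.

```python
from collections import deque

class RiskEarlyWarning:
    """Sketch of a threshold-based early warning: linearly extrapolate a
    risk score a short horizon into the future and signal a vibration
    alert before the threshold is actually crossed.
    """

    def __init__(self, threshold, horizon_s, dt, window=5):
        self.threshold = threshold   # risk level considered dangerous
        self.horizon = horizon_s     # how far ahead to predict, in seconds
        self.dt = dt                 # sampling period of the risk score
        self.samples = deque(maxlen=window)

    def update(self, risk_score):
        """Add a new sample; return True when the vibration alert should fire."""
        self.samples.append(risk_score)
        if len(self.samples) < 2:
            return False
        # Trend over the window (simple finite-difference slope).
        elapsed = (len(self.samples) - 1) * self.dt
        slope = (self.samples[-1] - self.samples[0]) / elapsed
        predicted = self.samples[-1] + slope * self.horizon
        return predicted >= self.threshold

# Risk score rising steadily at 10 Hz; the alert fires while the current
# score (0.5) is still below the threshold (1.0).
warner = RiskEarlyWarning(threshold=1.0, horizon_s=0.5, dt=0.1)
alerts = [warner.update(s) for s in (0.1, 0.2, 0.3, 0.4, 0.5)]
```

The point of the early warning is exactly this gap: the wearer is vibrated while the predicted, not the current, risk exceeds the threshold, leaving time to correct the movement.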