Manufacturing organizations today must respond quickly to market changes (Agile Manufacturing), re-programming their robots frequently and rapidly so that the same system can produce different products (Routing Flexibility).
Robot programming remains a crucial obstacle to the wider adoption of robots. Despite the great strides made in the field of “Intuitive Robot Programming”, a significant part of the lifetime cost of a robotic cell still lies in the application software.
Learning from Demonstration (LfD) has established itself, over the years, as a promising method for transferring skills from humans to robots. In general, three phases of LfD have been identified: teaching, learning, and autonomous execution. The scientific community has provided several tools for demonstrating a task to a robot, as well as methods for encoding the learned actions and generalizing them to new situations. Although the relation between perception and action has been demonstrated in several psychological and neurobiological studies, it remains important to establish which perceptions are relevant during the execution of a given task.
This line of research presents a framework that encapsulates the three phases involved in robot programming, grouping perceptions according to their nature and establishing rules for selecting the salient ones. In addition, the proposed framework is compatible with the different methods offered by the literature on “task segmentation” and “action generalization”. Finally, all learned tasks are represented as a network, which is able to evolve and reorganize automatically when new tasks are learned (video).
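To make the network idea concrete, the sketch below shows one minimal way such a task network could be represented: tasks are nodes holding action primitives, and edges link tasks that share at least one primitive, so adding a task reorganizes the connections automatically. This is an illustrative assumption, not the framework's actual implementation; all names (`TaskNetwork`, `add_task`, `related`) and the shared-primitive linking rule are hypothetical.

```python
# Hypothetical sketch of a self-reorganizing task network (not the
# authors' implementation): nodes are learned tasks, edges connect
# tasks that share at least one action primitive.
class TaskNetwork:
    def __init__(self):
        self.tasks = {}    # task name -> list of action primitives
        self.edges = set() # frozensets of task-name pairs sharing a primitive

    def add_task(self, name, actions):
        """Learn a new task; links to existing tasks are recomputed."""
        self.tasks[name] = list(actions)
        for other, other_actions in self.tasks.items():
            if other != name and set(actions) & set(other_actions):
                self.edges.add(frozenset((name, other)))

    def related(self, name):
        """Tasks connected to `name` through shared action primitives."""
        return {t for e in self.edges if name in e for t in e if t != name}


net = TaskNetwork()
net.add_task("pick_place", ["reach", "grasp", "move", "release"])
net.add_task("pour", ["reach", "grasp", "tilt"])
print(net.related("pour"))  # pick_place shares "reach" and "grasp"
```

In this toy version the network grows by one node per learned task, and reorganization is just edge recomputation; the actual framework's evolution and reorganization rules are described in the work itself.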