Abstract: Efficient motion-recognition methods have been developed using statistical tools. These methods rely on learning primitives in a suitable space, for example the latent space of the joint angles and/or adequate task spaces. The learned primitives are often sequential: a motion is segmented along the time axis. With a humanoid robot, however, a motion can be decomposed into simultaneous sub-tasks. For example, in a waiter scenario the robot has to keep some plates horizontal with one of its arms while placing a plate on the table with its free hand. Recognition therefore cannot be limited to one task per consecutive segment of time. The method presented in this work exploits knowledge of which tasks the robot is able to perform, and of how motion is generated from this set of known controllers, to reverse-engineer an observed motion. This analysis aims to recognize the simultaneous tasks that were used to generate a motion. The method relies on the task-function formalism and on projection into the null space of a task to decouple the controllers. The approach is successfully applied to a real robot to disambiguate motions in different scenarios where two motions look similar but have different purposes.
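The null-space projection mentioned in the abstract is the standard prioritized task-function construction: a secondary task is executed only through joint motions that leave the primary task unaffected. The sketch below is a minimal illustration with numpy, not the paper's implementation; the Jacobians, dimensions, and velocities are hypothetical.

```python
import numpy as np

# Hypothetical example for a 3-DoF arm; all numbers are illustrative only.
np.random.seed(0)
J1 = np.random.randn(2, 3)   # primary task Jacobian (e.g. 2D hand position)
J2 = np.random.randn(1, 3)   # secondary task Jacobian (e.g. plate orientation)

x1_dot = np.array([0.1, 0.0])  # desired primary task velocity
x2_dot = np.array([0.05])      # desired secondary task velocity

# Null-space projector of the primary task: P1 = I - J1^+ J1.
# Any joint velocity of the form P1 @ v produces zero primary-task velocity.
J1_pinv = np.linalg.pinv(J1)
P1 = np.eye(3) - J1_pinv @ J1

# Prioritized resolution: the secondary controller acts only inside
# the null space of the primary task, so the two are decoupled.
q_dot = J1_pinv @ x1_dot \
    + P1 @ np.linalg.pinv(J2 @ P1) @ (x2_dot - J2 @ J1_pinv @ x1_dot)

# The primary task is achieved exactly (J1 @ P1 = 0 by construction).
print(np.allclose(J1 @ q_dot, x1_dot))
```

Because `J1 @ P1 = 0`, the secondary term contributes nothing to the primary task, which is the decoupling property the recognition method relies on when attributing observed motion to simultaneous controllers.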