Abstract: The problem of learning behaviors on an autonomous robot raises many issues related to motor control, behavior encoding, behavioral strategies and action selection. A developmental approach is of particular interest in the context of autonomous robotics: the behavior of the robot is based on low-level mechanisms that together can make more complex behaviors emerge. Moreover, the robot has no a priori information about its own physical characteristics or about its environment; it must learn its own sensorimotor dynamics.

Accordingly, I started my thesis by studying a model of low-level imitation. From a developmental point of view, imitation is present from birth and accompanies the development of young children in multiple forms. It has a learning function, proving an asset in terms of the speed of behavior acquisition, as well as a communication function, playing a role in bootstrapping and maintaining natural, nonverbal interactions. Moreover, even without a real intention to imitate, observing another agent provides enough information to reproduce the task. Initially, my work consisted in applying and testing a developmental model allowing low-level imitation behaviors to emerge on an autonomous robot. This model is built as a homeostatic system which tends to balance its raw perceptual information (movement detection, color detection, angular information from the motors of a robotic arm) through its actions. Thus, when a human moves his hand in the robot's visual field, the robot's perceptual ambiguity leads it to consider the human hand as the extremity of its own arm. From the resulting error, an immediate imitation behavior emerges. Of course, such a model implies that the robot is initially able to associate the visual positions of its effector with the proprioceptive information from its motors.
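The homeostatic principle can be sketched as a one-dimensional toy (purely illustrative, not the actual architecture: the function name, the gain value and the direct identification of the joint angle with the predicted hand position are all assumptions). The controller simply acts so as to cancel the mismatch between the visually perceived "hand" position and the position predicted from proprioception; a human hand mistaken for the robot's own hand is then tracked, and imitation emerges.

```python
# Minimal 1-D sketch of the homeostatic imitation principle:
# act so that the visually perceived position of the "hand"
# matches the position predicted from proprioception.

def imitation_step(joint_angle, perceived_x, gain=0.5):
    """One homeostatic update.

    joint_angle: current proprioceptive state, here interpreted
                 directly as the predicted hand position.
    perceived_x: visual position of the most salient moving blob,
                 which the robot assumes is its own hand.
    Returns the new joint angle after one corrective step.
    """
    error = perceived_x - joint_angle      # perception/proprioception mismatch
    return joint_angle + gain * error      # act to cancel the error

# A human hand held at x = 1.0 in the visual field is mistaken for
# the robot's own hand; cancelling the error makes the arm converge
# toward it, i.e. the arm follows the hand (low-level imitation).
angle = 0.0
for perceived_hand in [1.0, 1.0, 1.0, 1.0, 1.0]:
    angle = imitation_step(angle, perceived_hand)
```

The same loop tracks a moving hand just as well, since each step only reduces the current visual/proprioceptive error.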
Thanks to this imitation behavior, the robot produces movements from which it can learn to build more complex behaviors. How, then, to go from a simple movement to a more complex gesture that may involve an object or a place? I proposed an architecture allowing a robot to learn a behavior as a complex temporal sequence (with repeated elements) of movements. Two models for learning sequences have been developed and tested. The first, based on a model of the hippocampus, learns the timing of simple temporal sequences online. The second, based on the properties of a dynamic reservoir, learns complex temporal sequences online. Building on this work, an architecture learning the timing of a complex temporal sequence has been proposed. Tests in simulation and on a real robot showed the need for a resynchronization mechanism that recovers the correct hidden states when a complex sequence is started from an intermediate state.

In a third phase, my work consisted in studying how two sensorimotor strategies can coexist in a navigation task. The first strategy encodes behavior from spatial information, while the second uses temporal information. Both architectures were tested independently on the same task; the two strategies were then merged and executed in parallel, their responses being fused by a dynamic neural field. A "chunking" mechanism representing the instantaneous state of the robot (current place together with current action) makes it possible to resynchronize the dynamics of the temporal sequences.

In parallel, a number of programming and design problems concerning neural networks appeared. Indeed, our networks can comprise many hundreds of thousands of neurons, which become hard to execute on a single computational unit. How can neural architectures be designed under parallel computation, network communication and real-time constraints?
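The reservoir idea can be illustrated with an echo-state-style toy (a fixed random recurrent layer plus a trained linear readout; the network size, seed and example sequence are assumptions for illustration, not the thesis implementation). In a complex sequence such as A B A C, the repeated element A can only be continued correctly by exploiting the memory carried in the reservoir state:

```python
import numpy as np

rng = np.random.default_rng(0)

# Symbols of a complex sequence (note the repeated 'A', whose
# successor depends on context): A B A C A B A C ...
symbols = ['A', 'B', 'C']
pattern = ['A', 'B', 'A', 'C'] * 50
onehot = {s: np.eye(3)[i] for i, s in enumerate(symbols)}

# Fixed random reservoir (echo-state style): only the readout learns.
n = 100
W = rng.normal(size=(n, n))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # spectral radius ~0.9
W_in = rng.normal(size=(n, 3))

# Drive the reservoir with the sequence, collecting states and
# next-symbol targets.
x = np.zeros(n)
states, targets = [], []
for t in range(len(pattern) - 1):
    x = np.tanh(W @ x + W_in @ onehot[pattern[t]])
    states.append(x.copy())
    targets.append(onehot[pattern[t + 1]])
S, T = np.array(states), np.array(targets)

# Linear readout trained by least squares.
W_out = np.linalg.lstsq(S, T, rcond=None)[0]

# The readout must use the reservoir's memory to decide whether
# an 'A' is followed by 'B' or by 'C'.
pred = S @ W_out
accuracy = np.mean(pred.argmax(axis=1) == T.argmax(axis=1))
```

A memoryless mapping from the current symbol to the next one would necessarily fail on half of the A transitions; the reservoir state disambiguates them because it still carries a trace of the preceding symbol.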
Another part of my work therefore consisted in providing tools for the design, communication and real-time execution of distributed architectures. Finally, in the context of the Feelix Growing European project, I contributed to integrating my work with that of the LASA laboratory at EPFL for the learning of complex behaviors combining navigation, gestures and objects. To conclude, this thesis allowed me to develop new models for learning behaviors in time and in space, new tools to handle very large neural networks, and to discuss, beyond the limitations of the current system, the elements important for an action selection system.
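The distributed-execution constraint can be sketched with a toy of two neuron groups running in parallel and exchanging their activation vectors once per simulated timestep (the use of threads, queue-based message passing and the linear update rule are all illustrative assumptions, not the tools developed in the thesis):

```python
import threading
import queue

# Two neuron groups run in parallel threads and exchange their
# output vectors once per simulated timestep through queues,
# mimicking the parallel-computation and communication constraints.

def run_group(name, weights, inbox, outbox, steps, log):
    act = [0.0] * len(weights)       # each group starts at rest
    for _ in range(steps):
        outbox.put(act)              # publish current activations
        other = inbox.get()          # receive the peer group's output
        # Toy linear update: each unit scales the summed peer input.
        act = [w * (sum(other) + 1.0) for w in weights]
    log[name] = act

a_to_b, b_to_a = queue.Queue(), queue.Queue()
log = {}
steps = 3
t1 = threading.Thread(target=run_group,
                      args=('A', [0.5, 0.5], b_to_a, a_to_b, steps, log))
t2 = threading.Thread(target=run_group,
                      args=('B', [1.0], a_to_b, b_to_a, steps, log))
t1.start(); t2.start()
t1.join(); t2.join()
```

Because each group publishes before reading and both run the same number of steps, the FIFO queues keep the timesteps aligned without any explicit barrier; real-time execution on several machines adds the further constraint that each exchange must complete within the timestep budget.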