Abstract: This thesis investigates the use of vision to perform robotic tasks, in particular mobile target tracking by visual servoing.
The aim of the thesis is to estimate the 2-D projection of the object of interest in the image sequence. Such an estimation is necessary for effective mobile target pursuit, and visual servoing is substantially improved by robust and accurate motion estimation. We have developed an original method for the motion estimation of simple objects, based on an adaptive Kalman filter scheme. The proposed method requires no prior information and does not depend on a specific motion model or sensor model: the adaptive Kalman filter adapts its state representation to the current observations so as to best fit the system's dynamics. The method is then extended to a multiple-model approach. Several realizations of the filter, with different noise models, run in parallel, and an artificial neural network assesses, for a given input vector, the probability that each filter computes the optimal estimate. This learning process supervises the filters and compensates for the non-stationary properties of the object's motion.
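To make the idea of an adaptive Kalman filter concrete, the following is a minimal sketch, not the thesis implementation: a 1-D constant-velocity filter whose measurement-noise estimate R is adapted online from the innovation sequence (the adaptation rule, state model, and all parameter values here are illustrative assumptions).

```python
import numpy as np

class AdaptiveKalman1D:
    """Illustrative constant-velocity Kalman filter with innovation-based
    adaptation of the measurement-noise covariance R (a simple stand-in for
    the more general adaptation described in the thesis)."""

    def __init__(self, dt=1.0, q=1e-2, r=1.0, alpha=0.05):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition (pos, vel)
        self.H = np.array([[1.0, 0.0]])             # we observe position only
        self.Q = q * np.eye(2)                      # process-noise covariance
        self.R = np.array([[r]])                    # measurement noise (adapted)
        self.alpha = alpha                          # adaptation rate for R
        self.x = np.zeros((2, 1))                   # state estimate [pos, vel]
        self.P = np.eye(2)                          # estimate covariance

    def step(self, z):
        # Prediction
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Innovation and its covariance
        y = np.array([[z]]) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        # Adapt R toward the observed innovation magnitude (exponential smoothing)
        self.R = (1 - self.alpha) * self.R + self.alpha * (y @ y.T)
        # Correction
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return float(self.x[0, 0])
```

In the multiple-model extension described above, several such filters with different noise models would run in parallel, with a neural network weighting their outputs; this sketch only illustrates the single-filter noise adaptation.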
Visual servoing requires image data as input to carry out robotic tasks. Supervised extensions of Self-Organizing Maps are therefore used to learn the complex relationship between the sensor space and the robot's joint-angle space. This neuro-controller then combines the estimated image information to produce the robot control signals. The proposed method satisfies real-time constraints and supports online learning, based on the sensory-motor correlations observed during the robot's movements.
The efficiency of the method is demonstrated through simulation results and real experiments. Visual servoing tasks are presented that consist in tracking a mobile object with the end-effector of a three-degree-of-freedom robot, for several different objects.