Abstract: This thesis focuses on the use of computer vision in the context of tightly coupled interaction (TCI) between people and computers. Interaction is tightly coupled over a time interval when the human and artificial systems are continuously engaged in the accomplishment of physical actions that are mutually observable and mutually dependent throughout that interval; moving a graphical object with a mouse is an example of TCI. We model TCI as a closed-loop system composed of two stimulus-response subsystems. This model permits the identification of requirements relevant to the design, implementation, and evaluation of devices in terms of their ability to support TCI: in particular, the ability to operate with a latency of less than 50 ms, and with both a resolution and a static stability suited to the user's task. We then consider the use of computer vision in this context. A review of the two dominant approaches in the domain, model-based vision and appearance-based vision, justifies our choice of the latter: its techniques are less costly in terms of computational complexity and consequently more likely to satisfy the latency requirement. We present computer vision techniques that we have developed in accordance with our resolutely task-driven approach to design. The two final chapters present our technical and ergonomic investigations of two prototype systems: the magic board and the perceptual window. The former uses a computer-vision finger tracker to manipulate drawings, implementing electronic services on an ordinary physical whiteboard. The latter uses a computer-vision face tracker as a new kind of spatial input stream for an ordinary graphical user interface; this input stream is used to navigate within a graphical window.