PhD thesis, 2010

Human‐computer interaction in 3D object manipulation in virtual environments: A cognitive ergonomics contribution

Abstract

This work investigates the cognitive processes involved in assembly/disassembly tasks and applies the findings to the design of 3D virtual environments (VEs). Virtual environments are interactive systems that enable one or more users to interact with simulated objects and scenes, usually in three dimensions and in a realistic fashion, by means of computational techniques covering one or more sensory modalities (vision, touch, haptics, hearing, etc.). Often described as the ultimate direct-manipulation interface, this technology seeks to make the interface eventually 'disappear' so as to provide users with a 'natural' mode of interaction. Virtual reality (VR) is the experience of being within a VE. One objective of VR technology is indeed to exploit natural human behaviour without requiring any learning from its users [Fuchs2003], [Bowman2005]. Moreover, VEs are a stimulating field of research because they involve perceptually and cognitively novel situations [Burkhardt2003]. VEs also offer a large potential of innovative solutions to existing application problems. Among others, assembly tasks are a major focus for VEs [Boud2000], [Brooks1999], [Lok2003‐a], [Lok2003‐b], owing to their numerous potential applications, such as the assembly/disassembly of objects and scientific research (e.g., molecular docking [Ferey2009]). The common feature in these VEs is the use of representations and devices to support users in handling and arranging several distinct elements in three-dimensional (3D) space under specific constraints.

Most current devices and interaction techniques have focused on providing users with high-fidelity sensory stimulation rather than targeting the real-life or task-centred functions associated with the corresponding interfaces. While many contributions have been made to the field of VR, little empirical data has been published. We believe it is very unlikely that better-adapted VEs and assistance to users' tasks, in the specific context of assembly, will emerge by chance [Brooks1999], by repeated trial and error, by tuning what is already at hand, or by more realistic sensory rendering alone, without reference to the specific properties of the tasks, including their cognitive dimension. Consequently, a clear picture of the cognitive processes and constraints at work in real tasks involving spatial manipulation should lead to a significant enhancement of users' interactions with VEs. Such enhancement can come from better or new guidance mechanisms (e.g., video feedback, object collision detection, or collision-avoidance mechanisms) adapted to users' goals and strategies. This project therefore involves work both on the cognitive side and on its implications for 3D interaction in industrial VEs.

The objective of this doctoral work is to contribute to a better understanding of the human factors (HF), including performance and cognitive processes, involved in assisting spatial 3D manipulation and problem-solving in assembly/disassembly tasks in VEs. For that purpose, we compared the performance and strategies of subjects solving a simplified spatial task that required them to assemble pieces into a specified shape, under various interaction conditions in real and virtual environments.
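As a purely illustrative sketch of one guidance mechanism of the kind mentioned above, the following Python fragment shows how object collision detection between axis-aligned blocks might be used to trigger feedback during manipulation. The Block type, its fields, and the feedback function are assumptions introduced for this example; they are not taken from the thesis or its software.

# Hypothetical illustration of collision-based guidance for block manipulation
# in a VE; the Block type and its fields are assumptions, not the thesis system.
from dataclasses import dataclass

@dataclass
class Block:
    # Minimum corner of the block and its extents, in scene units.
    x: float
    y: float
    z: float
    w: float
    h: float
    d: float

def aabb_overlap(a: Block, b: Block) -> bool:
    """True if two axis-aligned blocks interpenetrate (touching faces do not count)."""
    return (a.x < b.x + b.w and b.x < a.x + a.w and
            a.y < b.y + b.h and b.y < a.y + a.h and
            a.z < b.z + b.d and b.z < a.z + a.d)

def collides(moving: Block, placed: list[Block]) -> bool:
    """A guidance mechanism could highlight, warn about, or veto a colliding move."""
    return any(aabb_overlap(moving, other) for other in placed)

# Example: a candidate placement that interpenetrates an already placed block.
placed = [Block(0, 0, 0, 2, 1, 1)]
print(collides(Block(1, 0, 0, 2, 1, 1), placed))  # True -> give the user feedback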
The assembly task chosen was neither very simple, such as a peg-in-hole task as in [Zhang2005], [Pettinaro1999], or [Unger2001], nor highly complex and domain-specific, such as open-heart or liver surgery [Torkington2001], whose results could be applied only to that specific kind of task. The chosen task was semi-complex: users were required to construct a 3D cube using seven rectangular blocks of different sizes and shapes. The methodology had two tiers, real and virtual. For the chosen assembly task, a study was first conducted in a real setting to provide inspiration, input, and insight for the main experiment to follow. The main experiment was similar in design but was conducted in virtual settings, using three interaction modalities: the classical keyboard and mouse, a gestural modality, and a vocal modality.
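For illustration only, the sketch below shows one possible way to represent such a block-assembly task as occupancy of a grid of unit cells and to test whether a set of placed blocks completes the cube without gaps or overlaps. The 3x3x3 grid granularity, the piece sizes, and the placements are assumptions made for the example; the actual seven-block puzzle and the software used in the experiments are not described by this code.

# Illustrative only: representing a cube-assembly task as occupancy of a
# 3x3x3 grid of unit cells. Piece sizes and placements below are invented.
from itertools import product

CUBE = set(product(range(3), repeat=3))  # the 27 unit cells of the target cube

def cells(origin, size):
    """Unit cells occupied by a rectangular block of `size` placed at `origin`."""
    ox, oy, oz = origin
    sx, sy, sz = size
    return {(ox + i, oy + j, oz + k)
            for i in range(sx) for j in range(sy) for k in range(sz)}

def assembly_complete(placements):
    """True if the placed blocks tile the cube exactly: no gaps and no overlaps."""
    occupied = [cells(origin, size) for origin, size in placements]
    union = set().union(*occupied) if occupied else set()
    no_overlap = sum(len(c) for c in occupied) == len(union)
    return no_overlap and union == CUBE

# Three 3x3x1 slabs fill the cube; the real task used seven blocks of differing shapes.
slabs = [((0, 0, z), (3, 3, 1)) for z in range(3)]
print(assembly_complete(slabs))  # True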
Main file: ThA_se_Abbasi2010.pdf (4.01 MB)

Dates and versions

tel-00603331, version 1 (24-06-2011)

Identifiers

  • HAL Id: tel-00603331, version 1

Cite

Sarwan Abbasi. Human‐computer interaction in 3D object manipulation in virtual environments: A cognitive ergonomics contribution. Computer Science [cs]. Université Paris Sud - Paris XI, 2010. English. ⟨NNT : ⟩. ⟨tel-00603331⟩
