
Automatic learning of next generation human-computer interactions

Abstract: Artificial Intelligence (AI) and Human-Computer Interaction (HCI) are two research fields with relatively little work in common. HCI specialists usually design the way we interact with devices directly from observations and measurements of human feedback, manually optimizing the user interface to better fit users’ expectations. This process is hard to optimize: ergonomics, intuitiveness and ease of use are key features of a User Interface (UI) that are too complex to be simply modelled from interaction data. This drastically restricts the possible uses of Machine Learning (ML) in this design process. Currently, ML in HCI is mostly applied to gesture recognition and automatic display, e.g. advertising or item suggestion. It is also used to fine-tune an existing UI, but as of now it does not take part in designing new ways to interact with computers. Our main focus in this thesis is to use ML to develop new design strategies for overall better UIs. We want to use ML to build intelligent (that is, precise, intuitive and adaptive) user interfaces with minimal handcrafting. We propose a novel approach to UI design: instead of letting the user adapt to the interface, we want the interface and the user to adapt to each other. The goal is to reduce human bias in protocol definition while building co-adaptive interfaces able to further fit individual preferences. In order to do so, we will put to use the different mechanisms available in ML to automatically learn behaviors, build representations and make decisions. We will experiment on touch interfaces, as they are widely used and provide easily interpretable problems. The first part of our work will focus on processing touch data and on using supervised learning to build accurate classifiers of touch gestures. The second part will detail how Reinforcement Learning (RL) can be used to model and learn interaction protocols from user actions. Lastly, we will combine these RL models with unsupervised learning to build a setup that allows new interaction protocols to be designed without the need for real user data.
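
To make the first technical step concrete, below is a minimal, hypothetical sketch of supervised touch-gesture classification. It is not the pipeline developed in the thesis: the hand-picked features (path length, duration, mean speed, straightness), the synthetic data and the choice of a random-forest classifier are assumptions made purely for illustration.

# Illustrative sketch only: generic supervised gesture classification,
# not the thesis's actual method. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Hypothetical per-gesture feature vectors (e.g. path length, duration,
# mean speed, straightness) that would be extracted from raw touch traces.
X = rng.normal(size=(600, 4))
# Hypothetical gesture labels: 0 = tap, 1 = swipe, 2 = pinch.
y = rng.integers(0, 3, size=600)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))

With real touch data, the feature extraction step would replace the random arrays above; the train/test split and classifier evaluation would stay the same.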
Document type: Theses

https://tel.archives-ouvertes.fr/tel-03102842
Contributor: ABES STAR
Submitted on: Thursday, January 7, 2021 - 5:53:21 PM
Last modification on: Friday, January 8, 2021 - 8:47:35 AM
Long-term archiving on: Thursday, April 8, 2021 - 7:49:36 PM

File

these.pdf
Version validated by the jury (STAR)

Identifiers

  • HAL Id: tel-03102842, version 1

Citation

Quentin Debard. Automatic learning of next generation human-computer interactions. Artificial Intelligence [cs.AI]. Université de Lyon, 2020. English. ⟨NNT : 2020LYSEI036⟩. ⟨tel-03102842⟩

Metrics

Record views: 70
File downloads: 73