In vision, humans interact with their environment through a dynamic exploration of visual regions of interest by means of eye movements. Understanding the mechanisms behind this efficient information sampling opens perspectives for innovation in human-computer interaction, either by imitating human visual exploration in robots or by creating "natural" interfaces for humans.
Eye tracking is a technique for monitoring and continuously recording eye movements. Eye trackers are now widespread measuring devices that are increasingly easy to use in ecological contexts. In the future, this technology will spread beyond its initial disciplines (cognitive psychology and clinical applications) towards cognitive ergonomics and the enrichment of multimedia content, to take just two of the most emblematic examples from the digital society.
Our project aims to develop new statistical tools for the analysis of eye movement and multimodal data. We plan to organize our research effort around four principal themes. The first theme is concerned with the development of statistical models for multimodal and eye movement data. The three other themes concern: (i) segmentation of spatiotemporal data into coherent cognitive phases, (ii) analysis of spatiotemporal dependencies to explain within- and between-individual differences, and (iii) modelling ocular fixations at a higher spatial resolution to understand the functional role of microsaccades in visual perception.
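To make the segmentation theme concrete, the sketch below shows a classical velocity-threshold (I-VT) rule that splits a gaze trace into fixation and saccade samples. This is only an illustrative baseline under assumed units (gaze positions in degrees, a 100 deg/s threshold, a toy trace); it is not the project's proposed method.

```python
import numpy as np

def segment_ivt(x, y, t, vel_threshold=100.0):
    """Label each gaze sample as fixation (1) or saccade (0) using a
    simple velocity-threshold (I-VT) rule: samples whose point-to-point
    speed (deg/s, assuming x and y are in degrees) stays below the
    threshold are treated as part of a fixation."""
    x, y, t = map(np.asarray, (x, y, t))
    speed = np.hypot(np.diff(x), np.diff(y)) / np.diff(t)
    labels = (speed < vel_threshold).astype(int)
    # the first sample inherits the label of the first transition
    return np.concatenate([labels[:1], labels])

# toy trace sampled at 100 Hz: a stable fixation, a fast saccade,
# then a second fixation (positions are hypothetical)
t = np.arange(10) * 0.01
x = np.array([0.0, 0.1, 0.05, 0.0, 0.1, 5.0, 10.0, 10.1, 10.0, 10.05])
y = np.zeros_like(x)
labels = segment_ivt(x, y, t)
```

Richer segmentations (e.g. into cognitive phases rather than oculomotor events) would replace this hard threshold with the probabilistic models developed in the project.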
The objectives of this Project-Team are in line with those of Persyval-lab and more specifically with the axis “Advanced Data Mining”.
The focus of our theoretical work will be on adapting and improving existing tools from spatial statistics, especially point process models and spatial Markov chains, to the specific challenges of eye movement data, both alone and coupled with electroencephalographic signals. In addition, we will leverage the existing literature on hierarchical modeling to build models able to quantify within- and between-individual differences.
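As a minimal illustration of the point process models mentioned above, the sketch below simulates fixation locations from an inhomogeneous spatial Poisson process by Lewis-Shedler thinning of a homogeneous process. The Gaussian "saliency" intensity centred on the image is a hypothetical stand-in, not a model fitted to real gaze data.

```python
import numpy as np

def sample_inhomogeneous_poisson(intensity, lam_max, width, height, rng):
    """Sample a point pattern on [0, width] x [0, height] from an
    inhomogeneous Poisson process with intensity function `intensity`,
    by thinning a homogeneous process of rate `lam_max`.
    `lam_max` must upper-bound `intensity` over the window."""
    n = rng.poisson(lam_max * width * height)
    pts = rng.uniform([0.0, 0.0], [width, height], size=(n, 2))
    # keep each candidate point with probability intensity / lam_max
    keep = rng.uniform(0.0, lam_max, size=n) < intensity(pts[:, 0], pts[:, 1])
    return pts[keep]

def intensity(x, y):
    # hypothetical saliency: fixations concentrate near the centre
    return 200.0 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.05)

rng = np.random.default_rng(0)
fixations = sample_inhomogeneous_poisson(intensity, 200.0, 1.0, 1.0, rng)
```

A Poisson process assumes independent fixation locations; capturing the spatiotemporal dependencies targeted by the project would require interacting point processes or Markov models on top of such a baseline.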