Programme

Download the Workshop proceedings here


9:00 - 9:30 Welcome and introduction

9:30 - 10:00 Invited talk: Lucio Davide Spano. Bridging Perspectives in Gestural Interaction


10:00 Jean-Matthieu Maro and Ryad Benosman. Mid-air Gesture Recognition Using an Event-based Vision Sensor

10:20 Carmelo Ardito, Maria Teresa Baldassarre, Paolo Buono, Danilo Caivano, Giuseppe Desolda, Massimiliano Morga and Antonio Piccinno. Mid-Air Gestures in Mixed Reality: Issues and Challenges


10:45 - 11:15 Coffee Break


11:20 Christopher Reeves. Challenges of novel multimodal interaction techniques with smartphones for visually impaired users

11:40 Giorgina Cantalini. Levels of Gesture / Prosody synchronization in speech: A comparison between spontaneous and acted speech

12:00 Marco Valentino, Antonio Origlia and Francesco Cutugno. Multimodal Speech and Gestures Fusion for Small Groups


12:20 - 13:00 Final discussion


***

We are very pleased to welcome Lucio Davide Spano as our keynote speaker for Multimodal!

Bridging Perspectives in Gestural Interaction

Lucio Davide Spano, University of Cagliari

The introduction to the mass market of gesture-tracking devices designed for entertainment and games has provided standard hardware for developing interfaces that respond to the user’s movements. Even though this has fostered the adoption of gestural interfaces, we are still far from having well-defined standards for designing and developing them.

From a design perspective, 3D gestures lack a well-established vocabulary that supports interaction across different applications. Several taxonomies have been defined in the literature, but most of them focus on how gestures are performed rather than on their interaction semantics. Designers therefore need ways to communicate the association between movements and their effects. On the one hand, so-called Natural User Interfaces must be self-revealing in order to build the interaction on the user’s intuition. On the other hand, the recognition capabilities of gestural devices force designers to select gestures that can be easily recognized, even if they are less usable or understandable.

Introducing gesture guidance helps designers balance gesture usability and recognition accuracy. Since gestures have a perceivable duration from the user’s point of view, users need to understand both which parts of the movement the application has already interpreted (feedback) and how they can complete it (feedforward). Unfortunately, this requires support for partial gesture recognition, which the classification techniques that guarantee good recognition accuracy do not provide.

In this talk, we will describe the connections and the issues we are currently facing in filling the gap between these different perspectives. We will introduce declarative gesture modelling for describing the temporal evolution of gestures, showing its advantages for supporting feedback and feedforward. In addition, we will show how such techniques can be connected with state-of-the-art classifiers, such as Hidden Markov Models.
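
As an illustration only (not the specific framework presented in the talk), the following Python sketch conveys the general idea behind declarative gesture modelling: a composite gesture is declared as a sequence of sub-gestures, and partial matches expose what has already been recognized (feedback) and what is still missing (feedforward). All names, labels, and the classifier feeding the model are hypothetical.

# Hypothetical sketch: a composite gesture declared as an ordered
# sequence of sub-gestures, with feedback/feedforward from partial matches.

class Sequence:
    """A gesture declared as an ordered sequence of sub-gesture labels."""
    def __init__(self, name, steps):
        self.name = name
        self.steps = steps        # e.g. ["hand up", "swipe right"]
        self.completed = 0        # index of the next expected sub-gesture

    def feed(self, recognized_step):
        """Advance the model with a sub-gesture reported by a classifier."""
        if self.completed < len(self.steps) and recognized_step == self.steps[self.completed]:
            self.completed += 1
            return True
        return False

    def feedback(self):
        """The part of the gesture the application has already interpreted."""
        return self.steps[:self.completed]

    def feedforward(self):
        """The movements still needed to complete the gesture."""
        return self.steps[self.completed:]

# Usage: drive the model with sub-gesture labels coming from a recognizer
# (for instance, an HMM classifying short movement segments).
swipe = Sequence("page turn", ["hand up", "swipe right"])
swipe.feed("hand up")
print(swipe.feedback())     # ['hand up']      -> shown to the user as feedback
print(swipe.feedforward())  # ['swipe right']  -> shown as feedforward

The design point of the sketch is that the declarative description, not the classifier, keeps track of the gesture's temporal evolution, which is what makes partial recognition (and therefore guidance) possible.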

Short Bio

Lucio Davide Spano is an Assistant Professor at the University of Cagliari, Italy. He teaches Human-Computer Interaction and Web Programming in the Computer Science programme. He received his PhD in Computer Science from the School for Graduate Studies “Galileo Galilei” at the University of Pisa in 2013. He previously worked in the Human Interfaces in Information Systems laboratory at ISTI-CNR in Pisa.

His main research interest is Human-Computer Interaction (HCI). In particular, he focuses on gestural interaction, model-based approaches for gesture interfaces, End-User Development, advanced user interfaces, novel interaction techniques and visualisations, virtual and augmented reality, and mobile museum guides.

He has co-authored several papers in refereed journals and international conferences, and he has served on the programme committees of several international conferences and workshops in Human-Computer Interaction (CHI, EICS, IUI, Interact, Mobile HCI). He has collaborated on several projects funded by the European Commission (Serenoa FP7 STREP p.n. 258030, ServFace FP7 STREP 216699, Artemis Smarcos p.n. 100249). He has been a member of the Model-Based User Interface Working Group of the World Wide Web Consortium (W3C).