Designing, Implementing and Evaluating Mid-Air Gestures and Speech-Based Interaction


September 18, 2017, Cagliari, Italy

Download the workshop proceedings here

The main goal of this half-day workshop is to investigate effective ways to leverage recent advances in the automatic recognition of mid-air gestures and speech commands. The workshop invites academic researchers and industry practitioners to submit position papers presenting research, designs, concepts, and hardware and/or software solutions in areas related to mid-air gesture, speech-based and multimodal interaction, with an emphasis on (but not limited to) mobile applications and inclusive design for older adults and people with special needs.

The workshop aims to investigate these issues from a multidisciplinary perspective by bringing together experts in interaction design, user experience, usability, accessibility, and innovative hardware and software for mid-air gesture, speech-based and multimodal interaction.


Call for papers

One important topic in human-computer interaction (HCI) is making interaction with technology easier, more intuitive and inclusive. Mid-air gestures and speech-based interaction are considered a natural, intuitive and fun way of interacting with technology, as they can accommodate the needs of a variety of users, including people with age-related cognitive, sensory and motor decline or people with visual impairments.

The main goal of this workshop is to create momentum for interdisciplinary dialogue and collaboration among experts by investigating the design, implementation and evaluation of effective and efficient multi-modal (mid-air and speech-based) interaction.

Position papers centered on mid-air gesture, speech-based and multimodal interaction are solicited. Topics of interest include, but are not limited to:

  • Design for mobile interaction
  • Design of interaction for people with special needs (e.g., older adults and people with visual impairments)
  • Design challenges, user experience, usability and accessibility
  • Feedback and feedforward for mid-air gestures
  • Machine learning and algorithms for mid-air gestures and/or speech recognition
  • System components for mid-air gestures, speech-based and/or multi-modal platforms
  • Showcase of systems, technologies, prototypes and/or interactive applications
  • Datasets, validation
  • Evaluation of mid-air gestures, speech-based and/or multi-modal interaction


All submissions must be in English and should not exceed four (4) pages in length, including references. Position papers must be submitted on EasyChair as .pdf files formatted according to the ACM SIGCHI Publications Format. Each paper will undergo a double review process by members of the Programme Committee.

Please note that at least one author of each accepted paper must register for the workshop by the early registration deadline (August 11, 2017). Note also that the workshop registration fee covers two half-day workshops: one in the morning and one in the afternoon. For further information, please see the CHItaly registration page.

Important dates

July 16 (extended): Position paper submission

July 24: Position paper acceptance notification

August 4: Camera-ready

August 11: Early registration

September 15: Late registration

September 18: Multimodal2017 Workshop

September 19-20: Main conference

Contact us

multimodal2017 [at]