
September 2, 2017, Kos island, Greece.
In conjunction with EUSIPCO 2017
With this workshop we aim to bring together researchers from the disciplines of signal processing, machine learning, computer vision, and robotics, with applications in human-robot and human-computer interaction (HRI/HCI), as related to multimodal and multi-sensor processing.
Over the last decades, an enormous number of socially interactive systems have been developed, making the field of Human-Computer and Human-Robot Interaction (HCI/HRI) a truly motivating challenge. This challenge has grown even greater as such systems have moved out of the lab and into real use cases. The growing potential of multimodal interfaces in human-robot and human-machine communication setups has stimulated people's imagination and motivated significant research efforts in computer vision, speech recognition, multimodal sensing and fusion, and human-computer interaction, which nowadays lie at the heart of such interfaces. In parallel, interdisciplinary research is emerging on applications of multimodal modeling, fusion, and recognition that involve assistive, clinical, affective, and psychological aspects, e.g. dealing with cognitive and/or mobility impairments. From the robotics perspective, designing and controlling robotic devices constitutes an emerging research field in its own right. Its integration with multimodal machine learning models poses many challenging scientific and technological problems that need to be addressed in order to build efficient and effective interactive robotic systems.
These may include, but are not limited to: (a) human motion tracking, multimodal action and gesture recognition, and intention prediction while fusing multimodal sensory data; (b) analysing and modelling human behaviour in the context of physical and non-physical human-robot interaction; (c) developing context- and affect-aware, human-centred systems that act both proactively and adaptively in order to optimally combine physical, sensory, and cognitive modalities; (d) intuitive and natural human-robot communication, ultimately achieving robotic behaviours that emulate the way humans operate and behave while taking social interaction and ethical constraints into account. These tasks become even more challenging when considering special groups of interest, such as children, the ageing population, or other cases that would benefit from assistive, educational, or entertainment technologies based on multimodal sensing and natural HCI/HRI.
Arranging this satellite workshop around EUSIPCO 2017 makes it possible to bring together many researchers from different backgrounds to discuss and advance the current state of the art with respect to (a) signal and speech processing, machine learning, computer vision, and robotics with applications in HRI/HCI; (b) studies and models of clinical and psychological issues related to real-life constraints and use cases, such as cognitive impairments, autism, and dementia; (c) effective usage of large datasets, corpora, and communication models concerning language, semantics, and data annotation. EUSIPCO is a flagship conference addressing the latest developments in research and technology for signal processing and its many applications, and it offers a unique opportunity to attract the breadth of knowledge required.
What the MultiLearn workshop is about
[Photo] Maja Pantic