Summary of results (Abstract)
(English)
AMI is concerned with new multimodal technologies to support human interaction, in the context of smart meeting rooms and remote meeting assistants. The project aims to enhance the value of multimodal meeting recordings and to make human interaction more effective in real time. These goals will be achieved by developing new tools for computer-supported cooperative work and by designing new ways to search and browse meetings as part of integrated multimodal group communication, captured from a wide range of devices. The present Integrated Project (IP) is thus very ambitious and addresses a wide range of critical multi-disciplinary activities and applications, covering:

1. Multimodal input interfaces, including multilingual speech signal processing (natural speech recognition, speaker tracking and segmentation) and visual input (e.g., shape tracking, gesture recognition, and handwriting recognition).

2. Integration of and coordination among modalities, including (asynchronous) multi-channel processing (e.g., audio-visual tracking) and multimodal dialogue modelling.

3. Meeting dynamics and human-human interaction modelling, including the definition of meeting scenarios, the analysis of human interaction, and multimodal dialogue modelling.

4. Content abstraction, including multimodal information indexing, summarising, and retrieval.

5. Technology transfer through the exploration and evaluation of advanced end-user applications, assessing the advantages and drawbacks of the above functionalities in different prototype systems.

6. Training activities, including an international exchange programme.

AMI will build upon existing European expertise in the field, while consolidating the ERA in multimodal interaction and directly leveraging (through its partners) several European and national initiatives.