Summary of results (Abstract)
(English)
LUMINOUS aims to create the next generation of Language-Augmented XR systems, in which natural language-based
communication and Multimodal Large Language Models (MLLMs) enable adaptation to individual, rather than predefined,
user needs and to unseen environments. This will allow future XR users to interact fluently with their environment
while having instant access to constantly updated global and domain-specific knowledge sources for accomplishing
novel tasks. We aim to exploit MLLMs injected with domain-specific knowledge to describe novel tasks on user
demand. These descriptions are then communicated through a speech interface and/or a task-adaptable avatar (e.g., a coach or teacher)
in the form of visual aids and procedural steps for accomplishing the task. Language-driven specification
of the style, facial expressions, and attitudes of virtual avatars will enable generalisable, situation-aware
communication across multiple use cases and sectors. In parallel, the MLLMs will benefit by identifying new objects that
were not part of their training data and describing them in a way that makes them visually recognizable. Our results
will be prototyped and tested in three pilots, focussing on neurorehabilitation (support of stroke patients with language
impairments), immersive industrial safety training, and 3D architectural design review. A consortium of six leading R&D
institutes with expertise in six disciplines (AI, Augmented Vision, NLP, Computer Graphics, Neurorehabilitation,
Ethics) will follow a challenging work plan, aiming to usher in a new era at the crossroads of two of today's most promising
technological developments (LLM/AI and XR), made in Europe.