Research unit
EU FRP
Project number
00.0603
Project title
COGVISYS: Cognitive Vision Systems
Project title (English)
COGVISYS: Cognitive Vision Systems

Recorded texts


Category / Text
Keywords
(English)
High-level vision; cognitive vision; scene understanding;
Information Processing; Information Systems; Innovation; Technology Transfer; Telecommunications
Alternative project numbers
(English)
EU project number: IST-2000-29404
Research programmes
(English)
EU programme: 5th Framework Research Programme - 1.2.4 Essential technologies and infrastructures
Short description
(English)
See abstract
Partners and international organisations
(English)
Coordinator: Universität Karlsruhe (D)
Abstract
(English)
Based on currently available digital sensor, storage, and processing technologies, as well as on recent progress in computer vision and artificial intelligence, the design of robust and versatile computer vision systems will be pushed to a level where one can start to work seriously on the development of a 'cognitive vision system'. The project aims at improving cue extraction and integration, developing categorisation techniques, and collaborating with the AI community on the explicit representation of knowledge. Cognitive vision will be demonstrated for traffic surveillance, sign language understanding, and video annotation for different applications. Making explicit the knowledge incorporated into these systems provides the basis for adapting them during the third year by exchanging the required knowledge domains.

Objectives:
The central goal is to build a vision system that can be used in a wider variety of fields and that is re-usable: self-adaptation is introduced at the level of perception, categorisation capabilities are provided, and the knowledge base at the level of reasoning is made explicit, thereby enabling it to be changed. To make these ideas concrete, CogViSys aims at developing a virtual commentator that can translate visual information into a textual description. This is the unifying theme of the project.
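The virtual-commentator idea — perception, categorisation, and reasoning over an exchangeable knowledge base, ending in a textual description — can be sketched roughly as follows. This is a minimal illustrative sketch, not the project's actual design; all class, function, and variable names here are hypothetical:

```python
from dataclasses import dataclass

# Hypothetical sketch of the virtual-commentator pipeline:
# cue extraction -> categorisation (class membership) ->
# reasoning over an explicit, exchangeable knowledge base -> text.

@dataclass
class Cue:
    kind: str    # e.g. "appearance", "motion"
    value: str   # grossly simplified; real cues would be numeric features

def extract_cues(frame: str) -> list[Cue]:
    """Stand-in for low-level cue extraction from an image frame."""
    # Toy rule: treat each word of the frame description as one cue.
    return [Cue(kind="appearance", value=w) for w in frame.split()]

def categorise(cues: list[Cue], categories: set[str]) -> list[str]:
    """Assign class membership rather than recognising specific instances."""
    return [c.value for c in cues if c.value in categories]

def describe(classes: list[str], knowledge: dict[str, str]) -> str:
    """Reason over an explicit knowledge base that can be swapped out."""
    phrases = [knowledge[c] for c in classes if c in knowledge]
    return "; ".join(phrases) if phrases else "nothing recognised"

# Toy traffic-surveillance knowledge base; exchanging this dict would
# retarget the same pipeline to another domain (e.g. sign language).
traffic_kb = {"car": "a vehicle is present", "lane": "a road lane is visible"}

classes = categorise(extract_cues("car lane sky"), categories={"car", "lane"})
print(describe(classes, traffic_kb))
# -> a vehicle is present; a road lane is visible
```

Because the domain knowledge lives in an explicit, separate structure rather than being baked into the perception code, changing the application domain amounts to swapping that structure — which is the re-usability claim made above.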

In order to build this virtual commentator, several conceptual subgoals have to be achieved. It is crucial that the more cognitive processes can start from a firm basis, so some effort will go into state-of-the-art cue integration. Rather than recognising particular textures, objects, or motions, CogViSys aims at recognising instantiations of classes thereof; hence a key goal is to make substantial progress in the area of categorisation. Approaches will be developed to express and use knowledge about the interpretation of scenes explicitly.

Work description:
CogViSys will consider different aspects which together should yield a vision system of higher robustness and flexibility than current systems. A first issue is the extraction of high-quality input, i.e. making sure that the initial cues on which all further steps are built are of as high a quality as possible. This work includes the development of a framework to integrate cues and to use them for image segmentation and object tracking. A categorisation step follows, in which classes of surface types, motions, objects, actions, events, and scenes are recognised. The work will include ways of easily learning models of such classes. Classes will also form hierarchies, although at this early stage of categorisation work the emphasis will be on the 'basic' levels, as perception psychology would call them. In contrast to traditional vision, this part is not about the recognition of identical representatives of such classes, but about the determination of class membership. A third step brings in domain knowledge. This is formulated in explicit terms, so that the field of application can be changed simply by changing the explicit rules. New query languages and qualitative spatio-temporal representations are studied. The work will culminate in a number of demonstrators, each an instance of a virtual commentator. The specific applications are in the domains of traffic surveillance, visual sign language understanding, and video annotation and textual description. The latter demonstrator will also be realised in a wearable form. These demonstrators will be used to show transferability to related fields of application.
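The first step above — integrating several weak cues into one more reliable measurement for segmentation or tracking — can be sketched as a simple weighted fusion. The cue names, weights, and fusion rule here are illustrative assumptions, not the project's actual framework:

```python
# Minimal sketch of cue integration: fuse per-pixel confidence scores
# from several weak cues (colour, motion, texture) into one segmentation
# score. The weighting scheme is an illustrative assumption only.

def fuse_cues(cue_scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of per-cue confidences, each in [0, 1]."""
    total = sum(weights[name] for name in cue_scores)
    return sum(score * weights[name] for name, score in cue_scores.items()) / total

# Colour and motion agree that the pixel belongs to a moving object;
# texture is ambiguous, so it contributes less to the fused score.
weights = {"colour": 0.4, "motion": 0.4, "texture": 0.2}
scores = {"colour": 0.9, "motion": 0.8, "texture": 0.5}
print(round(fuse_cues(scores, weights), 2))  # -> 0.78
```

A fixed weighted average is of course the simplest possible integration rule; the point of the framework described above is precisely that such combination can be made adaptive rather than hard-coded.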

Milestones:
M6: preliminary cue extraction ready for use
M12: demonstration of surface categorisation, specification of query language, joint facilities on Cognitive Vision
M18: cue-integrated data-driven tracking
M24: demonstration of object categorisation, automatic vehicle categorisation and tracking, knowledge base made explicit for the exemplars, first version of demonstrators, joint Workshop on Cognitive Vision
M30: model-based feedback for tracking, demonstration of scene categorisation
M36: transfer of knowledge base for the exemplars, final demonstrators, final scientific workshop.
Database references
(English)
Swiss Database: Euro-DB of the
State Secretariat for Education and Research
Hallwylstrasse 4
CH-3003 Berne, Switzerland
Tel. +41 31 322 74 82
Swiss Project-Number: 00.0603