Short description
(English)
|
The project aims at developing a solution for the robust detection of unusual events that potentially represent a risk, e.g. in airports, railway stations or industrial plants, using multi-view cameras and signal processing / analysis methods applied to smart video surveillance. The research will encompass the study and development of an innovative system architecture for on-line multi-view scene analysis, object classification and tracking that properly handles occlusion phenomena, from which appropriate metadata will be generated for the detection of unusual events; the overall system will then be duly validated through tests. While primarily targeting the video surveillance application field, the results will also be exploitable in further areas such as the monitoring and care of elderly or disabled persons. An important outcome of the project will be its contributions to the emerging JPSearch and MPEG-A MAF standards on video surveillance.
|
Partners and International Organizations
(English)
|
BE, BG, CH, DE, ES, FI, FR, GR, HR, HU, IE, IT, NL, PT, RS, SK, TR, UK
|
Abstract
(English)
|
In this project, we consider the challenging problem of unusual event detection in video surveillance systems. The proposed approach takes a step toward generic and automatic detection of unusual events in terms of velocity and acceleration. First, the moving objects in the scene are detected and tracked. Tracking moving objects is a critical step for smart video surveillance systems. Despite the increase in complexity, multiple-camera systems offer the clear advantages of covering wide areas and handling occlusions by exploiting the different viewpoints. Multiple-camera systems raise several technical problems: installation, calibration, object matching, switching, data fusion, and occlusion handling. In this project, apart from unusual event detection, we address the issue of tracking moving objects in an environment covered by multiple uncalibrated cameras with overlapping fields of view, typical of most surveillance setups. Our main objective is to create a framework that can be used to integrate object-tracking information from multiple video sources.

The proposed technique consists of the following steps. We first run a single-view tracking algorithm on each camera view, and then apply a consistent object labeling algorithm across all views. Next, we verify the objects in each view separately for inconsistencies. Corresponding objects are extracted through a homography transform from one view to the other and vice versa. Having found the corresponding objects in the different views, we partition each object into homogeneous regions. In the last step, we apply the homography transform to find the region map of the first view in the second view and vice versa. For each region (in the main frame and the mapped frame), a set of descriptors is extracted to find the best match between the two views based on region-descriptor similarity. This method is able to deal with multiple objects.
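The cross-view mapping step above can be sketched as follows. This is a minimal illustration of applying a known planar homography to point coordinates, assuming the 3x3 matrix H has already been estimated from correspondences between the two overlapping views; the matrix and the sample points below are illustrative values, not data from the project.

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2-D points from one camera view into another via homography H.

    H   : 3x3 homography matrix (assumed already estimated between views).
    pts : (N, 2) array of pixel coordinates in the source view.
    Returns an (N, 2) array of coordinates in the target view.
    """
    pts = np.asarray(pts, dtype=float)
    homog = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = homog @ H.T                              # project through H
    return mapped[:, :2] / mapped[:, 2:3]             # back to Cartesian

# Illustrative homography: a pure translation by (10, 5) pixels.
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0,  5.0],
              [0.0, 0.0,  1.0]])
print(apply_homography(H, [[0, 0], [100, 50]]))  # → [[10. 5.] [110. 55.]]
```

In the described pipeline, the same transform would be applied to the region maps of each segmented object in both directions, after which region descriptors are compared to find the best cross-view match.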
Track management issues such as occlusion and the appearance and disappearance of objects are resolved using information from all views. The method is capable of tracking both rigid and deformable objects, and this versatility makes it suitable for different application scenarios. To ensure interoperability between devices and systems, the proposed system generates and stores metadata (information about trajectories) in the form of MPEG-7 descriptors. Since the tracking is performed automatically, object trajectories are often imperfect and noisy; a better representation of moving-object trajectories is therefore achieved by means of appropriate pre-processing techniques. Trajectories are then re-sampled at equal distance intervals, and features such as location, velocity and acceleration are extracted, so that each trajectory is represented by a feature vector of fixed dimension. A supervised Support Vector Machine is then used to train the system with one or more typical sequences, and the resulting model is used to test the proposed method on other typical sequences (different scenes and scenarios). Experimental results are promising: the presented approach is capable of detecting unusual events similar to those in the training sequences.
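The trajectory pre-processing step can be sketched as follows: a 2-D trajectory is re-sampled at equal arc-length intervals, velocity and acceleration are approximated by finite differences, and the results are concatenated into a feature vector of fixed dimension. The number of samples (8) and the example trajectory are illustrative assumptions, not values from the project.

```python
import numpy as np

def trajectory_features(points, n_samples=8):
    """Re-sample a 2-D trajectory at equal distance intervals and build a
    fixed-length feature vector of locations, velocities and accelerations.
    n_samples is an illustrative choice, not a value from the project."""
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])      # cumulative arc length
    targets = np.linspace(0.0, arc[-1], n_samples)     # equal-distance stops
    resampled = np.column_stack([np.interp(targets, arc, pts[:, i])
                                 for i in range(2)])
    vel = np.gradient(resampled, axis=0)               # finite-difference velocity
    acc = np.gradient(vel, axis=0)                     # finite-difference acceleration
    return np.concatenate([resampled.ravel(), vel.ravel(), acc.ravel()])

track = [[0, 0], [1, 0], [3, 0], [6, 0]]               # an accelerating object
fv = trajectory_features(track)
print(fv.shape)  # fixed dimension regardless of trajectory length
```

The resulting fixed-length vectors are what a standard supervised classifier such as an SVM can be trained on, one vector per trajectory, independently of how many raw points each track contained.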
|