Search Results: 1 - 10 of 100 matches
All listed articles are free for downloading (OA Articles)
Page 1 /100
Compressive Acquisition of Dynamic Scenes  [PDF]
Aswin C Sankaranarayanan,Pavan K Turaga,Rama Chellappa,Richard G Baraniuk
Computer Science , 2012,
Abstract: Compressive sensing (CS) is a new approach for the acquisition and recovery of sparse signals and images that enables sampling rates significantly below the classical Nyquist rate. Despite significant progress in the theory and methods of CS, little headway has been made in compressive video acquisition and recovery. Video CS is complicated by the ephemeral nature of dynamic events, which makes direct extensions of standard CS imaging architectures and signal models difficult. In this paper, we develop a new framework for video CS for dynamic textured scenes that models the evolution of the scene as a linear dynamical system (LDS). This reduces the video recovery problem to first estimating the model parameters of the LDS from compressive measurements, and then reconstructing the image frames. We exploit the low-dimensional dynamic parameters (the state sequence) and high-dimensional static parameters (the observation matrix) of the LDS to devise a novel compressive measurement strategy that measures only the dynamic part of the scene at each instant and accumulates measurements over time to estimate the static parameters. This enables us to lower the compressive measurement rate considerably. We validate our approach with a range of experiments involving video recovery, hyperspectral data sensing, and classification of dynamic scenes from compressive data. Together, these applications demonstrate the effectiveness of the approach.
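The identity at the heart of this measurement strategy can be sketched numerically. If each frame satisfies x_t = C z_t with a d-dimensional state z_t, then compressive measurements y_t = Φ x_t collapse to y_t = (ΦC) z_t, so once the static observation matrix C is known, m ≥ d measurements per frame suffice to recover the state exactly in the noiseless case. The sketch below assumes C is already known and uses invented dimensions; it illustrates the identity, not the paper's full recovery pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m = 64, 3, 12        # pixels per frame, state dimension, measurements per frame

# Illustrative LDS video model (all matrices invented): frame x_t = C z_t, z_{t+1} = A z_t
C = rng.standard_normal((n, d))                          # static observation matrix
A = 0.9 * np.linalg.qr(rng.standard_normal((d, d)))[0]   # stable state dynamics
Phi = rng.standard_normal((m, n))                        # compressive measurement operator
PhiC = Phi @ C                                           # m x d: with m >= d, states are identifiable

z = rng.standard_normal(d)
errs = []
for t in range(5):
    x = C @ z                                            # high-dimensional frame
    y = Phi @ x                                          # only m << n measurements of this frame
    z_hat, *_ = np.linalg.lstsq(PhiC, y, rcond=None)     # recover the low-dimensional state
    errs.append(np.linalg.norm(C @ z_hat - x))           # frame reconstruction error
    z = A @ z                                            # advance the dynamics
```

In the noiseless toy setting the least-squares solve recovers each state, and hence each frame, essentially exactly, which is why measuring only the low-dimensional dynamic part at each instant is enough.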
Categorization of Natural Dynamic Audiovisual Scenes  [PDF]
Olli Rummukainen, Jenni Radun, Toni Virtanen, Ville Pulkki
PLOS ONE , 2014, DOI: 10.1371/journal.pone.0095848
Abstract: This work analyzed the perceptual attributes of natural dynamic audiovisual scenes. We presented thirty participants with 19 natural scenes in a similarity categorization task, followed by a semi-structured interview. The scenes were reproduced with an immersive audiovisual display. Natural scene perception has been studied mainly with unimodal settings, which have identified motion as one of the most salient attributes related to visual scenes, and sound intensity along with pitch trajectories related to auditory scenes. However, controlled laboratory experiments with natural multimodal stimuli are still scarce. Our results show that humans pay attention to similar perceptual attributes in natural scenes, and a two-dimensional perceptual map of the stimulus scenes and perceptual attributes was obtained in this work. The exploratory results show the amount of movement, perceived noisiness, and eventfulness of the scene to be the most important perceptual attributes in naturalistically reproduced real-world urban environments. We found the scene gist properties openness and expansion to remain important factors in scenes with no salient auditory or visual events. We propose that the study of scene perception should move toward a better understanding of the processes behind multimodal scene processing in real-world environments. We publish our stimulus scenes as spherical video recordings and sound field recordings in a publicly available database.
Video Inpainting of Complex Scenes  [PDF]
Alasdair Newson,Andrés Almansa,Matthieu Fradet,Yann Gousseau,Patrick Pérez
Computer Science , 2015, DOI: 10.1137/140954933
Abstract: We propose an automatic video inpainting algorithm which relies on the optimisation of a global, patch-based functional. Our algorithm is able to deal with a variety of challenging situations which naturally arise in video inpainting, such as the correct reconstruction of dynamic textures, multiple moving objects and moving background. Furthermore, we achieve this an order of magnitude faster than the state of the art. We are also able to achieve good quality results on high definition videos. Finally, we provide specific algorithmic details to make implementation of our algorithm as easy as possible. The resulting algorithm requires no segmentation or manual input other than the definition of the inpainting mask, and can deal with a wider variety of situations than is handled by previous work.
1. Introduction. Advanced image and video editing techniques are increasingly common in the image processing and computer vision world, and are also starting to be used in media entertainment. One common and difficult task closely linked to the world of video editing is image and video "inpainting". Generally speaking, this is the task of replacing the content of an image or video with some other content which is visually pleasing. This subject has been extensively studied in the case of images, to such an extent that commercial image inpainting products destined for the general public are available, such as Photoshop's "Content-Aware Fill" [1]. However, while some impressive results have been obtained in the case of videos, the subject has been studied far less extensively than image inpainting. This relative lack of research can largely be attributed to the high time complexity introduced by the added temporal dimension. Indeed, it has only very recently become possible to produce good quality inpainting results on high definition videos, and this only in a semi-automatic manner.
Nevertheless, high-quality video inpainting has many important and useful applications such as film restoration, professional post-production in cinema and video editing for personal use. For this reason, we believe that an automatic, generic video inpainting algorithm would be extremely useful for both academic and professional communities.
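The patch-based idea behind such methods can be sketched in miniature. The toy example below inpaints a hole in a 1-D signal by searching the known part of the signal for the candidate whose surrounding context best matches the context of the hole; this is a drastically simplified stand-in for the paper's global patch-based functional, and all sizes and variable names are invented for illustration:

```python
import numpy as np

signal = np.sin(np.linspace(0, 8 * np.pi, 200))   # stand-in for image/video content
hole = slice(90, 100)                             # inpainting mask
corrupted = signal.copy()
corrupted[hole] = 0.0

P = 10                                            # patch size (same as the hole width here)
# Context on both sides of the hole; a good fill should join these smoothly
ctx = np.r_[corrupted[hole.start - P:hole.start], corrupted[hole.stop:hole.stop + P]]

best_fill, best_cost = None, np.inf
for s in range(len(signal) - 3 * P + 1):
    if s + 3 * P <= hole.start or s >= hole.stop:          # candidate must avoid the hole
        cand = corrupted[s:s + 3 * P]
        # Compare the candidate's side contexts against the hole's contexts
        cost = np.sum((np.r_[cand[:P], cand[2 * P:]] - ctx) ** 2)
        if cost < best_cost:
            best_fill, best_cost = cand[P:2 * P].copy(), cost

corrupted[hole] = best_fill                       # copy the best-matching patch into the hole
mse = np.mean((corrupted[hole] - signal[hole]) ** 2)
```

Because the toy signal is nearly periodic, a well-matching patch exists elsewhere and the fill lands close to the true content; real video inpainting additionally optimises all patches jointly and handles motion, which is precisely what makes it expensive.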
GPU-based Ray Tracing of Dynamic Scenes  [cached]
Martin Reichl,Robert Dünger,Alexander Schiewe,Thomas Klemmer
Journal of Virtual Reality and Broadcasting , 2010,
Abstract: Interactive ray tracing of non-trivial scenes is just becoming feasible on single graphics processing units (GPU). Recent work in this area focuses on building effective acceleration structures, which work well under the constraints of current GPUs. Most approaches are targeted at static scenes and only allow navigation in the virtual scene. So far support for dynamic scenes has not been considered for GPU implementations. We have developed a GPU-based ray tracing system for dynamic scenes consisting of a set of individual objects. Each object may independently move around, but its geometry and topology are static.
Video-to-Video Dynamic Super-Resolution for Grayscale and Color Sequences  [cached]
Sina Farsiu,Michael Elad,Peyman Milanfar
EURASIP Journal on Advances in Signal Processing , 2006,
Abstract: We address the dynamic super-resolution (SR) problem of reconstructing a high-quality set of monochromatic or color super-resolved images from low-quality monochromatic, color, or mosaiced frames. Our approach includes a joint method for simultaneous SR, deblurring, and demosaicing, thereby accounting for the practical color measurements encountered in video sequences. For the case of translational motion and common space-invariant blur, the proposed method is based on a very fast and memory-efficient approximation of the Kalman filter (KF). Experimental results on both simulated and real data are supplied, demonstrating the presented algorithms and their strength.
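What makes a fast KF approximation possible in the translational-motion, space-invariant-blur case is that the per-pixel error covariance can be kept (approximately) diagonal, so the filter reduces to independent scalar updates. The sketch below shows such a pixelwise Kalman measurement update on a toy 1-D static "scene" with invented noise levels; it is a simplified illustration of the diagonal-KF idea, not the paper's full SR/deblurring/demosaicing pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
x_true = rng.uniform(0, 1, 100)       # hypothetical high-res signal (static scene)
sigma2 = 0.05 ** 2                    # measurement noise variance (assumed known)

x_hat = np.zeros_like(x_true)         # running state estimate
P = np.full_like(x_true, 1e3)         # per-pixel error variance (diagonal covariance)

for t in range(50):                   # 50 registered noisy low-quality frames
    y = x_true + rng.normal(0, 0.05, x_true.shape)
    K = P / (P + sigma2)              # Kalman gain, computed pixelwise
    x_hat = x_hat + K * (y - x_hat)   # measurement update
    P = (1 - K) * P                   # variance update

rmse = np.sqrt(np.mean((x_hat - x_true) ** 2))
```

Each pixel's estimate converges toward the true value with variance shrinking roughly as sigma2/t, which is why accumulating frames over time beats any single noisy observation.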
A New Motion Segmentation Method for Dynamic Scenes  [PDF]
Zhenping Xie,Shitong Wang
Information Technology Journal , 2011,
Abstract: Motion segmentation for dynamic scenes is more difficult than motion tracking because it requires precisely extracting the moving objects. Efficient image segmentation methods can be brought to bear on this problem, which motivates the development of new motion segmentation methods for dynamic scenes. In this study, a novel level-set image segmentation method is employed to design a new motion segmentation method for dynamic scenes. From a theoretical standpoint, the level-set method and the Gaussian Mixture Model (GMM) are two valuable tools for natural image segmentation: the former provides good geometrical continuity of segmentation boundaries, while the latter captures the statistical properties of image feature data. Building on this, a level-set image segmentation method integrated with a GMM (called GMMLS) was proposed in previous studies, in which the Gaussian mixture model is used to analyze image features; its effectiveness and good performance have already been demonstrated. Based on GMMLS, a new motion segmentation method for dynamic scenes is proposed in this study, and experimental results on several moving objects in dynamic scenes indicate that the new method is well suited to such practical applications.
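The GMM half of such a pipeline can be made concrete. The snippet below fits a two-component 1-D Gaussian mixture to synthetic "pixel intensities" with a minimal EM loop and assigns each pixel a class label; in a GMM-plus-level-set scheme, labels (or responsibilities) like these would feed the level-set stage that enforces geometric continuity. All data and initial values here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical intensity samples from two regions (e.g. object vs. background)
data = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.8, 0.05, 500)])

# Minimal EM for a 1-D, 2-component Gaussian mixture
mu, var, pi = np.array([0.0, 1.0]), np.array([0.1, 0.1]), np.array([0.5, 0.5])
for _ in range(50):
    # E-step: responsibility of each component for each sample
    lik = pi * np.exp(-(data[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    r = lik / lik.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights, means, and variances
    Nk = r.sum(axis=0)
    mu = (r * data[:, None]).sum(axis=0) / Nk
    var = (r * (data[:, None] - mu) ** 2).sum(axis=0) / Nk
    pi = Nk / len(data)

labels = r.argmax(axis=1)   # per-pixel class assignment for the next stage
```

With well-separated intensity modes the means converge close to the true region intensities, giving the statistical prior that the geometric (level-set) stage then regularises.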
A Hajj And Umrah Location Classification System For Video Crowded Scenes  [PDF]
Hossam M. Zawbaa,Salah A. Aly,Adnan A. Gutub
Computer Science , 2012,
Abstract: In this paper, a new automatic system for classifying ritual locations in diverse Hajj and Umrah video scenes is investigated. This challenging subject has mostly been ignored in the past due to several problems, one of which is the lack of realistic annotated video datasets. The HUER dataset is defined to model six different Hajj and Umrah ritual locations [26]. The proposed Hajj and Umrah ritual location classification system consists of four main phases: preprocessing, segmentation, feature extraction, and location classification. Shot boundary detection and background/foreground segmentation algorithms are applied to prepare the input video scenes for the KNN, ANN, and SVM classifiers. The system improves on state-of-the-art results for Hajj and Umrah location classification, and successfully recognizes the six Hajj rituals with more than 90% accuracy. Experiments demonstrate these promising results.
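Of the three classifiers mentioned, KNN is the simplest to sketch. The snippet below classifies invented per-scene feature vectors (standing in for features extracted from segmented video scenes) by majority vote among the k nearest training examples; the feature dimension, class count, and cluster parameters are all hypothetical, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)
# Hypothetical 8-D feature vectors for three location classes (the paper uses six)
X_train = np.vstack([rng.normal(c, 0.3, (20, 8)) for c in (0.0, 1.0, 2.0)])
y_train = np.repeat([0, 1, 2], 20)

def knn_predict(x, k=5):
    """Classify one feature vector by majority vote among the k nearest neighbours."""
    d = np.linalg.norm(X_train - x, axis=1)      # Euclidean distance to all training scenes
    votes = y_train[np.argsort(d)[:k]]           # labels of the k closest
    return np.bincount(votes).argmax()           # majority vote

X_test = np.vstack([rng.normal(c, 0.3, (10, 8)) for c in (0.0, 1.0, 2.0)])
y_test = np.repeat([0, 1, 2], 10)
acc = np.mean([knn_predict(x) == y for x, y in zip(X_test, y_test)])
```

Since the simulated classes are well separated in feature space, the accuracy is near perfect; in the real system the quality of the segmentation and feature-extraction phases determines how separable the classes actually are.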
Tracking of Objects in Video Scenes with Time Varying Content  [cached]
Dominique Barba,Jenny Benois-Pineau,Amal Mahboubi
EURASIP Journal on Advances in Signal Processing , 2002, DOI: 10.1155/s1687617202000902
Abstract: We propose a method for tracking objects contained in video sequences. Each video object is represented by a set of polygonal regions. A bottom-up approach (spatial segmentation/motion estimation) is applied for the initialisation of the method, and limited human interaction is used to build the semantic map of the first frame in the video sequence. The tracking of this model along a video sequence is based on detecting and indexing new objects in a video scene. Semantic rules are used to label new objects, and the current state of segmentation is validated by forward projection of the background.
Hand-held Video Deblurring via Efficient Fourier Aggregation  [PDF]
Mauricio Delbracio,Guillermo Sapiro
Computer Science , 2015,
Abstract: Videos captured with hand-held cameras often suffer from a significant amount of blur, mainly caused by the inevitable natural tremor of the photographer's hand. In this work, we present an algorithm that removes blur due to camera shake by combining information in the Fourier domain from nearby frames in a video. The dynamic nature of typical videos, with the presence of multiple moving objects and occlusions, makes this problem of camera shake removal extremely challenging, in particular when low complexity is needed. Given an input video frame, we first create a consistent registered version of temporally adjacent frames. Then, the set of consistently registered frames is block-wise fused in the Fourier domain with weights depending on the Fourier spectrum magnitude. The method is motivated by the physiological fact that camera shake blur has a random nature, and therefore nearby video frames are generally blurred differently. Experiments with numerous videos recorded in the wild, along with extensive comparisons, show that the proposed algorithm achieves state-of-the-art results while at the same time being much faster than its competitors.
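The Fourier-domain aggregation step can be sketched in a few lines: registered frames are weighted per frequency in proportion to a power of their Fourier magnitude, so each frequency is dominated by whichever frame preserves it best. In the toy below one of the two 1-D "frames" is left sharp so the improvement is easy to verify; the exponent, kernel, and signal are all invented, and real footage involves many differently blurred, registered frames:

```python
import numpy as np

rng = np.random.default_rng(3)
p = 11                                   # weight exponent; larger p approaches a per-frequency max

sharp = rng.standard_normal(256)         # stand-in for a registered frame that happens to be sharp
k = np.zeros(256)
k[:5] = 0.2                              # normalised box blur kernel (illustrative)
blurred = np.real(np.fft.ifft(np.fft.fft(sharp) * np.fft.fft(k)))

F = np.fft.fft(np.stack([sharp, blurred]), axis=1)
w = np.abs(F) ** p
w /= w.sum(axis=0, keepdims=True)        # per-frequency weights favour the less attenuated frame
restored = np.real(np.fft.ifft((w * F).sum(axis=0)))

mse_blur = np.mean((blurred - sharp) ** 2)
mse_rest = np.mean((restored - sharp) ** 2)
```

At every frequency where the blur attenuates the spectrum, the weight of the blurred frame is pushed toward zero, so the fused result is strictly closer to the sharp frame than the blurred one is; this is the per-frequency "pick the best frame" behaviour that the aggregation exploits.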
Dynamic Scenes Implementation for Radio Detector Echo Simulator  [PDF]
Lu Zhaogan,Liu Long
Information Technology Journal , 2012,
Abstract: To date, a dynamic-scene echo simulator for radio detectors has remained difficult to implement in computer simulation. This report presents a dynamic-scene implementation scheme based on radio detector scene models constructed in 3ds Max (3D Studio Max) 2010. The idea is that the scene is built with geometric data, material properties, and other auxiliary information, and is then converted into geometric data files and VRML 97 format files. These two files are, respectively, the data version and the VRML version of the scene, and can be used independently of 3ds Max 2010; they can thus be integrated into the simulator built by our research project. During the simulation, the objects in the scene may take on different locations and movement states, and this location and movement information is updated in a timely manner in the geometric data files and VRML 97 scenes. The dynamic scene can therefore be displayed by most VRML 97 browsers as the simulation runs. Finally, the radio detector echo simulator verified the dynamic scene implementation.

Copyright © 2008-2017 Open Access Library. All rights reserved.