
Search Results: 1 - 10 of 8870 matches for "Video processing"
All listed articles are free for downloading (OA Articles)
Automated neurosurgical video segmentation and retrieval system  [PDF]
Engin Mendi, Songul Cecen, Emre Ermisoglu, Coskun Bayrak
Journal of Biomedical Science and Engineering (JBiSE) , 2010, DOI: 10.4236/jbise.2010.36084
Abstract: Medical video repositories play an important role in many health-related areas, including medical imaging, research and education, diagnostics, and the training of medical professionals. With the increasing availability of digital video data, indexing, annotating and retrieving this information have become crucial. Since these processes are both computationally expensive and time consuming, automated systems are needed. In this paper, we present a medical video segmentation and retrieval research initiative. We describe the key components of the system, including the video segmentation engine, the image retrieval engine and the image quality assessment module. The aim of this research is to provide an online tool for indexing, browsing and retrieving neurosurgical videotapes, allowing users to retrieve the specific information they are interested in from a long video tape instead of looking through the entire content.
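The abstract does not detail the segmentation engine; a common baseline for cutting a tape into browsable shots is histogram-difference shot-boundary detection. The sketch below illustrates that baseline in Python/NumPy, not the authors' implementation; the bin count and the 0.5 threshold are assumed tuning parameters:

```python
import numpy as np

def histogram_diff(frame_a, frame_b, bins=16):
    """L1 distance between normalized gray-level histograms of two frames."""
    ha, _ = np.histogram(frame_a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(frame_b, bins=bins, range=(0, 256))
    ha = ha / ha.sum()
    hb = hb / hb.sum()
    return float(np.abs(ha - hb).sum())

def detect_shot_boundaries(frames, threshold=0.5):
    """Return indices i where a cut is declared between frames[i-1] and frames[i]."""
    return [i for i in range(1, len(frames))
            if histogram_diff(frames[i - 1], frames[i]) > threshold]

# Two synthetic "shots": three dark frames followed by three bright frames.
dark = [np.full((32, 32), 20, dtype=np.uint8)] * 3
bright = [np.full((32, 32), 200, dtype=np.uint8)] * 3
cuts = detect_shot_boundaries(dark + bright)
```

A real segmentation engine would combine several such cues and smooth the decision over time; the histogram distance alone already localizes the hard cut between the two synthetic shots.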
Continuous Arabic Sign Language Recognition in User Dependent Mode  [PDF]
K. Assaleh, T. Shanableh, M. Fanaswala, F. Amin, H. Bajaj
Journal of Intelligent Learning Systems and Applications (JILSA) , 2010, DOI: 10.4236/jilsa.2010.21003
Abstract: Arabic Sign Language recognition is an emerging field of research. Previous attempts at automatic vision-based recognition of Arabic Sign Language mainly focused on finger spelling and recognizing isolated gestures. In this paper we report the first continuous Arabic Sign Language recognition system, built on existing research in feature extraction and pattern recognition. The development of the presented work required collecting a continuous Arabic Sign Language database, which we designed and recorded in cooperation with a sign language expert. We intend to make the collected database available to the research community. Our system, based on spatio-temporal feature extraction and hidden Markov models, achieves an average word recognition rate of 94%, bearing in mind the use of a high-perplexity vocabulary and unrestricted grammar. We compare the proposed work against existing sign language techniques based on accumulated image difference and motion estimation. The experimental results show that the proposed work outperforms existing solutions in terms of recognition accuracy.
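The accumulated-image-difference baseline mentioned above can be illustrated with a short sketch; the grid size and the pooling into cell means are illustrative assumptions, not the features actually used in the paper:

```python
import numpy as np

def accumulated_difference(frames):
    """Accumulate absolute inter-frame differences into one motion image."""
    acc = np.zeros(frames[0].shape, dtype=np.float64)
    for prev, curr in zip(frames, frames[1:]):
        acc += np.abs(curr.astype(np.float64) - prev.astype(np.float64))
    return acc

def motion_feature_vector(frames, grid=(4, 4)):
    """Mean accumulated motion per grid cell, flattened into a feature vector."""
    acc = accumulated_difference(frames)
    h, w = acc.shape
    gh, gw = grid
    cells = acc[: h - h % gh, : w - w % gw].reshape(gh, h // gh, gw, w // gw)
    return cells.mean(axis=(1, 3)).ravel()
```

Vectors of this kind, computed per frame window, would then feed a hidden Markov model as its observation sequence.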
Architectural Model of a Biological Retina Using Cellular Automata  [PDF]
François Devillard, Bernard Heit
Journal of Computer and Communications (JCC) , 2014, DOI: 10.4236/jcc.2014.214008
Abstract: Developments in neurophysiology focusing on foveal vision have characterized ever more precisely the spatiotemporal processing that is well adapted to regularizing visual information within the retina. The work described in this article focuses on a simplified architectural model based on features and mechanisms of adaptation in the retina. Like the biological retina, which transforms luminance information into a series of encoded representations of image characteristics transmitted to the brain, our structural model allows us to reveal more information in the scene. Our modeling of the different functional pathways permits the mapping of important complementary information types at abstract levels of image analysis, and thereby allows better exploitation of visual clues. Our model is based on a distributed cellular automata network and simulates the retinal processing of stimuli that are stationary or in motion. Thanks to its capacity for dynamic adaptation, our model can adapt itself to different scenes (e.g., bright and dim, stationary and moving) and can parallelize those processing steps that can be supported by parallel calculators.
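A minimal flavor of one cellular-automaton layer, assuming a synchronous 4-neighbour update that approximates a centre-surround response; the actual model's functional pathways and adaptation rules are much richer than this sketch:

```python
import numpy as np

def ca_step(grid):
    """One synchronous cellular-automaton step: each cell outputs its value
    minus the mean of its 4-neighbourhood (a crude centre-surround response
    that suppresses uniform regions and highlights local contrast)."""
    padded = np.pad(grid.astype(np.float64), 1, mode="edge")
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    return grid - neigh
```

On a uniform field the response is zero everywhere, while an isolated bright cell produces a positive centre and a negative surround, mimicking the retina's contrast encoding.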
Video Based Vehicle Detection and its Application in Intelligent Transportation Systems  [PDF]
Naveen Chintalacheruvu, Venkatesan Muthukumar
Journal of Transportation Technologies (JTTs) , 2012, DOI: 10.4236/jtts.2012.24033
Abstract: Video-based vehicle detection technology is an integral part of Intelligent Transportation Systems (ITS), owing to its non-intrusiveness and its comprehensive vehicle-behavior data collection capabilities. This paper proposes an efficient video-based vehicle detection system based on the Harris-Stephens corner detector algorithm. The algorithm was used to develop a stand-alone vehicle detection and tracking system that determines vehicle counts and speeds on arterial roadways and freeways. The proposed system was designed to eliminate the need for complex calibration, to be robust to contrast variations, and to perform well on low-resolution videos. The algorithm's accuracy in vehicle counts and speeds was evaluated, and the performance of the proposed system is equivalent or better compared to a commercial vehicle detection system. Using the developed vehicle detection and tracking system, an advance-warning intelligent transportation system was designed and implemented to alert commuters in advance of speed reductions and congestion at work zones and special events. The effectiveness of the advance warning system was evaluated and its impact discussed.
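The Harris-Stephens corner response at the core of the detector can be computed directly; the sketch below uses central-difference gradients and a 3x3 box window, which are illustrative simplifications rather than the system's actual parameters:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris-Stephens corner response R = det(M) - k*trace(M)^2 per pixel,
    where M is the structure tensor summed over a 3x3 window."""
    img = img.astype(np.float64)
    Iy, Ix = np.gradient(img)          # central-difference image gradients
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box3(a):
        # 3x3 box filter via a sum of shifted, edge-padded copies.
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    Sxx, Syy, Sxy = box3(Ixx), box3(Iyy), box3(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace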
Main Processes for OVS-1A & OVS-1B: From Manufacturer to User  [PDF]
Shixiang Cao, Wenwen Qi, Wei Tan, Nan Zhou, Yongfu Hu
Journal of Computer and Communications (JCC) , 2018, DOI: 10.4236/jcc.2018.611012
Abstract: Commercial remote sensing has driven a revolution in the traditional processing chain. During the development of OVS-1A and OVS-1B, we constructed the main processing pipeline for the ground and calibration systems. Since these two satellites use a color video imaging mode, the underlying video stabilization and color adjustment are vital for end users. In addition, we give a full account of how image quality is improved along the chain, from manufacturing the satellite camera to generating video products. Demo cases from the processing system demonstrate its potential to satisfy end users. Our team also outlines possible improvements for video imaging satellites in the near future.
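The stabilization method is not specified in the abstract; a common building block for global-motion estimation between video frames is FFT-based phase correlation, sketched here under the assumption of a purely cyclic integer translation (real satellite video needs subpixel refinement and non-cyclic handling):

```python
import numpy as np

def estimate_shift(frame_a, frame_b):
    """Estimate the integer (row, col) translation taking frame_a to frame_b
    by phase correlation: the normalized cross-power spectrum inverts to a
    delta peak at the shift."""
    Fa = np.fft.fft2(frame_a)
    Fb = np.fft.fft2(frame_b)
    cross = Fb * np.conj(Fa)
    cross /= np.abs(cross) + 1e-12     # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks in the upper half-range correspond to negative shifts.
    return tuple(p if p <= s // 2 else p - s
                 for p, s in zip(peak, corr.shape))
```

A stabilizer would estimate this shift frame to frame and warp each frame by the negated cumulative motion.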
A Survey on Digital Video Watermarking
Swati Patel, Anilkumar Katharotiya, Mahesh Goyani
International Journal of Computer Technology and Applications , 2011,
Abstract: At the leading edge of the information world, everything is available in the form of digital media. Digital watermarking was introduced to provide copyright protection and owner authentication. Digital video watermarking is the process of embedding a digital code into a digital video sequence; digital video is nothing but a sequence of consecutive still images. In recent years, video-based applications such as pay-per-view, video-on-demand and video broadcasting have become more and more popular, so the requirements for secure video distribution have increased. In this paper, the concept of digital video watermarking is introduced, together with its terminology, principles, properties, applications, and classification. The classification covers the types of key used for embedding/detection, the types of carrier, and the working domains of watermark embedding.
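A minimal example of the embedding step, using least-significant-bit substitution in the spatial domain, one of the simplest working domains such a survey classifies (robust schemes embed in transform domains instead, which this sketch does not attempt):

```python
import numpy as np

def embed_watermark(frame, bits):
    """Embed a bit sequence into the least-significant bits of the first
    len(bits) pixels (raster order) of an 8-bit frame."""
    flat = frame.astype(np.uint8).ravel().copy()
    idx = np.arange(len(bits))
    flat[idx] = (flat[idx] & 0xFE) | np.asarray(bits, dtype=np.uint8)
    return flat.reshape(frame.shape)

def extract_watermark(frame, n_bits):
    """Recover the embedded bits from the least-significant bits."""
    return [int(b) for b in frame.ravel()[:n_bits] & 1]
```

The embedding changes each touched pixel by at most one gray level, so it is imperceptible, but it is also fragile: any recompression of the video destroys it, which is why the key types and working domains surveyed above matter.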
Uncertainty-aware video visual analytics of tracked moving objects
Markus Höferlin, Benjamin Höferlin, Daniel Weiskopf, Gunther Heidemann
Journal of Spatial Information Science , 2011,
Abstract: Vast amounts of video data render manual video analysis useless, while recent automatic video analytics techniques suffer from insufficient performance. To alleviate these issues, we present a scalable and reliable approach that exploits the visual analytics methodology. This involves the user in an iterative process of exploration, hypothesis generation, and verification. Scalability is achieved by interactive filter definitions on trajectory features extracted by the automatic computer vision stage. We establish the interface between user and machine by adopting the VideoPerpetuoGram (VPG) for visualization, and enable users to provide filter-based relevance feedback. Additionally, users are supported in deriving hypotheses by context-sensitive statistical graphics. To allow for reliable decision making, we gather the uncertainties introduced by the computer vision step, communicate this information to users through uncertainty visualization, and allow fuzzy hypothesis formulation when interacting with the machine. Finally, we demonstrate the effectiveness of our approach on the video analysis mini-challenge of the IEEE Symposium on Visual Analytics Science and Technology 2009.
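The interactive trajectory-feature filters can be pictured with a toy sketch; using mean speed as the filter feature and a fixed threshold are illustrative choices, not the VPG interface itself:

```python
def mean_speed(trajectory):
    """Mean per-step displacement of a trajectory given as (x, y) points."""
    dists = [((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
             for (x1, y1), (x2, y2) in zip(trajectory, trajectory[1:])]
    return sum(dists) / len(dists)

def filter_trajectories(trajectories, min_speed):
    """Stand-in for an interactive filter: keep trajectories whose mean
    speed meets a user-chosen threshold."""
    return [t for t in trajectories if mean_speed(t) >= min_speed]
```

In the full system each such filter would also carry the vision stage's uncertainty, so that a borderline trajectory is shown as uncertain rather than silently dropped.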
Real-Time Adaptive Foreground/Background Segmentation
Darren E. Butler, V. Michael Bove, Sridha Sridharan
EURASIP Journal on Advances in Signal Processing , 2005,
Abstract: The automatic analysis of digital video scenes often requires the segmentation of moving objects from a static background. Historically, algorithms developed for this purpose have been restricted to small frame sizes, low frame rates, or offline processing. The simplest approach involves subtracting the current frame from the known background. However, as the background is rarely known beforehand, the key is how to learn and model it. This paper proposes a new algorithm that represents each pixel in the frame by a group of clusters. The clusters are sorted in order of the likelihood that they model the background and are adapted to deal with background and lighting variations. Incoming pixels are matched against the corresponding cluster group and are classified according to whether the matching cluster is considered part of the background. The algorithm has been qualitatively and quantitatively evaluated against three other well-known techniques. It demonstrated equal or better segmentation and proved capable of processing PAL video at full frame rate using only 35%–40% of a GHz Pentium 4 computer.
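A single-pixel sketch of the cluster-based model described above; the match distance, learning rate, and the simplified "highest-weight cluster = background" rule are assumptions for illustration, not the paper's exact likelihood ordering:

```python
class PixelClusterModel:
    """Background model for one pixel: a small set of intensity clusters,
    kept sorted by weight (likelihood of modelling the background)."""

    def __init__(self, max_clusters=3, match_dist=10.0, alpha=0.05):
        self.max_clusters = max_clusters
        self.match_dist = match_dist
        self.alpha = alpha
        self.clusters = []          # list of [centroid, weight]

    def update(self, value):
        """Match the incoming pixel value against the clusters and return
        True if it is classified as background."""
        for c in self.clusters:
            if abs(value - c[0]) <= self.match_dist:
                c[0] += self.alpha * (value - c[0])   # adapt centroid
                c[1] += 1.0                           # reinforce weight
                self.clusters.sort(key=lambda cl: -cl[1])
                # Simplification: background iff the matched cluster is
                # currently the most likely one.
                return c is self.clusters[0]
        # No match: start a new low-weight (foreground) cluster.
        self.clusters.append([float(value), 1.0])
        self.clusters = sorted(self.clusters, key=lambda cl: -cl[1])[:self.max_clusters]
        return False
```

One such model per pixel, updated on every frame, yields the adaptive segmentation: a stable intensity quickly accumulates weight and becomes background, while a passing object fails to match and is flagged as foreground.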
VLSI Architecture for 8-Point AI-based Arai DCT having Low Area-Time Complexity and Power at Improved Accuracy
Amila Edirisuriya, Arjuna Madanayake, Vassil S. Dimitrov, Renato J. Cintra, Jithra Adikari
Journal of Low Power Electronics and Applications , 2012, DOI: 10.3390/jlpea2020127
Abstract: A low-complexity digital VLSI architecture for the computation of an algebraic integer (AI) based 8-point Arai DCT algorithm is proposed. AI encoding schemes for exact representation of the Arai DCT transform based on a particularly sparse 2-D AI representation are reviewed, leading to the proposed novel architecture based on a new final reconstruction step (FRS) having lower complexity and higher accuracy compared to the state-of-the-art. This FRS is based on an optimization derived from expansion factors that leads to small integer constant-coefficient multiplications, which are realized with common sub-expression elimination (CSE) and Booth encoding. The reference circuit [1] as well as the proposed architectures for two expansion factors α = 4.5958 and α′ = 167.2309 are implemented. The proposed circuits show 150% and 300% improvements in the number of DCT coefficients having error ≤ 0.1% compared to [1]. The three designs were realized using both 40 nm CMOS Xilinx Virtex-6 FPGAs and synthesized using 65 nm CMOS general purpose standard cells from TSMC. Post-synthesis timing analysis of the 65 nm CMOS realizations at 900 mV for all three designs of the 8-point DCT core for 8-bit inputs shows potential real-time operation at a 2.083 GHz clock frequency, leading to a combined throughput of 2.083 billion 8-point Arai DCTs per second. The expansion-factor designs show a 43% reduction in area (A) and a 29% reduction in dynamic power (PD) for the FPGA realizations. An 11% reduction in area is observed for the ASIC design for α = 4.5958, with an 8% reduction in total power (PT). Our second ASIC design, with α′ = 167.2309, shows marginal improvements in area and power compared to our reference design, but at significantly better accuracy.
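For reference, the direct orthonormal 8-point DCT-II that fast algorithms such as Arai's factorize into far fewer multiplications; this naive O(N²) form serves only as a correctness baseline, not as the proposed AI architecture:

```python
import math

def dct8(x):
    """Direct 8-point DCT-II with orthonormal scaling. Fast factorizations
    (Arai, AI-based, etc.) compute the same outputs with fewer multiplies."""
    N = 8
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
        scale = math.sqrt(1 / N) if k == 0 else math.sqrt(2 / N)
        out.append(scale * s)
    return out
```

Because the orthonormal transform preserves energy exactly, a hardware design's coefficient error (the "error ≤ 0.1%" figure above) can be measured by comparing its fixed-point outputs against this floating-point reference.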
MISD Compiler for Feature Vector Computation in Serial Input Images
Lucas Leiva, Nelson Acosta
ARPN Journal of Systems and Software , 2011,
Abstract: In this paper, a compiler capable of generating Multiple Instruction Single Data (MISD) architectures for feature vector calculation is presented. The input is a high-level language, sparing developers from low-level design; the output is expressed in a Hardware Description Language (HDL) and can be used for FPGA configuration. An FPGA is a programmable device that allows parallelism, increasing system speed-up. The tool is intended for feature vector calculation over regions of interest (ROIs) in real-time video applications, where the ROI pixels arrive serially. It is also possible to evaluate the vector at design time, allowing system prototyping. The compiler optimizes the response time and the number of registers required to meet real-time constraints.
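A software analogue of the kind of feature-vector computation such a compiler maps to hardware: accumulating region features from serially arriving pixels, one sample per clock. The class, parameter names, and the particular features chosen here are invented for illustration:

```python
class StreamingROIFeatures:
    """Accumulates a feature vector (area, centroid, bounding box) over a
    region of interest whose pixels arrive one at a time, as they would on
    the serial input of a MISD pipeline (each feature unit updates in
    parallel from the same pixel stream)."""

    def __init__(self, threshold=128):
        self.threshold = threshold
        self.area = 0
        self.sum_x = self.sum_y = 0
        self.min_x = self.min_y = None
        self.max_x = self.max_y = None

    def push(self, x, y, value):
        """Feed one pixel; only above-threshold pixels update the features."""
        if value < self.threshold:
            return
        self.area += 1
        self.sum_x += x
        self.sum_y += y
        self.min_x = x if self.min_x is None else min(self.min_x, x)
        self.max_x = x if self.max_x is None else max(self.max_x, x)
        self.min_y = y if self.min_y is None else min(self.min_y, y)
        self.max_y = y if self.max_y is None else max(self.max_y, y)

    def vector(self):
        """Final feature vector: (area, centroid_x, centroid_y, bbox)."""
        return (self.area, self.sum_x / self.area, self.sum_y / self.area,
                self.min_x, self.min_y, self.max_x, self.max_y)
```

Each accumulator here corresponds to a register the compiler would allocate in the generated HDL, which is why minimizing register count is one of its optimization targets.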

Copyright © 2008-2017 Open Access Library. All rights reserved.