Search Results: 1 - 10 of 100 matches
All listed articles are free for downloading (OA Articles)
Page 1 /100
EI Videos  [PDF]
Michael Courtney,Tom Slusher,Amy Courtney
Physics , 2012,
Abstract: The Quantitative Reasoning Center (QRC) at USAFA has the institution's primary responsibility for offering after-hours extra instruction (EI) in core technical disciplines (mathematics, chemistry, physics, and engineering mechanics). Demand has been tremendous, totaling over 3,600 evening EI sessions in the Fall of 2010. Meeting this demand with only four (now five) full-time faculty has been challenging. EI Videos have been produced to serve cadets in need of well-modeled solutions to homework-type problems. These videos have been warmly received, being viewed over 14,000 times in Fall 2010 and probably contributing to a significant increase in the first-attempt success rate on the Algebra Fundamental Skills Exam in Calculus 1. EI Video production is being extended to better support Calculus 2, Calculus 3, and Physics 1.
Trending Videos: Measurement and Analysis  [PDF]
Iman Barjasteh,Ying Liu,Hayder Radha
Computer Science , 2014,
Abstract: Unlike popular videos, which have already achieved high viewership numbers by the time they are declared popular, YouTube trending videos represent content that attracts viewers' attention over a relatively short time and has the potential to become popular. Despite their importance and visibility, YouTube trending videos have not been studied or analyzed thoroughly. In this paper, we present our findings from measuring, analyzing, and comparing key aspects of YouTube trending videos. Our study is based on collecting and monitoring high-resolution time series of the viewership and related statistics of more than 8,000 YouTube videos over an aggregate period of nine months. Since trending videos are declared as such just several hours after they are uploaded, we are able to analyze their time series across critical and sufficiently long durations of their lifecycle. In addition, we analyze the profiles of users who upload trending videos, to identify the role these profiles may play in getting uploaded videos to trend. Furthermore, we conduct a directional-relationship analysis among all pairs of trending-video time series that we have monitored, employing Granger Causality (GC) with significance testing. Unlike traditional correlation measures, this directional-relationship analysis provides deeper insight into the viewership patterns of different categories of trending videos. Trending videos and their channels have clearly distinct statistical attributes when compared to other YouTube content that has not been labeled as trending. Our results also reveal a highly asymmetric directional relationship among different categories of trending videos, and our directionality analysis shows a clear pattern of viewership toward popular categories, whereas some categories tend to be isolated.
Forecasting popularity of videos in YouTube  [PDF]
Cedric Richier,Rachid Elazouzi,Tania Jimenez,Eitan Altman,Georges Linares
Computer Science , 2015,
Abstract: This paper proposes a new prediction process to explain and predict the popularity evolution of YouTube videos. We exploit our recent study on the classification of YouTube videos in order to predict the evolution of videos' view counts. This classification allows us to identify important factors of the observed popularity dynamics. Our experimental results show that our prediction process reduces the average prediction error compared to a state-of-the-art baseline model. We also evaluate the impact of adding popularity criteria to our classification.
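A much simpler baseline for this kind of view-count forecasting is to fit a saturating growth curve to a video's early history and extrapolate. The sketch below is illustrative only (synthetic data, not the paper's classification-based predictor):

```python
# Sketch: naive view-count forecasting -- fit a saturating growth
# curve to a video's first 10 days of views and extrapolate to day 30.
import numpy as np
from scipy.optimize import curve_fit

def growth(t, m, k):
    """Saturating growth: cumulative views approach plateau m as t grows."""
    return m * (1.0 - np.exp(-k * t))

# Hypothetical cumulative view counts over the first 10 days.
days = np.arange(1, 11, dtype=float)
views = 50_000 * (1 - np.exp(-0.3 * days)) \
        + np.random.default_rng(1).normal(0, 500, 10)

(m_hat, k_hat), _ = curve_fit(growth, days, views, p0=[views[-1], 0.1])
forecast_30 = growth(30.0, m_hat, k_hat)
print(f"estimated plateau: {m_hat:.0f}, 30-day forecast: {forecast_30:.0f}")
```

The paper's contribution is essentially to pick a better-suited growth model per video by first classifying its popularity dynamics, rather than using one curve for all videos.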
The Entropy of Attention and Popularity in YouTube Videos  [PDF]
Jonathan Scott Morgan,Iman Barjasteh,Cliff Lampe,Hayder Radha
Computer Science , 2014,
Abstract: The vast majority of YouTube videos never become popular, languishing in obscurity with few views, no likes, and no comments. We use information-theoretic measures based on entropy to examine how time-series distributions of common popularity measures in videos from YouTube's "Trending videos" and "Most recent" feeds relate to the theoretical concept of attention. While most videos in the "Most recent" feed never become popular, some 20% of them have distributions of attention metrics and measures of entropy similar to those of "Trending videos". We analyze how the 20% of "Most recent" videos that become somewhat popular differ from the 80% that do not, then compare these popular "Most recent" videos to different subsets of "Trending videos" to characterize and compare the attention each receives.
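The entropy measures referred to here can be sketched concretely: treat a video's daily views as a probability distribution over days and compute its Shannon entropy. The figures below are made-up examples, not data from the paper:

```python
# Sketch: Shannon entropy of a video's daily-view distribution, one
# way to quantify how attention is spread over time.
import numpy as np

def attention_entropy(daily_views):
    """Shannon entropy (in bits) of the normalized daily-view distribution."""
    p = np.asarray(daily_views, dtype=float)
    p = p / p.sum()
    p = p[p > 0]  # convention: 0 * log(0) contributes 0
    return float(-(p * np.log2(p)).sum())

burst = [9_000, 500, 300, 100, 50, 50]              # attention concentrated early
steady = [1_700, 1_650, 1_700, 1_600, 1_700, 1_650]  # attention spread evenly

print(attention_entropy(burst))   # low entropy: bursty viewership
print(attention_entropy(steady))  # near log2(6): nearly uniform viewership
```

Low entropy corresponds to a short burst of attention; entropy near the maximum (log2 of the number of days) corresponds to sustained, evenly spread attention.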
Geometric Context from Videos  [PDF]
S. Hussain Raza,Matthias Grundmann,Irfan Essa
Computer Science , 2015,
Abstract: We present a novel algorithm for estimating the broad 3D geometric structure of outdoor video scenes. Leveraging spatio-temporal video segmentation, we decompose a dynamic scene captured by a video into geometric classes, based on predictions made by region classifiers that are trained on appearance and motion features. By examining the homogeneity of the predictions, we combine predictions across multiple segmentation hierarchy levels, alleviating the need to determine the granularity a priori. To evaluate our method, we built a novel, extensive dataset on geometric context of video, consisting of over 100 ground-truth-annotated outdoor videos with over 20,000 frames. To further scale beyond this dataset, we propose a semi-supervised learning framework to expand the pool of labeled data with high-confidence predictions obtained from unlabeled data. Our system produces an accurate prediction of geometric context of video, achieving 96% accuracy across the main geometric classes.
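The idea of combining predictions across hierarchy levels by homogeneity can be sketched in a toy form: for each region, prefer the level whose class posterior is least ambiguous (lowest entropy). Class names and probabilities below are invented for illustration, not the paper's setup:

```python
# Sketch: pick the segmentation-hierarchy level with the most
# homogeneous (lowest-entropy) class posterior, then predict its
# argmax class.
import numpy as np

CLASSES = ["sky", "ground", "vertical"]

def pick_level(posteriors):
    """posteriors: per-level class-probability vectors for one region.
    Returns (best_level_index, predicted_class)."""
    def entropy(p):
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        return float(-(p * np.log(p)).sum())
    best = min(range(len(posteriors)), key=lambda i: entropy(posteriors[i]))
    return best, CLASSES[int(np.argmax(posteriors[best]))]

# Level 0 (fine segmentation) is uncertain; level 1 (coarse) is confident.
levels = [[0.4, 0.35, 0.25], [0.9, 0.05, 0.05]]
print(pick_level(levels))  # -> (1, 'sky')
```

This captures only the selection heuristic; the actual method also uses appearance and motion features and a trained region classifier.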
Modeling and Annotating the Expressive Semantics of Dance Videos  [PDF]
Rajkumar Kannan,Balakrishnan Ramadoss
Computer Science , 2010,
Abstract: Dance videos are interesting and semantics-intensive. At the same time, they are among the most complex types of video compared to other types such as sports, news, and movie videos. In fact, dance video is one of the genres least explored by researchers across the globe. Dance videos exhibit rich semantics, such as macro features and micro features, and can be classified into several types. Hence, the conceptual modeling of the expressive semantics of dance videos is both crucial and complex. This paper presents a generic Dance Video Semantics Model (DVSM) to represent the semantics of dance videos at different granularity levels, identified by the components of the accompanying song. The model incorporates both syntactic and semantic features of the videos and introduces a new entity type, called Agent, to specify the micro features of dance videos. Instantiations of the model are expressed as graphs. The model is implemented as a tool using J2SE and JMF to annotate the macro and micro features of dance videos. Finally, examples and evaluation results are provided to demonstrate the effectiveness of the proposed dance video model.
Keywords: Agents, Dance videos, Macro features, Micro features, Video annotation, Video semantics.
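The entity structure described (song-level segments carrying macro features, Agent entities carrying micro features) can be sketched as a minimal data model. Field names here are illustrative, not the paper's actual DVSM schema:

```python
# Sketch: a toy data model in the spirit of DVSM -- Agent entities
# (micro features) attached to song-level segments (macro features).
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    micro_features: list  # e.g. hand gestures, facial expressions

@dataclass
class SongSegment:
    label: str            # e.g. "chorus", "verse"
    macro_features: list  # e.g. formation, stage position
    agents: list = field(default_factory=list)

chorus = SongSegment("chorus", ["circle formation"])
chorus.agents.append(Agent("lead dancer", ["mudra: pataka", "smile"]))
print(chorus.label, [a.name for a in chorus.agents])
```

In the paper, instantiations of such a model are expressed as graphs linking segments, agents, and their features.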
International Journal of Machine Intelligence , 2011,
Abstract: In this paper we propose two different approaches to segment and extract moving vehicles in traffic videos. Background subtraction is used to extract foreground frames. Different types of moving vehicles are then segmented by the first proposed approach, a hybrid that combines connected-component analysis with semi-supervised thresholding. In the second proposed approach, a Gabor filtering method is used to segment moving vehicles. The robustness and efficacy of our proposed approaches are demonstrated by experiments on real traffic videos captured during daytime under complex backgrounds and variations in illumination, motion, camera position, and moving direction. The results are also compared with two well-known methods for extracting moving vehicles, GMM and W4.
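The background-subtraction and connected-component steps can be sketched on synthetic frames (the real system works on traffic video; the threshold and frame contents below are arbitrary):

```python
# Sketch: background subtraction followed by connected-component
# analysis to count and size distinct moving blobs.
import numpy as np
from scipy import ndimage

background = np.zeros((40, 60), dtype=float)
frame = background.copy()
frame[5:15, 10:25] = 200.0   # "vehicle" 1
frame[25:35, 40:55] = 180.0  # "vehicle" 2

# Foreground mask: pixels that differ enough from the background model.
mask = np.abs(frame - background) > 50.0

# Connected-component analysis: label and measure distinct blobs.
labels, n_blobs = ndimage.label(mask)
sizes = ndimage.sum(mask, labels, index=range(1, n_blobs + 1))
print(n_blobs, sizes)  # two blobs of 150 pixels each
```

In practice the background model is estimated from the video itself (e.g. a running average or mixture model) rather than assumed to be a blank frame.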
Augmented Segmentation and Visualization for Presentation Videos  [PDF]
Alexander Haubold,John R. Kender
Computer Science , 2005,
Abstract: We investigate methods of segmenting, visualizing, and indexing presentation videos by separately considering audio and visual data. The audio track is segmented by speaker, and augmented with key phrases which are extracted using an Automatic Speech Recognizer (ASR). The video track is segmented by visual dissimilarities and augmented by representative key frames. An interactive user interface combines a visual representation of audio, video, text, and key frames, and allows the user to navigate a presentation video. We also explore clustering and labeling of speaker data and present preliminary results.
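Segmenting the video track by visual dissimilarity is commonly built on histogram distances between consecutive frames. The sketch below uses synthetic "slide" frames and an arbitrary spike threshold, as one plausible building block rather than the authors' exact method:

```python
# Sketch: detect visual segment boundaries by intensity-histogram
# dissimilarity (total-variation distance) between frames.
import numpy as np

def hist_distance(f1, f2, bins=16):
    """Total-variation distance in [0, 1] between two frame histograms."""
    h1 = np.histogram(f1, bins=bins, range=(0, 256))[0].astype(float)
    h2 = np.histogram(f2, bins=bins, range=(0, 256))[0].astype(float)
    h1 /= h1.sum()
    h2 /= h2.sum()
    return 0.5 * float(np.abs(h1 - h2).sum())

rng = np.random.default_rng(2)
slide_a = rng.normal(60, 10, (48, 64))    # frames from one "slide"
slide_a2 = slide_a + rng.normal(0, 2, slide_a.shape)
slide_b = rng.normal(180, 10, (48, 64))   # a visually different "slide"

within = hist_distance(slide_a, slide_a2)
across = hist_distance(slide_a, slide_b)
print(f"within-segment: {within:.2f}, across-segment: {across:.2f}")
# A boundary is declared wherever the distance spikes, e.g. above 0.5.
```

Distances stay near zero within a visual segment and spike at slide changes, which is what makes thresholding them a workable boundary detector.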
Skin-color Based Videos Categorization
Rehanullah Khan,Asad Maqsood,Zeeshan Khan,Muhammad Ishaq
International Journal of Computer Science Issues , 2012,
Abstract: On dedicated websites, people can upload videos and share them with the rest of the world. Currently these videos are categorized manually with the help of the user community. In this paper, we propose a combination of color spaces with a Bayesian network approach for robust detection of skin color, followed by automated video categorization. Experimental results show that our method achieves satisfactory performance in categorizing videos based on skin color.
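The per-pixel skin detection step can be sketched with a classic rule-based RGB heuristic (a stand-in for illustration, not the paper's Bayesian-network classifier), followed by a toy skin-ratio categorization threshold:

```python
# Sketch: rule-based skin-pixel detection (Peer et al.-style RGB rule)
# and a toy skin-ratio categorization; the threshold 0.3 is arbitrary.
import numpy as np

def skin_mask(rgb):
    """Apply a classic RGB skin rule to an HxWx3 uint8 image."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (
        (r > 95) & (g > 40) & (b > 20)
        & (r - np.minimum(g, b) > 15)
        & (np.abs(r - g) > 15) & (r > g) & (r > b)
    )

# Hypothetical frame: left half skin-like, right half background.
frame = np.zeros((10, 10, 3), dtype=np.uint8)
frame[:, :5] = (220, 160, 130)  # skin-like tone
frame[:, 5:] = (30, 90, 200)    # background

ratio = float(skin_mask(frame).mean())
category = "high-skin-content" if ratio > 0.3 else "normal"
print(f"skin ratio: {ratio:.2f} -> {category}")
```

A learned classifier such as the paper's Bayesian network replaces the hand-tuned rule with per-pixel skin probabilities estimated from labeled data, which is what makes the detection robust across color spaces.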
Skin-color based videos categorization  [PDF]
Rehanullah Khan,Asad Maqsood,Zeeshan Khan,Muhammad Ishaq,Arsalan Arif
Computer Science , 2012,
Abstract: On dedicated websites, people can upload videos and share them with the rest of the world. Currently these videos are categorized manually with the help of the user community. In this paper, we propose a combination of color spaces with a Bayesian network approach for robust detection of skin color, followed by automated video categorization. Experimental results show that our method achieves satisfactory performance in categorizing videos based on skin color.

Copyright © 2008-2017 Open Access Library. All rights reserved.