oalib
Search Results: 1 - 10 of 100 matches for " "
All listed articles are free for downloading (OA Articles)
Effect of Flanking Sounds on the Auditory Continuity Illusion  [PDF]
Maori Kobayashi, Makio Kashino
PLOS ONE, 2012, DOI: 10.1371/journal.pone.0051969
Abstract: Background: The auditory continuity illusion, i.e. the perceptual restoration of a target sound briefly interrupted by an extraneous sound, has been shown to depend on masking. However, little is known about factors other than masking. Methodology/Principal Findings: We examined whether a sequence of flanking transient sounds affects the apparent continuity of a target tone alternated with a bandpass noise at regular intervals. The flanking sounds significantly increased the limit of perceiving apparent continuity in terms of the maximum target level at a fixed noise level, irrespective of the frequency separation between the target and flanking sounds: the flanking sounds enhanced the continuity illusion. This effect was dependent on the temporal relationship between the flanking sounds and noise bursts. Conclusions/Significance: The spectrotemporal characteristics of the enhancement effect suggest that a mechanism to compensate for exogenous attentional distraction may contribute to the continuity illusion.
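As an illustration of the stimulus design described in the abstract, here is a minimal sketch that builds a target tone alternated with bandpass noise plus flanking transients; all parameters (sample rate, levels, durations, noise band, click timing) are assumed for illustration and are not taken from the paper.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 44100                      # sample rate in Hz (assumed)
seg = 0.2                       # segment duration in s (assumed)
n = int(seg * fs)
t = np.arange(n) / fs

tone = 0.1 * np.sin(2 * np.pi * 1000 * t)        # target tone segment

# Bandpass noise burst around the target frequency (band edges assumed).
b, a = butter(4, [800 / (fs / 2), 1250 / (fs / 2)], btype="band")
noise = 0.3 * lfilter(b, a, np.random.randn(n))

# Target tone alternated with the noise at regular intervals.
stimulus = np.concatenate([tone, noise] * 4)

# Flanking transients: brief clicks placed just before each noise burst,
# one plausible realization of "flanking sounds" (timing assumed).
flank = np.zeros_like(stimulus)
for i in range(4):
    start = (2 * i + 1) * n - int(0.01 * fs)
    flank[start:start + int(0.005 * fs)] = 0.2
stimulus_with_flanks = stimulus + flank
```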
Learning $\ell_1$-based analysis and synthesis sparsity priors using bi-level optimization  [PDF]
Yunjin Chen, Thomas Pock, Horst Bischof
Computer Science, 2014
Abstract: We consider the analysis operator and synthesis dictionary learning problems based on the $\ell_1$ regularized sparse representation model. We reveal the internal relations between the $\ell_1$-based analysis model and synthesis model. We then introduce an approach to learn both the analysis operator and the synthesis dictionary simultaneously, using a unified framework of bi-level optimization. Our aim is to learn a meaningful operator (dictionary) such that the minimum energy solution of the analysis (synthesis)-prior based model is as close as possible to the ground truth. We solve the bi-level optimization problem using the implicit differentiation technique. Moreover, we demonstrate the effectiveness of our learning approach by applying the learned analysis operator (dictionary) to the image denoising task and comparing its performance with state-of-the-art methods. Under this unified framework, we can compare the performance of the two types of priors.
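The core computation here is differentiating the solution of the lower-level $\ell_1$-regularized problem with respect to the analysis operator. Below is a minimal sketch of the implicit differentiation step for a denoising-type analysis model, assuming a smoothed penalty $\rho(z)=\sqrt{z^2+\epsilon}$ in place of $|z|$ so that the lower-level problem becomes twice differentiable; the paper's exact smoothing and solvers are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

eps, lam = 1e-3, 0.1                            # smoothing and weight (assumed)

def rho(z):  return np.sqrt(z**2 + eps)         # smooth surrogate for |z|
def rho1(z): return z / np.sqrt(z**2 + eps)     # first derivative
def rho2(z): return eps / (z**2 + eps)**1.5     # second derivative

def inner(Omega, y):
    """Lower level: argmin_x 0.5*||x - y||^2 + lam * sum rho(Omega @ x)."""
    f = lambda x: 0.5 * np.sum((x - y)**2) + lam * np.sum(rho(Omega @ x))
    g = lambda x: (x - y) + lam * Omega.T @ rho1(Omega @ x)
    return minimize(f, y, jac=g, method="L-BFGS-B").x

def outer_grad(Omega, y, x_gt):
    """Gradient of the upper-level loss 0.5*||x(Omega) - x_gt||^2 w.r.t. Omega,
    via the implicit function theorem on the lower-level optimality condition."""
    x = inner(Omega, y)
    z = Omega @ x
    D = rho2(z)
    H = np.eye(len(x)) + lam * Omega.T @ (D[:, None] * Omega)  # inner Hessian
    q = np.linalg.solve(H, x - x_gt)
    return -lam * (np.outer(rho1(z), q) + (D[:, None] * Omega) @ np.outer(q, x))
```

A training loop would average outer_grad over pairs of noisy inputs y and ground-truth signals x_gt and take gradient steps on Omega.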
The Opponent Channel Population Code of Sound Location Is an Efficient Representation of Natural Binaural Sounds  [PDF]
Wiktor Młynarski
PLOS Computational Biology, 2015, DOI: 10.1371/journal.pcbi.1004294
Abstract: In mammalian auditory cortex, sound source position is represented by a population of broadly tuned neurons whose firing is modulated by sounds located at all positions surrounding the animal. The peaks of their tuning curves are concentrated at lateral positions, while their slopes are steepest at the interaural midline, allowing for maximum localization accuracy in that area. These experimental observations contradict initial assumptions that the auditory space is represented as a topographic cortical map. It has been suggested that a “panoramic” code has evolved to match specific demands of the sound localization task. This work provides evidence suggesting that the properties of spatial auditory neurons identified experimentally follow from a general design principle: learning a sparse, efficient representation of natural stimuli. Natural binaural sounds were recorded and served as input to a hierarchical sparse-coding model. In the first layer, left- and right-ear sounds were separately encoded by a population of complex-valued basis functions which separated phase and amplitude. Both parameters are known to carry information relevant for spatial hearing. Monaural input converged in the second layer, which learned a joint representation of amplitude and interaural phase difference. The spatial selectivity of each second-layer unit was measured by exposing the model to natural sound sources recorded at different positions. The obtained tuning curves match the tuning characteristics of neurons in the mammalian auditory cortex well. This study connects the neuronal coding of auditory space with natural stimulus statistics and generates new experimental predictions. Moreover, the results presented here suggest that cortical regions with seemingly different functions may implement the same computational strategy: efficient coding.
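A toy sketch of the first-layer idea: encode each ear's signal with complex-valued coefficients so that amplitude and phase separate, then form the interaural phase difference that the second layer would jointly encode with the amplitudes. An STFT is used here as a hypothetical stand-in for the learned complex-valued basis functions; sample rate and test signals are assumed.

```python
import numpy as np
from scipy.signal import stft

fs = 16000                                   # sample rate in Hz (assumed)
t = np.arange(fs) / fs
left  = np.sin(2 * np.pi * 500 * t)          # toy binaural pair:
right = np.sin(2 * np.pi * 500 * t - 0.3)    # right ear lags by 0.3 rad

# Complex-valued encoding of each ear; every coefficient factors into an
# amplitude and a phase, mirroring the model's first layer.
_, _, L = stft(left, fs=fs, nperseg=256)
_, _, R = stft(right, fs=fs, nperseg=256)

amp_l, amp_r = np.abs(L), np.abs(R)          # monaural amplitudes
ipd = np.angle(L * np.conj(R))               # interaural phase difference

# A second layer would then learn a joint, sparse code of (amp_l, amp_r, ipd).
```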
Wrong Priors  [PDF]
Carlos C. Rodriguez
Physics, 2007, DOI: 10.1063/1.2821300
Abstract: All priors are not created equal. There are right and there are wrong priors. That is the main conclusion of this contribution. I use a cooked-up example designed to create drama, and a typical textbook example, to show the pervasiveness of wrong priors in standard statistical practice.
Variable Selection Using Shrinkage Priors  [PDF]
Hanning Li, Debdeep Pati
Statistics, 2015
Abstract: Variable selection has received widespread attention over the last decade as we routinely encounter high-throughput datasets in complex biological and environmental research. Most Bayesian variable selection methods are restricted to mixture priors having separate components for characterizing the signal and the noise. However, such priors encounter computational issues in high dimensions. This has motivated continuous shrinkage priors, which resemble the two-component priors while facilitating computation and interpretability. While such priors are widely used for estimating high-dimensional sparse vectors, selecting a subset of variables remains a daunting task. In this article, we propose a general approach for variable selection with shrinkage priors. The presence of very few tuning parameters makes our method attractive in comparison to ad hoc thresholding approaches. The applicability of the approach is not limited to continuous shrinkage priors, but it can be used along with any shrinkage prior. Theoretical properties for near-collinear design matrices are investigated, and the method is shown to have good performance in a wide range of synthetic data examples.
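Continuous shrinkage priors yield posteriors in which no coefficient is exactly zero, which is why a separate selection rule is needed at all. The sketch below illustrates one generic rule, 2-means clustering of the coefficient magnitudes into a noise cluster and a signal cluster, applied to toy posterior means; it is only an illustration in the spirit of the abstract, not a reproduction of the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy posterior-mean coefficients under a continuous shrinkage prior:
# heavily shrunken noise plus a few genuine signals, none exactly zero.
beta = np.concatenate([rng.normal(0, 0.02, 95),
                       rng.normal(3.0, 0.3, 5)])

# 2-means clustering of |beta| (Lloyd iterations in one dimension).
a = np.abs(beta)
lo, hi = a.min(), a.max()
for _ in range(50):
    signal = np.abs(a - hi) < np.abs(a - lo)     # assign to the nearer centre
    lo, hi = a[~signal].mean(), a[signal].mean()

selected = np.flatnonzero(signal)                # indices of selected variables
print(selected)                                  # here: the last five coefficients
```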
Hierarchical sparsity priors for regression models  [PDF]
Jim E. Griffin, Philip J. Brown
Statistics, 2013
Abstract: We focus on the increasingly important area of sparse regression problems, where there are many variables and the effects of a large subset of these are negligible. This paper describes the construction of hierarchical prior distributions for cases where the effects are considered related. These priors allow dependence between the regression coefficients and encourage the related shrinkage of different regression coefficients towards zero. The properties of these priors are discussed, and applications to linear models with interactions and to generalized additive models are used as illustrations. Ideas of heredity relating different levels of interaction are encompassed.
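One hypothetical way to encode such relatedness (not the paper's specific construction) is to tie the prior scale of an interaction coefficient to the scales of its parent main effects, so that related coefficients are shrunk together and heredity is encouraged:

```python
import numpy as np

def log_prior(beta_main, beta_int, psi, tau=1.0):
    """Gaussian scale-mixture log-density up to an additive constant.
    psi[j] is the local scale of main effect j; interaction (j, k)
    inherits the product scale psi[j] * psi[k] (heredity: the interaction
    escapes shrinkage only when both parents do)."""
    lp = -0.5 * np.sum((beta_main / (tau * psi)) ** 2)
    for (j, k), b in beta_int.items():
        lp += -0.5 * (b / (tau * psi[j] * psi[k])) ** 2
    return lp

psi = np.array([1.0, 0.1, 1.0])                  # effect 1 is heavily shrunk
print(log_prior(np.array([0.5, 0.0, -0.3]),
                {(0, 2): 0.2, (0, 1): 0.01}, psi))
```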
Of priors and prejudice  [PDF]
M. G. Bowler
Quantitative Biology, 2012
Abstract: If species abundance distributions are dominated by the simple processes of individuals in a community giving birth and dying independently, the result is a log series distribution. I calculate this in a number of different ways, using both master equations and the combinatorial methods of elementary statistical mechanics. Considerable light is shed on the nature and role of 'priors' in the language of MaxEnt.
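For reference, the log series distribution has probability mass $P(n) = -x^n / (n \ln(1-x))$ for $n = 1, 2, \ldots$ and $0 < x < 1$; a quick numerical check:

```python
import numpy as np

def log_series_pmf(n, x):
    """Fisher log-series: P(n) = -x**n / (n * log(1 - x))."""
    return -x**n / (n * np.log1p(-x))

n = np.arange(1, 500)
p = log_series_pmf(n, 0.95)
print(p.sum())               # ~1 once the tail is negligible
print((n * p).sum())         # mean abundance per species for this x
```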
Expectation Propagation for Neural Networks with Sparsity-promoting Priors  [PDF]
Pasi Jylänki, Aapo Nummenmaa, Aki Vehtari
Statistics, 2013
Abstract: We propose a novel approach for nonlinear regression using a two-layer neural network (NN) model structure with sparsity-favoring hierarchical priors on the network weights. We present an expectation propagation (EP) approach for approximate integration over the posterior distribution of the weights, the hierarchical scale parameters of the priors, and the residual scale. Using a factorized posterior approximation we derive a computationally efficient algorithm, whose complexity scales similarly to an ensemble of independent sparse linear models. The approach enables flexible definition of weight priors with different sparseness properties such as independent Laplace priors with a common scale parameter or Gaussian automatic relevance determination (ARD) priors with different relevance parameters for all inputs. The approach can be extended beyond standard activation functions and NN model structures to form flexible nonlinear predictors from multiple sparse linear models. The effects of the hierarchical priors and the predictive performance of the algorithm are assessed using both simulated and real-world data. Comparisons are made to two alternative models with ARD priors: a Gaussian process with a NN covariance function and marginal maximum a posteriori estimates of the relevance parameters, and a NN with Markov chain Monte Carlo integration over all the unknown model parameters.
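The two weight priors named in the abstract are easy to write down; here is a minimal sketch of their log-densities for the input-layer weights of a two-layer network, with shapes and values assumed for illustration:

```python
import numpy as np

d, h = 10, 5                                       # inputs, hidden units (assumed)
rng = np.random.default_rng(0)
W1 = rng.normal(size=(h, d))                       # input-to-hidden weights

def log_prior_ard(W1, alpha):
    """Gaussian ARD prior: all weights leaving input j share the relevance
    (precision) alpha[j]; a large alpha[j] prunes input j from every unit."""
    return np.sum(-0.5 * alpha * W1**2 + 0.5 * np.log(alpha / (2 * np.pi)))

def log_prior_laplace(W1, scale):
    """Independent Laplace priors with one common scale parameter."""
    return np.sum(-np.abs(W1) / scale - np.log(2 * scale))

alpha = np.ones(d)                                 # one relevance per input
print(log_prior_ard(W1, alpha), log_prior_laplace(W1, 0.5))
```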
Natural statistics of binaural sounds  [PDF]
Wiktor Młynarski, Jürgen Jost
Computer Science, 2014
Abstract: Binaural sound localization is usually considered a discrimination task, where interaural time (ITD) and level (ILD) disparities at pure frequency channels are utilized to identify the position of a sound source. In natural conditions, binaural circuits are exposed to stimulation by sound waves originating from multiple, often moving and overlapping sources. The statistics of binaural cues therefore depend on the acoustic properties and the spatial configuration of the environment. In order to process binaural sounds efficiently, the auditory system should be adapted to naturally encountered cue distributions. Statistics of cues encountered naturally, and their dependence on the physical properties of an auditory scene, have not been studied before. Here, we performed binaural recordings of three auditory scenes with varying spatial properties. We analyzed the empirical cue distributions from each scene by fitting them with parametric probability density functions, which allowed for easy comparison of different scenes. Higher-order statistics of binaural waveforms were analyzed by performing Independent Component Analysis (ICA) and studying the properties of the learned basis functions. The obtained results can be related to known neuronal mechanisms and suggest how binaural hearing can be understood in terms of adaptation to natural signal statistics.
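A minimal sketch of how the two classical cues can be read off a binaural recording: the ITD from the peak of the cross-correlation and the broadband ILD from the level ratio. The sample rate and test signals are assumed; empirical cue distributions would come from sliding this over windows of real recordings.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

fs = 44100                                           # sample rate in Hz (assumed)

def binaural_cues(left, right):
    """Broadband ITD (cross-correlation peak, in s) and ILD (in dB)."""
    xc = correlate(left, right, mode="full")
    lags = correlation_lags(len(left), len(right), mode="full")
    itd = lags[np.argmax(xc)] / fs
    ild = 20 * np.log10(np.sqrt(np.mean(left**2)) / np.sqrt(np.mean(right**2)))
    return itd, ild

# Toy check: right channel delayed by 10 samples and attenuated by half.
left = np.random.randn(fs)
right = 0.5 * np.roll(left, 10)
print(binaural_cues(left, right))    # ITD near -10/fs (left leads), ILD near 6 dB
```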
Reproducing kernel Hilbert spaces of Gaussian priors  [PDF]
A. W. van der Vaart, J. H. van Zanten
Mathematics, 2008, DOI: 10.1214/074921708000000156
Abstract: We review definitions and properties of reproducing kernel Hilbert spaces attached to Gaussian variables and processes, with a view to applications in nonparametric Bayesian statistics using Gaussian priors. The rate of contraction of posterior distributions based on Gaussian priors can be described through a concentration function that is expressed in the reproducing kernel Hilbert space. Absolute continuity of Gaussian measures and concentration inequalities play an important role in understanding and deriving this result. Series expansions of Gaussian variables and transformations of their reproducing kernel Hilbert spaces under linear maps are useful tools to compute the concentration function.
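For reference, the concentration function of a Gaussian prior $W$ with RKHS $\mathbb{H}$, at a truth $w_0$ in the support of $W$, is

$$\varphi_{w_0}(\varepsilon) \;=\; \inf_{h \in \mathbb{H}:\ \|h - w_0\| \le \varepsilon} \tfrac{1}{2}\|h\|_{\mathbb{H}}^2 \;-\; \log \mathbb{P}\bigl(\|W\| \le \varepsilon\bigr),$$

and the posterior contraction rate $\varepsilon_n$ can be read off from the inequality $\varphi_{w_0}(\varepsilon_n) \le n \varepsilon_n^2$.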
Copyright © 2008-2017 Open Access Library. All rights reserved.