
Search Results: 1 - 10 of 5800 matches for "Jeff Schneider"
All listed articles are free for downloading (OA Articles)
Maximum Margin Output Coding
Yi Zhang, Jeff Schneider
Computer Science, 2012
Abstract: In this paper we study output coding for multi-label prediction. For a multi-label output coding to be discriminative, it is important that codewords for different label vectors are significantly different from each other. At the same time, unlike in traditional coding theory, codewords in output coding are to be predicted from the input, so it is also critical to have a predictable label encoding. To find output codes that are both discriminative and predictable, we first propose a max-margin formulation that naturally captures these two properties. We then convert it to a metric learning formulation, but with an exponentially large number of constraints as commonly encountered in structured prediction problems. Without a label structure for tractable inference, we use overgenerating (i.e., relaxation) techniques combined with the cutting plane method for optimization. In our empirical study, the proposed output coding scheme outperforms a variety of existing multi-label prediction methods for image, text and music classification.
Real-valued All-Dimensions search: Low-overhead rapid searching over subsets of attributes
Andrew Moore, Jeff Schneider
Computer Science, 2012
Abstract: This paper is about searching the combinatorial space of contingency tables during the inner loop of a nonlinear statistical optimization. Examples of this operation in various data analytic communities include searching for nonlinear combinations of attributes that contribute significantly to a regression (statistics), searching for items to include in a decision list (machine learning), and association rule hunting (data mining). This paper investigates a new, efficient approach to this class of problems, called RADSEARCH (Real-valued All-Dimensions-tree Search). RADSEARCH finds the global optimum, and this gives us the opportunity to empirically evaluate the question: apart from algorithmic elegance, what does this attention to optimality buy us? We compare RADSEARCH with other recent successful search algorithms such as CN2, PRIM, APriori, OPUS and DenseMiner. Finally, we introduce RADREG, a new regression algorithm for learning real-valued outputs based on RADSEARCHing for high-order interactions.
On the Error of Random Fourier Features
Dougal J. Sutherland, Jeff Schneider
Computer Science, 2015
Abstract: Kernel methods give powerful, flexible, and theoretically grounded approaches to solving many problems in machine learning. The standard approach, however, requires pairwise evaluations of a kernel function, which can lead to scalability issues for very large datasets. Rahimi and Recht (2007) suggested a popular approach to handling this problem, known as random Fourier features. The quality of this approximation, however, is not well understood. We improve the uniform error bound of that paper, and also give novel analyses of the embedding's variance, approximation error, and use in some machine learning methods. We also point out that, surprisingly, of the two main variants of these features, the more widely used one has strictly higher variance for the Gaussian kernel and worse bounds.
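To illustrate the two feature maps the abstract compares, here is a minimal NumPy sketch (not the authors' code) of random Fourier features for the unit-bandwidth Gaussian kernel: the widely used cos-with-random-phase variant and the paired cos/sin variant.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 5, 1000                      # input dimension, number of random frequencies
W = rng.normal(size=(D, d))         # frequencies ~ N(0, I): the Fourier transform
                                    # of the unit-bandwidth Gaussian kernel
b = rng.uniform(0, 2 * np.pi, D)    # random phases for the cos-only variant

def z_cos(x):
    # Variant 1 (the more widely used one): sqrt(2/D) * cos(Wx + b), D features.
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

def z_pair(x):
    # Variant 2: [cos(Wx); sin(Wx)] / sqrt(D), 2D features per D frequencies;
    # per the abstract, lower variance for the Gaussian kernel.
    return np.concatenate([np.cos(W @ x), np.sin(W @ x)]) / np.sqrt(D)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2.0)  # k(x, y) = exp(-||x-y||^2 / 2)
approx_cos = z_cos(x) @ z_cos(y)
approx_pair = z_pair(x) @ z_pair(y)
```

Both inner products converge to `exact` as D grows; the paper's contribution is quantifying how fast, and showing the pair variant does so with lower variance.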
Nonparametric Divergence Estimation with Applications to Machine Learning on Distributions
Barnabas Poczos, Liang Xiong, Jeff Schneider
Computer Science, 2012
Abstract: Low-dimensional embedding, manifold learning, clustering, classification, and anomaly detection are among the most important problems in machine learning. The existing methods usually consider the case when each instance has a fixed, finite-dimensional feature representation. Here we consider a different setting. We assume that each instance corresponds to a continuous probability distribution. These distributions are unknown, but we are given some i.i.d. samples from each distribution. Our goal is to estimate the distances between these distributions and use these distances to perform low-dimensional embedding, clustering/classification, or anomaly detection for the distributions. We present estimation algorithms, describe how to apply them for machine learning tasks on distributions, and show empirical results on synthetic data, real-world images, and astronomical data sets.
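One standard family of such divergence estimators is based on k-nearest-neighbor distances. The sketch below implements a common kNN estimate of KL divergence from two sample sets; it shows the general technique, not necessarily the paper's exact estimator.

```python
import numpy as np

def knn_kl_divergence(X, Y, k=1):
    """kNN estimate of KL(P || Q) from samples X ~ P and Y ~ Q."""
    n, d = X.shape
    m = Y.shape[0]

    def kth_dist(A, B, skip_self):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        d2.sort(axis=1)
        # When A is B, column 0 is the zero distance of each point to itself.
        return np.sqrt(d2[:, k if skip_self else k - 1])

    rho = kth_dist(X, X, skip_self=True)   # k-NN distance within the P sample
    nu = kth_dist(X, Y, skip_self=False)   # k-NN distance into the Q sample
    return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1))

rng = np.random.default_rng(2)
P = rng.normal(size=(400, 1))
same = knn_kl_divergence(P, rng.normal(size=(400, 1)))          # true KL = 0
shifted = knn_kl_divergence(P, rng.normal(size=(400, 1)) + 2)   # true KL = 2
```

Once pairwise divergences between distributions are estimated this way, they can be fed to any distance-based embedding, clustering, or anomaly-detection method.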
Copula-based Kernel Dependency Measures
Barnabas Poczos, Zoubin Ghahramani, Jeff Schneider
Computer Science, 2012
Abstract: The paper presents a new copula-based method for measuring dependence between random variables. Our approach extends the Maximum Mean Discrepancy to the copula of the joint distribution. We prove that this approach has several advantageous properties. Similarly to Shannon mutual information, the proposed dependence measure is invariant to any strictly increasing transformation of the marginal variables. This is important in many applications, for example in feature selection. The estimator is consistent, robust to outliers, and uses rank statistics only. We derive upper bounds on the convergence rate and also propose independence tests. We illustrate the theoretical contributions through a series of experiments in feature selection and low-dimensional embedding of distributions.
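A minimal sketch of the idea: rank-transform the marginals to get the empirical copula, then measure its MMD to the independence copula (uniform on the unit cube). The Gaussian kernel, its bandwidth, and the Monte Carlo reference sample are my own simplifications, not the paper's exact estimator.

```python
import numpy as np

def gauss_k(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def copula_mmd(X, gamma=1.0, seed=0):
    """Dependence measure: (biased) MMD^2 between the empirical copula of X
    and the independence copula, i.e. Uniform[0,1]^d."""
    n, d = X.shape
    # Rank-transform each marginal: invariant to strictly increasing transforms.
    U = np.argsort(np.argsort(X, axis=0), axis=0) / (n - 1)
    V = np.random.default_rng(seed).uniform(size=(n, d))  # reference sample
    return (gauss_k(U, U, gamma).mean()
            - 2 * gauss_k(U, V, gamma).mean()
            + gauss_k(V, V, gamma).mean())

rng = np.random.default_rng(1)
x = rng.normal(size=(200, 1))
dep = np.hstack([x, x + 0.1 * rng.normal(size=(200, 1))])  # strongly dependent
ind = rng.normal(size=(200, 2))                            # independent
```

Because only ranks enter the statistic, `copula_mmd` is unchanged by, say, exponentiating one marginal, and outliers only move ranks by a bounded amount.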
Submodularity in Batch Active Learning and Survey Problems on Gaussian Random Fields
Yifei Ma, Roman Garnett, Jeff Schneider
Computer Science, 2012
Abstract: Many real-world datasets can be represented in the form of a graph whose edge weights designate similarities between instances. A discrete Gaussian random field (GRF) model is a finite-dimensional Gaussian process (GP) whose prior covariance is the inverse of a graph Laplacian. Minimizing the trace of the predictive covariance Sigma (V-optimality) on GRFs has proven successful in batch active learning classification problems with budget constraints. However, its worst-case bound has been missing. We show that the V-optimality on GRFs, as a function of the batch query set, is submodular, and hence its greedy selection algorithm guarantees a (1-1/e) approximation ratio. Moreover, GRF models satisfy the absence-of-suppressor (AofS) condition. For active survey problems, we propose a similar survey criterion which minimizes 1'(Sigma)1. In practice, the V-optimality criterion performs better than GPs with mutual information gain criteria and allows nonuniform costs for different nodes.
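The greedy V-optimality selection analyzed in the abstract can be sketched in a few lines, assuming a known prior covariance (here the regularized inverse of a toy chain-graph precision matrix) and Gaussian observation noise; the function and variable names are illustrative.

```python
import numpy as np

def greedy_v_optimal(Sigma, k, noise=0.1):
    """Greedily pick a batch of k query nodes to minimize the trace of the
    posterior covariance (V-optimality) under a GRF/GP prior Sigma."""
    S = Sigma.copy()
    picked = []
    for _ in range(k):
        # Trace reduction from querying node j (rank-one posterior update):
        # sum_i S[i,j]^2 / (S[j,j] + noise^2).
        gains = (S ** 2).sum(axis=0) / (np.diag(S) + noise ** 2)
        gains[picked] = -np.inf               # never re-pick a node
        j = int(np.argmax(gains))
        picked.append(j)
        S = S - np.outer(S[:, j], S[j]) / (S[j, j] + noise ** 2)
    return picked

# Toy GRF on a 30-node chain: covariance = (tridiagonal precision + 0.1*I)^(-1).
n = 30
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
Sigma = np.linalg.inv(L + 0.1 * np.eye(n))
batch = greedy_v_optimal(Sigma, 5)
```

The submodularity result is what licenses this greedy loop: its trace reduction is guaranteed to be within a (1-1/e) factor of the best batch of size k.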
High Dimensional Bayesian Optimisation and Bandits via Additive Models
Kirthevasan Kandasamy, Jeff Schneider, Barnabas Poczos
Computer Science, 2015
Abstract: Bayesian Optimisation (BO) is a technique for optimising a $D$-dimensional function that is typically expensive to evaluate. While there have been many successes for BO in low dimensions, scaling it to high dimensions has been notoriously difficult. Existing literature on the topic makes very restrictive assumptions. In this paper, we identify two key challenges in this endeavour. We tackle these challenges by assuming an additive structure for the function. This setting is substantially more expressive and contains a richer class of functions than previous work. We prove that, for additive functions, the regret has only linear dependence on $D$ even though the function depends on all $D$ dimensions. We also demonstrate several other statistical and computational benefits of our framework. Via synthetic examples, a scientific simulation, and a face detection problem, we demonstrate that our method outperforms naive BO on additive functions and on several examples where the function is not additive.
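The computational payoff of the additive assumption can be seen even without a GP: if the groups are known, each low-dimensional component can be optimized independently. In full BO an independent GP and acquisition function would sit on each group; in this toy sketch plain grid search stands in for that inner optimization, and the objective and its groups are made up for illustration.

```python
import numpy as np

# Toy additive objective on [0,1]^4 with known groups {x1,x2} and {x3,x4}:
# f(x) = f1(x[0:2]) + f2(x[2:4]).  (Hypothetical component functions.)
def f1(u):
    return -((u[..., 0] - 0.3) ** 2 + (u[..., 1] - 0.7) ** 2)

def f2(u):
    return np.sin(3 * u[..., 0]) + np.cos(2 * u[..., 1])

g = np.linspace(0, 1, 50)
G1, G2 = np.meshgrid(g, g, indexing="ij")
grid2d = np.stack([G1, G2], axis=-1).reshape(-1, 2)   # 2500 points per group

# Under additivity, each group is maximized on its own 2-D grid:
# 2 * 50^2 evaluations instead of 50^4 for a joint grid over all 4 dimensions.
x_best = np.concatenate([grid2d[np.argmax(f1(grid2d))],
                         grid2d[np.argmax(f2(grid2d))]])
```

The concatenated per-group maximizers jointly maximize the full additive f, which is why the regret can scale linearly rather than exponentially in $D$.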
Neuronal Correlates of the Set-Size Effect in Monkey Lateral Intraparietal Area
Puiu F. Balan, Jeff Oristaglio, David M. Schneider, Jacqueline Gottlieb
PLOS Biology, 2012, DOI: 10.1371/journal.pbio.0060158
Abstract: It has long been known that the brain is limited in the amount of sensory information that it can process at any given time. A well-known form of capacity limitation in vision is the set-size effect, whereby the time needed to find a target increases in the presence of distractors. The set-size effect implies that inputs from multiple objects interfere with each other, but the loci and mechanisms of this interference are unknown. Here we show that the set-size effect has a neural correlate in competitive visuo-visual interactions in the lateral intraparietal area, an area related to spatial attention and eye movements. Monkeys performed a covert visual search task in which they discriminated the orientation of a visual target surrounded by distractors. Neurons encoded target location, but responses associated with both target and distractors declined as a function of distractor number (set size). Firing rates associated with the target in the receptive field correlated with reaction time both within and across set sizes. The findings suggest that competitive visuo-visual interactions in areas related to spatial attention contribute to capacity limitations in visual searches.
Determining Frequentist Confidence Limits Using a Directed Parameter Space Search
Scott F. Daniel, Andrew J. Connolly, Jeff Schneider
Physics, 2012, DOI: 10.1088/0004-637X/794/1/38
Abstract: We consider the problem of inferring constraints on a high-dimensional parameter space with a computationally expensive likelihood function. We propose a machine learning algorithm that maps out the Frequentist confidence limit on parameter space by intelligently targeting likelihood evaluations so as to quickly and accurately characterize the likelihood surface in both low- and high-likelihood regions. We compare our algorithm to Bayesian credible limits derived by the well-tested Markov Chain Monte Carlo (MCMC) algorithm using both multi-modal toy likelihood functions and the 7-year WMAP cosmic microwave background likelihood function. We find that our algorithm correctly identifies the location, general size, and general shape of high-likelihood regions in parameter space while being more robust against multi-modality than MCMC.
Bayesian Optimal Active Search and Surveying
Roman Garnett, Yamuna Krishnamurthy, Xuehan Xiong, Jeff Schneider, Richard Mann
Computer Science, 2012
Abstract: We consider two active binary-classification problems with atypical objectives. In the first, active search, our goal is to actively uncover as many members of a given class as possible. In the second, active surveying, our goal is to actively query points to ultimately predict the proportion of a given class. Numerous real-world problems can be framed in these terms, and in either case typical model-based concerns such as generalization error are only of secondary importance. We approach these problems via Bayesian decision theory; after choosing natural utility functions, we derive the optimal policies. We provide three contributions. In addition to introducing the active surveying problem, we extend previous work on active search in two ways. First, we prove a novel theoretical result, that less-myopic approximations to the optimal policy can outperform more-myopic approximations by any arbitrary degree. We then derive bounds that, for certain models, allow us to reduce (in practice dramatically) the exponential search space required by a naïve implementation of the optimal policy, enabling further lookahead while still ensuring that optimal decisions are always made.

Copyright © 2008-2017 Open Access Library. All rights reserved.