Search Results: 1 - 10 of 403883 matches for "Jeff M. Phillips"
epsilon-Samples of Kernels
Jeff M. Phillips
Computer Science, 2011
Abstract: We study the worst-case error of kernel density estimates via subset approximation. A kernel density estimate of a distribution is the convolution of that distribution with a fixed kernel (e.g. Gaussian kernel). Given a subset (i.e. a point set) of the input distribution, we can compare the kernel density estimate of the input distribution with that of the subset and bound the worst-case error. If the maximum error is eps, then this subset can be thought of as an eps-sample (aka an eps-approximation) of the range space defined with the input distribution as the ground set and the fixed kernel representing the family of ranges. Interestingly, in this case the ranges are not binary, but have a continuous range (for simplicity we focus on kernels with range of [0,1]); these allow for smoother notions of range spaces. It turns out that the use of this smoother family of range spaces has an added benefit of greatly decreasing the size required for eps-samples. For instance, in the plane the size is O((1/eps^{4/3}) log^{2/3}(1/eps)) for disks (based on VC-dimension arguments) but is only O((1/eps) sqrt{log (1/eps)}) for Gaussian kernels and for kernels with bounded slope that only affect a bounded domain. These bounds are accomplished by studying the discrepancy of these "kernel" range spaces, and here the improvement in bounds is even more pronounced. In the plane, we show the discrepancy is O(sqrt{log n}) for these kernels, whereas for balls there is a lower bound of Omega(n^{1/4}).
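As a rough, hypothetical illustration of the eps-sample notion above (not code from the paper), the Python sketch below measures the worst-case gap between the Gaussian kernel density estimate of a point set and that of a random subset, approximating the supremum over a sample of query points; the data, subset size, and bandwidth are arbitrary choices.

import numpy as np

def kde(points, queries, sigma=0.5):
    # Gaussian kernel density estimate at each query: average kernel value over the data points.
    d2 = ((queries[:, None, :] - points[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2)).mean(axis=1)

rng = np.random.default_rng(0)
P = rng.normal(size=(1000, 2))                    # input point set (stand-in for a distribution)
S = P[rng.choice(len(P), 100, replace=False)]     # candidate eps-sample (here just a random subset)

queries = rng.uniform(-3, 3, size=(2000, 2))      # probe points approximating the sup over the plane
eps = np.max(np.abs(kde(P, queries) - kde(S, queries)))
print(f"observed worst-case KDE error over sampled queries: {eps:.4f}")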
Chernoff-Hoeffding Inequality and Applications
Jeff M. Phillips
Computer Science, 2012
Abstract: When dealing with modern big data sets, a very common theme is reducing the set through a random process. Such reductions generally work by making "many simple estimates" of the full data set, and then judging them as a whole. Perhaps magically, these "many simple estimates" can provide a very accurate and small representation of the large data set. The key tool in showing how many of these simple estimates are needed for a fixed accuracy trade-off is the Chernoff-Hoeffding inequality [Che52, Hoe63]. This document provides a simple form of this bound, and two examples of its use.
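A minimal worked example of the kind of bound discussed above, assuming the standard Hoeffding form for i.i.d. samples in [0, 1]: Pr[|empirical mean - true mean| >= eps] <= 2 exp(-2 n eps^2), so n = ceil(ln(2/delta) / (2 eps^2)) samples suffice for failure probability delta. The Python sketch below computes that sample size and sanity-checks it empirically; the Bernoulli(0.3) source is an arbitrary choice.

import math
import numpy as np

def hoeffding_sample_size(eps, delta):
    # Hoeffding: for n i.i.d. samples in [0, 1], Pr[|mean - E| >= eps] <= 2*exp(-2*n*eps^2),
    # so n >= ln(2/delta) / (2*eps^2) gives failure probability at most delta.
    return math.ceil(math.log(2 / delta) / (2 * eps ** 2))

eps, delta = 0.05, 0.01
n = hoeffding_sample_size(eps, delta)
print(f"{n} samples suffice for error {eps} with probability at least {1 - delta}")

# Empirical sanity check: estimate the mean of a Bernoulli(0.3) from n samples, 1000 times.
rng = np.random.default_rng(1)
fails = (np.abs(rng.binomial(1, 0.3, size=(1000, n)).mean(axis=1) - 0.3) >= eps).mean()
print(f"fraction of trials exceeding eps: {fails:.4f} (bound: {delta})")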
Algorithms for eps-approximations of Terrains
Jeff M. Phillips
Computer Science, 2008
Abstract: Consider a point set $D$ with a measure function $w : D \to \mathbb{R}$. Let $A$ be the set of subsets of $D$ induced by containment in a shape from some geometric family (e.g. axis-aligned rectangles, half planes, balls, k-oriented polygons). We say a range space $(D, A)$ has an $\varepsilon$-approximation $P$ if $\max_{R \in A} \left| w(R \cap P)/w(P) - w(R \cap D)/w(D) \right| \le \varepsilon$. We describe algorithms for deterministically constructing discrete $\varepsilon$-approximations for continuous point sets such as distributions or terrains. Furthermore, for certain families of subsets $A$, such as those described by axis-aligned rectangles, we reduce the size of the $\varepsilon$-approximations by almost a square root, from $O((1/\varepsilon^2) \log(1/\varepsilon))$ to $O((1/\varepsilon)\,\mathrm{polylog}(1/\varepsilon))$. This is often the first step in transforming a continuous problem into a discrete one for which combinatorial techniques can be applied. We describe applications of this result in geo-spatial analysis, biosurveillance, and sensor networks.
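The definition above can be checked directly, if not efficiently. The following Python sketch is a brute-force Monte Carlo illustration (not the paper's deterministic construction): it estimates the error of a random candidate subset against a weighted ground set over sampled axis-aligned rectangles; all sizes and distributions are arbitrary.

import numpy as np

rng = np.random.default_rng(2)
D = rng.uniform(0, 1, size=(5000, 2))            # ground set (a discretized stand-in for a terrain)
w = rng.uniform(0.5, 1.5, size=len(D))           # measure function w : D -> R
P_idx = rng.choice(len(D), 200, replace=False)   # candidate eps-approximation (a random subset here)

def rect_error(lo, hi):
    # |w(R ∩ P)/w(P) - w(R ∩ D)/w(D)| for the axis-aligned rectangle R = [lo, hi].
    in_R = np.all((D >= lo) & (D <= hi), axis=1)
    full = w[in_R].sum() / w.sum()
    sub = w[P_idx][in_R[P_idx]].sum() / w[P_idx].sum()
    return abs(sub - full)

# Monte Carlo over random rectangles approximates the max over the family A.
corners = rng.uniform(0, 1, size=(2000, 2, 2))
eps = max(rect_error(np.minimum(c[0], c[1]), np.maximum(c[0], c[1])) for c in corners)
print(f"observed eps over sampled rectangles: {eps:.4f}")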
A Gentle Introduction to the Kernel Distance
Jeff M. Phillips, Suresh Venkatasubramanian
Computer Science, 2011
Abstract: This document reviews the definition of the kernel distance, providing a gentle introduction tailored to a reader with background in theoretical computer science, but limited exposure to technology more common to machine learning, functional analysis and geometric measure theory. The key aspect of the kernel distance developed here is its interpretation as an L_2 distance between probability measures or various shapes (e.g. point sets, curves, surfaces) embedded in a vector space (specifically an RKHS). This structure enables several elegant and efficient solutions to data analysis problems. We conclude with a glimpse into the mathematical underpinnings of this measure, highlighting its recent independent evolution in two separate fields.
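As an illustrative sketch (assuming the usual normalized, mean-embedding form of the kernel distance with a Gaussian kernel, which may differ in conventions from the paper), the Python below computes D_K(P, Q)^2 = kappa(P, P) + kappa(Q, Q) - 2 kappa(P, Q) for two small point sets.

import numpy as np

def kappa(P, Q, sigma=1.0):
    # Average pairwise Gaussian kernel similarity between two point sets.
    d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2)).mean()

def kernel_distance(P, Q, sigma=1.0):
    # D_K(P, Q)^2 = kappa(P,P) + kappa(Q,Q) - 2*kappa(P,Q): an L_2 distance between
    # the (mean) embeddings of P and Q in the RKHS of the kernel.
    sq = kappa(P, P, sigma) + kappa(Q, Q, sigma) - 2 * kappa(P, Q, sigma)
    return np.sqrt(max(sq, 0.0))

rng = np.random.default_rng(3)
P = rng.normal(0.0, 1.0, size=(300, 2))
Q = rng.normal(0.5, 1.0, size=(300, 2))          # a shifted copy of the same kind of data
print(f"kernel distance D_K(P, Q): {kernel_distance(P, Q):.4f}")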
Relative Errors for Deterministic Low-Rank Matrix Approximations
Mina Ghashami, Jeff M. Phillips
Computer Science, 2013
Abstract: We consider processing an $n \times d$ matrix $A$ in a stream with row-wise updates according to a recent algorithm called Frequent Directions (Liberty, KDD 2013). This algorithm maintains an $\ell \times d$ matrix $Q$ deterministically, processing each row in $O(d \ell^2)$ time; the processing time can be decreased to $O(d \ell)$ with a slight modification of the algorithm and a constant increase in space. We show that if one sets $\ell = k + k/\varepsilon$ and returns $Q_k$, the $k \times d$ matrix that is the best rank-$k$ approximation to $Q$, then we achieve the following properties: $\|A - A_k\|_F^2 \le \|A\|_F^2 - \|Q_k\|_F^2 \le (1+\varepsilon) \|A - A_k\|_F^2$, and, letting $\pi_{Q_k}(A)$ denote the projection of $A$ onto the rowspace of $Q_k$, $\|A - \pi_{Q_k}(A)\|_F^2 \le (1+\varepsilon) \|A - A_k\|_F^2$. We also show that Frequent Directions cannot be adapted to a sparse version in an obvious way that retains the $\ell$ original rows of the matrix, as opposed to a linear combination or sketch of the rows.
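For context, here is a minimal Python sketch of the Frequent Directions idea referenced above, following Liberty's description rather than the paper's modified variant: keep an l x d sketch and, whenever it fills, shrink all squared singular values by the smallest one to free a row. Parameters and the test matrix are arbitrary.

import numpy as np

def frequent_directions(A, l):
    # Maintain an l x d sketch B of the rows of A seen so far; whenever B has no free
    # row, take an SVD and shrink every squared singular value by the smallest one,
    # which zeroes out at least the last row of the sketch.
    n, d = A.shape
    B = np.zeros((l, d))
    for row in A:
        free = np.where(~B.any(axis=1))[0]
        if len(free) == 0:
            U, s, Vt = np.linalg.svd(B, full_matrices=False)
            s = np.sqrt(np.maximum(s ** 2 - s[-1] ** 2, 0.0))
            B = s[:, None] * Vt
            free = np.where(~B.any(axis=1))[0]
        B[free[0]] = row
    return B

rng = np.random.default_rng(4)
A = rng.normal(size=(1000, 50)) @ rng.normal(size=(50, 50))   # arbitrary test matrix
B = frequent_directions(A, l=20)
# The sketch's covariance error ||A^T A - B^T B||_2 is on the order of ||A||_F^2 / l.
print(f"covariance error: {np.linalg.norm(A.T @ A - B.T @ B, 2):.1f}, "
      f"||A||_F^2 / l: {np.linalg.norm(A, 'fro') ** 2 / 20:.1f}")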
Subsampling in Smoothed Range Spaces
Jeff M. Phillips, Yan Zheng
Computer Science, 2015
Abstract: We consider smoothed versions of geometric range spaces, so an element of the ground set (e.g. a point) can be contained in a range with a non-binary value in $[0,1]$. Similar notions have been considered for kernels; we extend them to more general types of ranges. We then consider approximations of these range spaces through $\varepsilon$-nets and $\varepsilon$-samples (aka $\varepsilon$-approximations). We characterize when size bounds for $\varepsilon$-samples on kernels can be extended to these more general smoothed range spaces. We also describe new generalizations for $\varepsilon$-nets to these range spaces and show when results from binary range spaces can carry over to these smoothed ones.
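A small, hypothetical example of a smoothed range, only meant to make the definition concrete (the range family and all parameters are illustrative, not from the paper): a halfplane whose membership ramps linearly from 1 to 0 across a band of width 2w, so values lie in [0, 1] rather than {0, 1}, with a Monte Carlo estimate of the sample error of a random subset.

import numpy as np

def smoothed_halfplane(points, normal, offset, w=0.2):
    # Soft membership in a halfplane: 1 well inside, 0 well outside, with a linear
    # ramp of width 2w across the boundary, so values lie in [0, 1] rather than {0, 1}.
    signed = points @ normal - offset
    return np.clip(0.5 + signed / (2 * w), 0.0, 1.0)

rng = np.random.default_rng(5)
P = rng.uniform(-1, 1, size=(5000, 2))              # ground set
S = P[rng.choice(len(P), 150, replace=False)]       # candidate eps-sample (random subset)

# Estimate sup over ranges of |average over P - average over S| for smoothed halfplanes.
errs = []
for _ in range(500):
    direction = rng.normal(size=2)
    direction /= np.linalg.norm(direction)
    offset = rng.uniform(-1, 1)
    errs.append(abs(smoothed_halfplane(P, direction, offset).mean()
                    - smoothed_halfplane(S, direction, offset).mean()))
print(f"observed error over sampled smoothed ranges: {max(errs):.4f}")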
Shape Fitting on Point Sets with Probability Distributions
Maarten Löffler, Jeff M. Phillips
Computer Science, 2008
Abstract: A typical computational geometry problem begins: Consider a set P of n points in R^d. However, many applications today work with input that is not precisely known, for example when the data is sensed and has some known error model. What if we do not know the set P exactly, but rather we have a probability distribution mu_p governing the location of each point p in P? Consider a set of (non-fixed) points P, and let mu_P be the probability distribution of this set. We study several measures (e.g. the radius of the smallest enclosing ball, or the area of the smallest enclosing box) with respect to mu_P. The solutions to these problems do not, as in the traditional case, consist of a single answer, but rather a distribution of answers. We describe several data structures that approximate distributions of answers for shape fitting problems. We provide simple and efficient randomized algorithms for computing all of these data structures, which are easy to implement and practical. We provide experimental results to support this. We also provide more involved deterministic algorithms for some of these data structures that run in time polynomial in n and 1/eps, where eps is the approximation factor.
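A rough sketch of the setting above, using plain Monte Carlo rather than the paper's data structures: each point's location is drawn from its own Gaussian mu_p, and the measure of interest (here the area of the smallest axis-aligned enclosing box) is summarized as a distribution of answers. All distributions and sizes are arbitrary.

import numpy as np

rng = np.random.default_rng(6)
means = rng.uniform(0, 10, size=(50, 2))        # one Gaussian mu_p per uncertain point
sigmas = rng.uniform(0.1, 0.5, size=50)

def bbox_area(points):
    # Shape-fitting measure: area of the smallest axis-aligned enclosing box.
    span = points.max(axis=0) - points.min(axis=0)
    return span[0] * span[1]

# The answer is a distribution, not a single number: sample instantiations of the point set.
areas = np.sort([bbox_area(means + rng.normal(size=means.shape) * sigmas[:, None])
                 for _ in range(2000)])
print(f"median area: {areas[len(areas) // 2]:.2f}, "
      f"5%-95% range: [{areas[int(0.05 * len(areas))]:.2f}, {areas[int(0.95 * len(areas))]:.2f}]")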
Streaming Kernel Principal Component Analysis
Mina Ghashami, Daniel Perry, Jeff M. Phillips
Computer Science, 2015
Abstract: Kernel principal component analysis (KPCA) provides a concise set of basis vectors which capture non-linear structures within large data sets, and is a central tool in data analysis and learning. To allow for non-linear relations, typically a full $n \times n$ kernel matrix is constructed over $n$ data points, but this requires too much space and time for large values of $n$. Techniques such as the Nystr\"om method and random feature maps can help towards this goal, but they do not explicitly maintain the basis vectors in a stream and take more space than desired. We propose a new approach for streaming KPCA which maintains a small set of basis elements in a stream, requiring space only logarithmic in $n$, and also improves the dependence on the error parameter. Our technique combines random feature maps with recent advances in matrix sketching; it has guaranteed spectral norm error bounds with respect to the original kernel matrix, and it compares favorably in practice to state-of-the-art approaches.
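To make the ingredients concrete, here is a hypothetical streaming sketch that combines random Fourier features (which approximate a Gaussian kernel) with a feature-space covariance maintained in one pass; its space is independent of n, but it is not the paper's algorithm, which additionally applies matrix sketching to the feature vectors.

import numpy as np

rng = np.random.default_rng(7)
d, m, sigma = 5, 200, 1.0
W = rng.normal(scale=1.0 / sigma, size=(m, d))   # random Fourier feature directions
b = rng.uniform(0, 2 * np.pi, size=m)

def features(x):
    # Random Fourier features: z(x) . z(y) approximates the Gaussian kernel K(x, y).
    return np.sqrt(2.0 / m) * np.cos(W @ x + b)

# One pass over the stream, keeping only an m x m feature covariance (independent of n).
C = np.zeros((m, m))
for _ in range(5000):
    z = features(rng.normal(size=d))             # next point in the stream
    C += np.outer(z, z)

# The leading eigenvectors of C are approximate kernel principal components.
eigvals, eigvecs = np.linalg.eigh(C)
basis = eigvecs[:, -10:]                         # 10 leading basis elements in feature space
print("leading eigenvalues:", np.round(eigvals[-10:][::-1], 2))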
The Hunting of the Bump: On Maximizing Statistical Discrepancy
Deepak Agarwal, Jeff M. Phillips, Suresh Venkatasubramanian
Computer Science, 2005
Abstract: Anomaly detection has important applications in biosurveillance and environmental monitoring. When comparing measured data to data drawn from a baseline distribution, merely finding clusters in the measured data may not actually reveal true anomalies: these clusters may simply be clusters of the baseline distribution. Hence, a discrepancy function is often used to examine how different the measured data is from the baseline data within a region. An anomalous region is thus defined to be one with high discrepancy. In this paper, we present algorithms for maximizing statistical discrepancy functions over the space of axis-parallel rectangles. We give provable approximation guarantees, both additive and relative, and our methods apply to any convex discrepancy function. Our algorithms work by connecting statistical discrepancy to combinatorial discrepancy; roughly speaking, we show that in order to maximize a convex discrepancy function over a class of shapes, one needs only maximize a linear discrepancy function over the same set of shapes. We derive general discrepancy functions for data generated from a one-parameter exponential family. This generalizes the widely-used Kulldorff scan statistic for data from a Poisson distribution. We present an algorithm running in $O(\frac{1}{\epsilon} n^2 \log^2 n)$ time that computes the maximum discrepancy rectangle to within additive error $\epsilon$, for the Kulldorff scan statistic. Similar results hold for relative error and for discrepancy functions for data coming from Gaussian, Bernoulli, and gamma distributions. Prior to our work, the best known algorithms were exact and ran in time $O(n^4)$.
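As an illustrative baseline (assuming the standard Kulldorff discrepancy for measured fraction m and baseline fraction b, d(m, b) = m ln(m/b) + (1-m) ln((1-m)/(1-b)) when m > b), the Python sketch below maximizes it by brute force over grid-aligned rectangles; the paper's contribution is precisely avoiding this kind of exhaustive search. The planted anomaly and grid size are arbitrary.

import numpy as np

def kulldorff(m, b):
    # Kulldorff (Poisson) discrepancy for measured fraction m and baseline fraction b.
    if m <= b or m >= 1.0 or b <= 0.0:
        return 0.0
    return m * np.log(m / b) + (1 - m) * np.log((1 - m) / (1 - b))

rng = np.random.default_rng(8)
g = 12                                           # grid resolution; brute force over O(g^4) rectangles
baseline = rng.uniform(1, 2, size=(g, g))
measured = baseline.copy()
measured[3:6, 4:8] *= 3.0                        # planted anomaly

best, best_rect = 0.0, None
for i0 in range(g):
    for i1 in range(i0 + 1, g + 1):
        for j0 in range(g):
            for j1 in range(j0 + 1, g + 1):
                m = measured[i0:i1, j0:j1].sum() / measured.sum()
                b = baseline[i0:i1, j0:j1].sum() / baseline.sum()
                d = kulldorff(m, b)
                if d > best:
                    best, best_rect = d, (i0, i1, j0, j1)
print(f"max discrepancy {best:.4f} at rectangle {best_rect}")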
Outlier Robust ICP for Minimizing Fractional RMSD
Jeff M. Phillips, Ran Liu, Carlo Tomasi
Computer Science, 2006
Abstract: We describe a variation of the iterative closest point (ICP) algorithm for aligning two point sets under a set of transformations. Our algorithm is superior to previous algorithms because (1) in determining the optimal alignment, it identifies and discards likely outliers in a statistically robust manner, and (2) it is guaranteed to converge to a locally optimal solution. To this end, we formalize a new distance measure, fractional root mean squared distance (FRMSD), which incorporates the fraction of inliers into the distance function. We lay out a specific implementation, but our framework can easily incorporate most techniques and heuristics from modern registration algorithms. We experimentally validate our algorithm against previous techniques on 2- and 3-dimensional data exposed to a variety of outlier types.
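A small, hypothetical sketch of the FRMSD criterion described above: given nearest-neighbour correspondences (as one ICP iteration would compute), it scores each inlier fraction f by the RMS distance of the best f-fraction of matches divided by f^lambda, and picks the minimizing f. The value of lambda and all data are illustrative choices, and the full algorithm would alternate this step with re-estimating the transformation.

import numpy as np

def frmsd(sq_dists, f, lam=3.0):
    # Fractional RMSD: RMS distance over the best f-fraction of matches, divided by f^lam
    # so that shrinking the inlier set is penalized (lam is a free parameter).
    k = max(1, int(round(f * len(sq_dists))))
    best = np.sort(sq_dists)[:k]
    return np.sqrt(best.mean()) / f ** lam

rng = np.random.default_rng(9)
model = rng.uniform(0, 1, size=(200, 2))
data = model + rng.normal(scale=0.01, size=model.shape)   # noisy copy of the model
data[:40] = rng.uniform(0, 1, size=(40, 2))               # 20% of the points become outliers

# Nearest-neighbour correspondences (brute force), as one ICP iteration would compute them.
sq_nn = ((data[:, None, :] - model[None, :, :]) ** 2).sum(axis=2).min(axis=1)

# Choose the inlier fraction that minimizes FRMSD instead of trusting every match.
fractions = np.linspace(0.2, 1.0, 81)
scores = [frmsd(sq_nn, f) for f in fractions]
print(f"best inlier fraction f = {fractions[int(np.argmin(scores))]:.2f}")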