oalib
Search Results: 1-10 of 100 matches
All listed articles are free for downloading (OA Articles)
Neyman-Pearson classification, convexity and stochastic constraints  [PDF]
Philippe Rigollet,Xin Tong
Computer Science , 2011,
Abstract: Motivated by problems of anomaly detection, this paper applies the Neyman-Pearson paradigm to deal with asymmetric errors in binary classification with a convex loss. Given a finite collection of classifiers, we combine them to obtain a new classifier that simultaneously satisfies the following two properties with high probability: (i) its probability of type I error is below a pre-specified level, and (ii) its probability of type II error is close to the minimum possible. The proposed classifier is obtained by solving an optimization problem with an empirical objective and an empirical constraint. New techniques to handle such problems are developed and have consequences for chance-constrained programming.
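The "empirical objective, empirical constraint" formulation can be illustrated with a toy sketch (not the paper's actual procedure: an exhaustive grid over convex combinations of two invented linear score functions stands in for the convex program, and the data, scores, and level tau are all made up for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
X0 = rng.normal(-1.0, 1.0, size=(2000, 2))   # class 0: type I errors are measured here
X1 = rng.normal(+1.0, 1.0, size=(2000, 2))   # class 1

# Two base score functions standing in for the finite collection of classifiers.
scores = [lambda X: X[:, 0], lambda X: 0.5 * (X[:, 0] + X[:, 1])]

tau = 0.35                                   # bound on the empirical hinge type I risk
best_lam, best_r2 = None, np.inf
for lam in np.linspace(0.0, 1.0, 101):       # convex combinations of the two scores
    f0 = lam * scores[0](X0) + (1 - lam) * scores[1](X0)
    f1 = lam * scores[0](X1) + (1 - lam) * scores[1](X1)
    r1 = np.maximum(0.0, 1.0 + f0).mean()    # convex (hinge) surrogate of type I error
    r2 = np.maximum(0.0, 1.0 - f1).mean()    # convex (hinge) surrogate of type II error
    if r1 <= tau and r2 < best_r2:           # empirical constraint, empirical objective
        best_lam, best_r2 = lam, r2
```

The grid search is only for illustration; the point is that both the objective (type II surrogate risk) and the constraint (type I surrogate risk) are empirical quantities computed on samples.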
Neyman-Pearson Classification under High-Dimensional Settings  [PDF]
Anqi Zhao,Yang Feng,Lie Wang,Xin Tong
Statistics , 2015,
Abstract: Most existing binary classification methods target the overall classification risk and may fail to serve real-world applications such as cancer diagnosis, where users are more concerned with the risk of misclassifying one specific class than the other. The Neyman-Pearson (NP) paradigm was introduced in this context as a statistical framework for handling asymmetric type I/II error priorities. It seeks classifiers with minimal type II error subject to a type I error constrained below a user-specified level. This article is the first attempt to construct classifiers with guaranteed theoretical performance under the NP paradigm in high-dimensional settings. Based on the fundamental Neyman-Pearson Lemma, we use a plug-in approach to construct NP-type classifiers for Naive Bayes models. The proposed classifiers satisfy the NP oracle inequalities, which are natural NP-paradigm counterparts of the oracle inequalities in classical binary classification. Besides their desirable theoretical properties, we also demonstrate their numerical advantages in prioritized error control via both simulation and real data studies.
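The plug-in construction can be sketched as follows (a minimal illustration under Gaussian Naive Bayes assumptions, not the authors' exact estimator; the data and every parameter are invented): estimate the class-conditional densities, form the likelihood ratio, and set the threshold at an empirical quantile computed on held-out class-0 data so that the empirical type I error stays near the target level alpha.

```python
import numpy as np

def fit_gaussian_nb(X):
    """Per-feature Gaussian parameters (Naive Bayes: features treated as independent)."""
    return X.mean(axis=0), X.std(axis=0, ddof=1)

def log_density(X, mu, sigma):
    """Sum of per-feature Gaussian log-densities."""
    return -0.5 * (((X - mu) / sigma) ** 2 + np.log(2 * np.pi * sigma ** 2)).sum(axis=1)

def np_plugin_classifier(X0_train, X1_train, X0_holdout, alpha=0.05):
    """Plug-in NP classifier: threshold the estimated likelihood ratio so that
    the empirical type I error on held-out class-0 data is at most alpha."""
    mu0, s0 = fit_gaussian_nb(X0_train)
    mu1, s1 = fit_gaussian_nb(X1_train)
    llr = lambda X: log_density(X, mu1, s1) - log_density(X, mu0, s0)
    s = np.sort(llr(X0_holdout))
    k = int(np.ceil((1 - alpha) * len(s))) - 1   # empirical (1 - alpha) quantile
    t = s[k]
    return lambda X: (llr(X) > t).astype(int)    # 1 = classify as class 1

rng = np.random.default_rng(0)
X0 = rng.normal(0.0, 1.0, size=(2000, 2))        # class 0
X1 = rng.normal(1.5, 1.0, size=(1000, 2))        # class 1
clf = np_plugin_classifier(X0[:1000], X1, X0[1000:], alpha=0.05)

X0_test = rng.normal(0.0, 1.0, size=(5000, 2))
X1_test = rng.normal(1.5, 1.0, size=(5000, 2))
type1 = clf(X0_test).mean()                      # empirical type I error, near alpha
power = clf(X1_test).mean()                      # empirical power (1 - type II error)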
High-Rate Vector Quantization for the Neyman-Pearson Detection of Correlated Processes  [PDF]
Joffrey Villard,Pascal Bianchi
Mathematics , 2010, DOI: 10.1109/TIT.2011.2158479
Abstract: This paper investigates the effect of quantization on the performance of the Neyman-Pearson test. It is assumed that a sensing unit observes samples of a correlated stationary ergodic multivariate process. Each sample is passed through an N-point quantizer and transmitted to a decision device which performs a binary hypothesis test. For any false alarm level, it is shown that the miss probability of the Neyman-Pearson test converges to zero exponentially as the number of samples tends to infinity, assuming that the observed process satisfies certain mixing conditions. The main contribution of this paper is to provide a compact closed-form expression for the error exponent in the high-rate regime, i.e., when the number N of quantization levels tends to infinity, generalizing previous results of Gupta and Hero to the case of non-independent observations. If d represents the dimension of one sample, it is proved that the error exponent converges at rate N^{2/d} to the one obtained in the absence of quantization. As an application, relevant high-rate quantization strategies which lead to a large error exponent are determined. Numerical results indicate that the proposed quantization rule can yield better performance than existing ones in terms of detection error.
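The basic phenomenon can be reproduced with a Monte Carlo toy (hedged: this sketch uses i.i.d. Gaussian samples and a plain uniform quantizer, whereas the paper treats correlated processes and optimized quantizers; all parameter values are invented): quantize each sample into one of N cells, run the likelihood ratio test on the cell indices with the false-alarm level fixed empirically, and observe the miss probability shrink as N grows.

```python
import numpy as np
from math import erf, sqrt

def Phi(x):
    """Standard Gaussian CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def cell_log_lr(N, theta, lo=-4.0, hi=4.0):
    """Per-cell log likelihood ratios for an N-level uniform quantizer on [lo, hi]:
    H0: y ~ N(0,1) versus H1: y ~ N(theta,1)."""
    interior = np.linspace(lo, hi, N + 1)[1:-1]           # N - 1 interior cell edges
    edges = np.concatenate(([-np.inf], interior, [np.inf]))
    p0 = np.array([Phi(edges[i + 1]) - Phi(edges[i]) for i in range(N)])
    p1 = np.array([Phi(edges[i + 1] - theta) - Phi(edges[i] - theta) for i in range(N)])
    return interior, np.log(p1 / p0)

def miss_probability(N, theta=0.6, n=20, trials=4000, alpha=0.1, seed=0):
    """Monte Carlo miss probability of the quantized LRT at false-alarm level alpha."""
    rng = np.random.default_rng(seed)
    interior, llr = cell_log_lr(N, theta)
    y0 = rng.normal(0.0, 1.0, size=(trials, n))           # samples under H0
    y1 = rng.normal(theta, 1.0, size=(trials, n))         # samples under H1
    s0 = llr[np.searchsorted(interior, y0)].sum(axis=1)   # LLR of quantized samples
    s1 = llr[np.searchsorted(interior, y1)].sum(axis=1)
    t = np.quantile(s0, 1.0 - alpha)                      # fix the false-alarm level
    return (s1 <= t).mean()                               # miss probability

miss = {N: miss_probability(N) for N in (2, 4, 8, 32)}    # miss shrinks as N grows
```

With these invented settings, the coarse N = 2 (sign) quantizer loses noticeably more detection performance than the fine N = 32 quantizer, mirroring the convergence-to-unquantized behavior the abstract describes.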
High-Rate Quantization for the Neyman-Pearson Detection of Hidden Markov Processes  [PDF]
Joffrey Villard,Pascal Bianchi,Eric Moulines,Pablo Piantanida
Mathematics , 2010, DOI: 10.1109/ITWKSPS.2010.5503209
Abstract: This paper investigates the decentralized detection of Hidden Markov Processes using the Neyman-Pearson test. We consider a network formed by a large number of distributed sensors. Sensors' observations are noisy snapshots of a Markov process to be detected. Each (real) observation is quantized on log2(N) bits before being transmitted to a fusion center which makes the final decision. For any false alarm level, it is shown that the miss probability of the Neyman-Pearson test converges to zero exponentially as the number of sensors tends to infinity. The error exponent is provided using recent results on Hidden Markov Models. In order to obtain informative expressions of the error exponent as a function of the quantization rule, we further investigate the case where the number N of quantization levels tends to infinity, following the approach developed in [Gupta & Hero, 2003]. In this regime, we provide the quantization rule maximizing the error exponent. Illustration of our results is provided in the case of the detection of a Gauss-Markov signal in noise. In terms of error exponent, the proposed quantization rule significantly outperforms the one proposed by [Gupta & Hero, 2003] for i.i.d. observations.
Constrained Convex Neyman-Pearson Classification Using an Outer Approximation Splitting Method  [PDF]
Michel Barlaud,Patrick L. Combettes,Lionel Fillatre
Statistics , 2015,
Abstract: This paper presents an algorithm for Neyman-Pearson classification. While empirical risk minimization approaches focus on minimizing a global risk, the Neyman-Pearson framework minimizes the type II risk under an upper-bound constraint on the type I risk. Since the $0/1$ loss function is not convex, optimization methods employ convex surrogates that lead to tractable minimization problems. As shown in recent work, statistical bounds can be derived to quantify the cost of using such surrogates instead of the exact $0/1$ loss. However, no specific algorithm has yet been proposed to actually solve the resulting minimization problem numerically. The contribution of this paper is an efficient splitting algorithm that addresses this issue. Our method alternates a gradient step on the surrogate objective risk and an approximate projection step onto the constraint set, implemented by means of an outer approximation subgradient projection algorithm. Experiments on both synthetic and biological data show the efficiency of the proposed method.
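A stripped-down version of the alternating scheme might look like this (a sketch, not the authors' algorithm: it uses a logistic surrogate, a linear classifier, and one Polyak-style subgradient halfspace projection per iteration as the approximate projection onto the type I constraint set; data and constants are invented):

```python
import numpy as np

def logistic_risk(w, X, sign):
    """Mean logistic surrogate loss; sign=+1 penalizes negative scores on X
    (type II surrogate), sign=-1 penalizes positive scores (type I surrogate)."""
    return np.log1p(np.exp(-sign * (X @ w))).mean()

def logistic_grad(w, X, sign):
    """Gradient of the mean logistic surrogate loss."""
    s = -sign / (1.0 + np.exp(sign * (X @ w)))
    return (X * s[:, None]).mean(axis=0)

rng = np.random.default_rng(2)
X0 = rng.normal(-1.0, 1.0, size=(1500, 2))    # class 0: type I risk measured here
X1 = rng.normal(+1.0, 1.0, size=(1500, 2))    # class 1: type II risk measured here

tau, eta = 0.3, 0.2                           # type I surrogate bound, step size
w = np.zeros(2)
for _ in range(500):
    # gradient step on the objective: the type II surrogate risk
    w -= eta * logistic_grad(w, X1, +1)
    # approximate projection onto {w : type I surrogate risk <= tau}
    # via a single subgradient halfspace projection
    g = logistic_risk(w, X0, -1)
    if g > tau:
        grad = logistic_grad(w, X0, -1)
        w -= (g - tau) / (grad @ grad + 1e-12) * grad

type1_surrogate = logistic_risk(w, X0, -1)    # ends up at or below tau
type2_surrogate = logistic_risk(w, X1, +1)
```

The halfspace projection step only moves toward the constraint set rather than landing exactly on it, which is the spirit of an approximate (outer-approximation) projection; the paper's actual method is more elaborate.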
Neyman-Pearson Detection of a Gaussian Source using Dumb Wireless Sensors  [PDF]
Pascal Bianchi,Jeremie Jakubowicz,Francois Roueff
Mathematics , 2010,
Abstract: We investigate the performance of Neyman-Pearson detection of a stationary Gaussian process in noise, using a large wireless sensor network (WSN). In our model, each sensor compresses its observation sequence using a linear precoder. The final decision is taken by a fusion center (FC) based on the compressed information. Two families of precoders are studied: random i.i.d. precoders and orthogonal precoders. We analyse their performance in the regime where both the number of sensors k and the number of samples n per sensor tend to infinity at the same rate, that is, k/n tends to a constant c in (0, 1). Our contributions are as follows. 1) Using results from random matrix theory and the theory of large Toeplitz matrices, it is proved that the miss probability of the Neyman-Pearson detector converges exponentially to zero when the above families of precoders are used. Closed-form expressions for the corresponding error exponents are provided. 2) In particular, we propose a practical orthogonal precoding strategy, the Principal Frequencies Strategy (PFS), which achieves the best error exponent among all orthogonal strategies and requires very little signaling overhead between the central processor and the nodes of the network. 3) Moreover, when the PFS is used, a simplified low-complexity testing procedure can be implemented at the FC. We show that the proposed suboptimal test enjoys the same error exponent as the Neyman-Pearson test, which indicates a similar asymptotic behaviour of the performance. We illustrate our findings with numerical experiments on several examples.
On the Neyman-Pearson problem for law-invariant risk measures and robust utility functionals  [PDF]
Alexander Schied
Mathematics , 2004, DOI: 10.1214/105051604000000341
Abstract: Motivated by optimal investment problems in mathematical finance, we consider a variational problem of Neyman-Pearson type for law-invariant robust utility functionals and convex risk measures. Explicit solutions are found for quantile-based coherent risk measures and related utility functionals. Typically, these solutions exhibit a critical phenomenon: If the capital constraint is below some critical value, then the solution will coincide with a classical solution; above this critical value, the solution is a superposition of a classical solution and a less risky or even risk-free investment. For general risk measures and utility functionals, it is shown that there exists a solution that can be written as a deterministic increasing function of the price density.
Neyman-Pearson Detection of Gauss-Markov Signals in Noise: Closed-Form Error Exponent and Properties  [PDF]
Youngchul Sung,Lang Tong,H. Vincent Poor
Mathematics , 2005,
Abstract: The performance of Neyman-Pearson detection of correlated stochastic signals using noisy observations is investigated via the error exponent for the miss probability with a fixed level. Using the state-space structure of the signal and observation model, a closed-form expression for the error exponent is derived, and the connection between the asymptotic behavior of the optimal detector and that of the Kalman filter is established. The properties of the error exponent are investigated for the scalar case. It is shown that the error exponent has distinct characteristics with respect to correlation strength: for signal-to-noise ratio (SNR) > 1 the error exponent decreases monotonically as the correlation becomes stronger, whereas for SNR < 1 there is an optimal correlation that maximizes the error exponent for a given SNR.
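Under Stein's-lemma-type regularity, the miss exponent at a fixed level equals the Kullback-Leibler divergence rate between the two hypotheses, and the H1 likelihood can be evaluated through the Kalman innovations decomposition, which is where the filter connection enters. The following is a rough numerical sketch with invented parameters, not the paper's closed-form expression:

```python
import numpy as np

def kalman_loglik(y, a, q, r):
    """Log-likelihood of y under the scalar state-space model
    x[k+1] = a*x[k] + w,  w ~ N(0, q);   y[k] = x[k] + v,  v ~ N(0, r),
    evaluated via the Kalman innovations decomposition."""
    xhat, P, ll = 0.0, q / (1.0 - a * a), 0.0   # stationary prior on the state
    for yk in y:
        S = P + r                      # innovation variance
        e = yk - xhat                  # innovation
        ll += -0.5 * (np.log(2 * np.pi * S) + e * e / S)
        K = P / S                      # Kalman gain
        xhat = a * (xhat + K * e)      # measurement update, then time update
        P = a * a * (P - K * P) + q    # Riccati recursion
    return ll

a, q, r = 0.8, 0.36, 1.0               # illustrative AR(1) signal and noise levels
rng = np.random.default_rng(3)
n = 100_000
y = rng.normal(0.0, np.sqrt(r), size=n)                 # long record under H0
ll0 = -0.5 * n * np.log(2 * np.pi * r) - (y @ y) / (2 * r)   # iid-noise likelihood
exponent = (ll0 - kalman_loglik(y, a, q, r)) / n        # ~ KL rate D(P0 || P1) > 0
```

The average log-likelihood ratio under H0 converges to the KL divergence rate, which (under the Stein-type identification assumed here) is the miss error exponent at any fixed false-alarm level.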
Risk Assessment of Security and Protection Networks Based on Risk Entropy and the Neyman-Pearson Criterion  [PDF]
胡瑞敏, 吕海涛, 陈军
自动化学报 (Acta Automatica Sinica) , 2014, DOI: 10.3724/SP.J.1004.2014.02737
Abstract: To cope with a severe and complex security situation, China protects public safety by deploying security and protection systems. Such a system is a complex system composed of people and security equipment, and risk assessment is an important measure of its protective effectiveness. This paper abstracts a security and protection system deployed over a region as a network of security nodes. Based on entropy theory and the Neyman-Pearson criterion, it proposes a model that quantitatively measures the risk of the system through its weakest protection path, derives an expression for the weakest path of the security network, and gives a method for computing that path based on Dijkstra's shortest-path algorithm. Finally, the paper studies how the model parameters and the number of deployed security devices relate to the risk of the network, presents corresponding simulation results, and carries out risk-assessment experiments in real application scenarios. The experimental results show that the proposed model can quantitatively evaluate the risk of multi-node security and protection systems and improves the scientific soundness of the assessment results.
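One plausible reading of the weakest-path computation (a hypothetical sketch: the graph, the per-node detection probabilities, and the evasion-probability model are all invented for illustration) is a Dijkstra search on node weights -log(1 - p_detect), so that the shortest path is the route an intruder is most likely to traverse undetected:

```python
import heapq
import math

def weakest_path(p_detect, edges, source, target):
    """Dijkstra search for the path with the highest probability of evading
    every sensor node on it: maximize prod(1 - p) over path nodes,
    equivalently minimize sum(-log(1 - p))."""
    w = {v: -math.log(1.0 - p) for v, p in p_detect.items()}
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    dist, prev = {source: w[source]}, {}
    heap = [(w[source], source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == target:
            break
        if d > dist.get(u, math.inf):
            continue                      # stale heap entry
        for v in adj.get(u, []):
            nd = d + w[v]
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [target]
    while path[-1] != source:
        path.append(prev[path[-1]])
    return path[::-1], math.exp(-dist[target])   # weakest path, evasion probability

# Toy layout: four sensor nodes with illustrative detection probabilities.
p_detect = {"A": 0.1, "B": 0.6, "C": 0.2, "D": 0.1}
edges = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]
path, evasion = weakest_path(p_detect, edges, "A", "D")
```

Here the route avoiding the strong sensor B is the weakest protection path; the paper's actual model couples this with risk entropy and the Neyman-Pearson criterion to set the node probabilities.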
Radar detection with the Neyman-Pearson criterion using supervised learning machines trained with the cross-entropy error
María Pilar Jarabo-Amores, David de la Mata-Moya, Roberto Gil-Pita and Manuel Rosa-Zurera
EURASIP Journal on Advances in Signal Processing , 2013, DOI: 10.1186/1687-6180-2013-44
Abstract: The application to radar detection of supervised learning machines trained to minimize the cross-entropy error is explored in this article. The detector is implemented with a learning machine that realizes a discriminant function, whose output is compared to a threshold selected to fix a desired probability of false alarm. The study is based on calculating the function that the learning machine approximates during training, and on applying a sufficient condition for a discriminant function to approximate the optimum Neyman-Pearson (NP) detector. In this article, the function that a supervised learning machine approximates after being trained to minimize the cross-entropy error is obtained. This discriminant function can be used to implement the NP detector, which maximizes the probability of detection while keeping the probability of false alarm at or below a predefined value. Some experiments on signal detection using neural networks are also presented to test the validity of the study.
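A minimal sketch of the idea (using plain logistic regression trained by gradient descent on the cross-entropy error as a stand-in for the article's learning machines; all data and settings are invented): with balanced classes, the trained output estimates the posterior P(H1 | x), a strictly increasing function of the likelihood ratio, so thresholding it at an empirical quantile of H0 outputs fixes the false-alarm probability.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3000
X0 = rng.normal(0.0, 1.0, size=(n, 2))        # H0: noise only
X1 = rng.normal(1.0, 1.0, size=(n, 2))        # H1: target present
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n), np.ones(n)]

# Logistic model trained with the cross-entropy error; its sigmoid output
# approximates the posterior P(H1 | x).
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))    # sigmoid output
    g = p - y                                  # gradient of the mean cross-entropy
    w -= lr * (X.T @ g) / len(y)
    b -= lr * g.mean()

# Select the threshold as the empirical (1 - pfa) quantile of the outputs
# on held-out H0 data, fixing the desired probability of false alarm.
pfa = 0.05
X0_val = rng.normal(0.0, 1.0, size=(5000, 2))
thr = np.quantile(1.0 / (1.0 + np.exp(-(X0_val @ w + b))), 1.0 - pfa)

X0_test = rng.normal(0.0, 1.0, size=(5000, 2))
X1_test = rng.normal(1.0, 1.0, size=(5000, 2))
pfa_hat = (1.0 / (1.0 + np.exp(-(X0_test @ w + b))) > thr).mean()  # ~ pfa
pd_hat = (1.0 / (1.0 + np.exp(-(X1_test @ w + b))) > thr).mean()   # detection prob.
```

Because the sigmoid is strictly increasing, thresholding the learned output is equivalent to thresholding the underlying score, which is the mechanism that lets a cross-entropy-trained machine implement an NP-type detector.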
Copyright © 2008-2017 Open Access Library. All rights reserved.