Search Results: 1 - 10 of 100 matches for " "
All listed articles are free for downloading (OA Articles)
Page 1 /100
Changing the paradigm of fixed significance levels: Testing Hypothesis by Minimizing Sum of Errors Type I and Type II  [PDF]
Luis Pericchi, Carlos Pereira
Statistics , 2013,
Abstract: Our purpose is to put forward a change in the paradigm of testing by generalizing a very natural idea exposed by Morris DeGroot (1975), aiming at an approach that is attractive to all schools of statistics and a procedure better suited to the needs of science. DeGroot's seminal idea is to base testing of statistical hypotheses on minimizing the weighted sum of type I and type II errors, instead of the prevailing paradigm of fixing the type I error and minimizing the type II error. DeGroot's result is that in simple vs simple hypotheses the optimal criterion is to reject according to the likelihood ratio as the evidence (ordering) statistic, using a fixed threshold value instead of a fixed tail probability. By defining expected type I and type II errors, we generalize DeGroot's approach and find that the optimal region is defined by the ratio of evidences, that is, averaged likelihoods (with respect to a prior measure), and a fixed threshold. This approach yields an optimal theory in complete generality, which the classical theory of testing does not. It can be seen as a Bayes/non-Bayes compromise: the criterion (weighted sum of type I and type II errors) is frequentist, but the test statistic is the ratio of marginalized likelihoods, which is Bayesian. We give arguments for pushing the theory still further, so that the weighting measures (priors) of the likelihoods need not be proper and highly informative, but only predictively matched, that is, priors that give rise to the same evidence (marginal likelihoods) using minimal (smallest) training samples. The theory that emerges, similar to theories based on objective Bayes approaches, is a powerful response to criticisms of the prevailing approach to hypothesis testing; see for example Ioannidis (2005) and Siegfried (2010), among many others.
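DeGroot's criterion for the simple vs simple case can be illustrated numerically: minimizing w0·α + w1·β over rejection cutoffs is equivalent to rejecting when the likelihood ratio exceeds the fixed threshold w0/w1. The sketch below assumes H0: N(0,1) against H1: N(1,1) with equal weights (an illustrative choice, not an example from the paper) and recovers the optimal cutoff c = 0.5, the point where the two densities cross and the likelihood ratio equals 1.

```python
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def weighted_error_sum(c, w0=1.0, w1=1.0):
    """w0*alpha + w1*beta when rejecting H0 for x > c,
    with H0: N(0,1) and H1: N(1,1) (illustrative densities)."""
    alpha = 1.0 - Phi(c)   # type I error: P0(X > c)
    beta = Phi(c - 1.0)    # type II error: P1(X <= c)
    return w0 * alpha + w1 * beta

# Grid search for the cutoff minimizing alpha + beta.
grid = [i / 1000.0 for i in range(-2000, 3000)]
best_c = min(grid, key=weighted_error_sum)
print(round(best_c, 2))  # 0.5, where f1(c)/f0(c) = 1 = w0/w1
```

The minimizer sits exactly where the likelihood ratio equals the weight ratio, which is DeGroot's fixed-threshold rule, in contrast to fixing a tail probability.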
Sensitivity and Specificity Analysis Relation to Statistical Hypothesis Testing and Its Errors: Application to Cryptosporidium Detection Techniques  [PDF]
Emmanuel de-Graft Johnson Owusu-Ansah, Angelina Sampson, Amponsah K. Samuel, Abaidoo Robert
Open Journal of Applied Sciences (OJAppS) , 2016, DOI: 10.4236/ojapps.2016.64022
Abstract: The use of the statistical hypothesis testing procedure to determine type I and type II errors was linked to the measurement of sensitivity and specificity in clinical trial tests and experimental pathogen detection techniques. A theoretical analysis of establishing these types of errors was made and compared to the determination of false positives, false negatives, true positives and true negatives. Experimental laboratory detection methods used to detect Cryptosporidium spp. were used to highlight the relationship between hypothesis testing, sensitivity, specificity and predictive values. The study finds that sensitivity and specificity for the two laboratory methods used for Cryptosporidium detection were low, hence lowering the probability of detecting a “false null hypothesis” for the presence of Cryptosporidium in the water samples using either microscopy or PCR. Nevertheless, both procedures for Cryptosporidium detection had higher “true negatives”, increasing their probability of failing to reject a “true null hypothesis”, with a specificity of 1.00 for both the microscopic and PCR laboratory detection methods.
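The correspondence this abstract describes can be made concrete: with H0 = "pathogen absent", a false positive is a type I error and a false negative is a type II error, so α = 1 − specificity and β = 1 − sensitivity. The sketch below uses made-up counts purely for illustration, not the study's Cryptosporidium data.

```python
def diagnostic_summary(tp, fp, tn, fn):
    """Relate a 2x2 detection table to hypothesis-testing error rates.
    H0: pathogen absent. A false positive is a type I error,
    a false negative is a type II error."""
    sensitivity = tp / (tp + fn)  # P(test+ | present) = 1 - beta
    specificity = tn / (tn + fp)  # P(test- | absent)  = 1 - alpha
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        "type_I_rate": 1 - specificity,
        "type_II_rate": 1 - sensitivity,
    }

# Hypothetical counts, for illustration only:
summary = diagnostic_summary(tp=18, fp=2, tn=38, fn=12)
print(summary)  # sensitivity 0.60, specificity 0.95
```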

- , 2018,
Abstract: In this paper, we study hypothesis testing for the homogeneity of the Markov chain of the errors in linear models. Using quasi-maximum likelihood estimates (QMLEs) of the unknown parameters and martingale-difference methods, the limiting distribution of the likelihood ratio test statistic is obtained.
Capturing the Severity of Type II Errors in High-Dimensional Multiple Testing  [PDF]
Li He, Sanat K. Sarkar, Zhigen Zhao
Statistics , 2014,
Abstract: The severity of type II errors is frequently ignored when deriving a multiple testing procedure, even though utilizing it properly can greatly help in making correct decisions. This paper puts forward a theory behind developing a multiple testing procedure that can incorporate the type II error severity and is optimal in the sense of minimizing a measure of false non-discoveries among all procedures controlling a measure of false discoveries. The theory is developed under a general model allowing arbitrary dependence by taking a compound decision theoretic approach to multiple testing with a loss function incorporating the type II error severity. We present this optimal procedure in its oracle form and offer numerical evidence of its superior performance over relevant competitors.
Generalizations related to hypothesis testing with the Posterior distribution of the Likelihood Ratio  [PDF]
I. Smith, A. Ferrari
Statistics , 2014,
Abstract: The Posterior distribution of the Likelihood Ratio (PLR) was proposed by Dempster in 1974 for significance testing in the simple vs composite hypotheses case. In this case, classical frequentist and Bayesian hypothesis tests are irreconcilable, as emphasized by Lindley's paradox, by Berger & Sellke in 1987, and by many others. However, Dempster shows that the PLR (with inner threshold 1) is equal to the frequentist p-value in the simple Gaussian case. In 1997, Aitkin extended this result by adding a nuisance parameter and showing its asymptotic validity under more general distributions. Here we extend the reconciliation between the PLR and a frequentist p-value to finite samples, through a framework analogous to that of Stein's theorem, in which a credible (Bayesian) domain coincides with a confidence (frequentist) domain. This general reconciliation result concerns only simple vs composite hypothesis testing. The measures proposed by Aitkin in 2010 and Evans in 1997 have interesting properties and extend Dempster's PLR, but only by adding a nuisance parameter. Here we propose two extensions of the PLR concept to the general composite vs composite hypothesis test. The first extension can be defined for improper priors as soon as the posterior is proper. The second extension arises from a new Bayesian-type Neyman-Pearson lemma and emphasizes, from a Bayesian perspective, the role of the LR as a discrepancy variable for hypothesis testing.
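Dempster's equality between the PLR (inner threshold 1) and the frequentist p-value in the simple Gaussian case can be checked by simulation. The sketch below assumes a single observation x ~ N(θ, 1), a flat prior (so the posterior of θ is N(x, 1)), and the convention PLR = P(L(θ0)/L(θ) ≥ 1 | x); the observed value x = 1.7 is an arbitrary illustration.

```python
import random
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

random.seed(0)
x = 1.7        # one observation from N(theta, 1); test H0: theta = 0
theta0 = 0.0

# Posterior of theta under a flat prior is N(x, 1).
draws = [random.gauss(x, 1.0) for _ in range(200_000)]

def log_lr(theta):
    """log of L(theta0)/L(theta) for a single N(theta, 1) observation x."""
    return -0.5 * (x - theta0) ** 2 + 0.5 * (x - theta) ** 2

# PLR with inner threshold 1: posterior probability that LR >= 1.
plr = sum(log_lr(t) >= 0 for t in draws) / len(draws)
p_value = 2 * Phi(-abs(x - theta0))  # two-sided frequentist p-value

print(round(plr, 3), round(p_value, 3))  # the two agree up to MC error
```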
Yuri Gulbin
Image Analysis and Stereology , 2008, DOI: 10.5566/ias.v27.p163-174
Abstract: The paper considers the problem of the validity of unfolding the grain size distribution with the back-substitution method. Due to the ill-conditioned nature of unfolding matrices, it is necessary to evaluate the accuracy and precision of parameter estimation and to verify the possibility of testing an expected grain size distribution on the basis of intersection size histogram data. To address these questions, computer modeling was used to compare size distributions obtained stereologically with those possessed by three-dimensional model aggregates of grains with a specified shape and random size. Results of the simulations are reported and ways of improving the conventional stereological techniques are suggested. It is shown that new improvements in the estimating and testing procedures enable grain size distributions to be unfolded more efficiently.
On Multiple Hypothesis Testing with Rejection Option  [PDF]
Naira Grigoryan, Ashot Harutyunyan, Svyatoslav Voloshynovskiy, Oleksiy Koval
Mathematics , 2011,
Abstract: We study the problem of multiple hypothesis testing (HT) in view of a rejection option. This model of HT has many different applications. Errors in testing M hypotheses regarding the source distribution, with an option of rejecting all of those hypotheses, are considered. The source is discrete and arbitrarily varying (AVS). The tradeoffs among the error probability exponents/reliabilities associated with false acceptance of the rejection decision and false rejection of the true distribution are investigated, and the optimal decision strategies are outlined. The main result is specialized to discrete memoryless sources (DMS) and studied further. An interesting insight implied by the analysis is the phenomenon (comprehensible in terms of supervised/unsupervised learning) that optimal discrimination among M hypothetical distributions always permits a lower error than the decision to reject the whole set of hypotheses. Geometric interpretations of the optimal decision schemes are given for the bounds obtained here and for known bounds in multiple HT for AVSs.
Testing the Ideal Free Distribution Hypothesis: Moose Response to Changes in Habitat Amount  [PDF]
Abbie Stewart, Petr E. Komers
ISRN Ecology , 2012, DOI: 10.5402/2012/945209
Abstract: According to the ideal free distribution hypothesis, the density of organisms is expected to remain constant across a range of habitat availability, provided that organisms are ideal, selecting habitat patches that maximize resource access, and free, implying no constraints associated with patch choice. The influence of the amount of habitat on moose (Alces alces) pellet group density, as an index of moose occurrence, was assessed within the Foothills Natural Region, Alberta, Canada, using a binary patch-matrix approach. Fecal pellet density was compared across 45 sites representing a gradient in habitat amount. Pellet density in moose habitat increased in a linear or quadratic relationship with mean moose habitat patch size. Moose pellet density decreased faster than what would be expected from a decrease in habitat amount alone. This change in pellet group density with habitat amount may be because one or both of the assumptions of the ideal free distribution hypothesis were violated.

1. Introduction One of the basic tenets of ecology is to understand the distribution of organisms. The ideal free distribution (IFD) theory [1] relates the distribution of organisms to the availability of resources, specifically describing the equilibrium distribution between the amount of resources and the abundance of organisms. Assumptions associated with the IFD are that organisms are ideal, selecting patches that maximize resource access, and free, implying that there are no constraints associated with patch choice [1, 2]. Within this framework, the IFD predicts that the number of individuals present in habitats or patches is proportional to the amount of resources available [1, 2]. In doing so, the density of organisms is expected to remain constant per unit of habitat, regardless of the amount of habitat available or the habitat configuration, provided that access and quality of habitat remain constant.
Work with simulated landscapes has established predictions for the relationships between landscape configuration metrics, which measure the spatial arrangement of habitat, and the amount of habitat in the landscape [3–8]. Many of these relationships have been found to change non-linearly with changes in the amount of habitat cover, often with abrupt shifts or thresholds in the relationships. This suggests that there may be discontinuous changes in ecosystem functioning in relation to habitat loss [9], such that organism occurrence in the landscape may be affected by both habitat amount and fragmentation. These conceptual frameworks lead to
Testing the suitability of polynomial models in errors-in-variables problems  [PDF]
Peter Hall, Yanyuan Ma
Mathematics , 2008, DOI: 10.1214/009053607000000361
Abstract: A low-degree polynomial model for a response curve is used commonly in practice. It generally incorporates a linear or quadratic function of the covariate. In this paper we suggest methods for testing the goodness of fit of a general polynomial model when there are errors in the covariates. There, the true covariates are not directly observed, and conventional bootstrap methods for testing are not applicable. We develop a new approach, in which deconvolution methods are used to estimate the distribution of the covariates under the null hypothesis, and a "wild" or moment-matching bootstrap argument is employed to estimate the distribution of the experimental errors (distinct from the distribution of the errors in covariates). Most of our attention is directed at the case where the distribution of the errors in covariates is known, although we also discuss methods for estimation and testing when the covariate error distribution is estimated. No assumptions are made about the distribution of experimental error, and, in particular, we depart substantially from conventional parametric models for errors-in-variables problems.
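The "wild" bootstrap invoked in this abstract can be sketched in its simplest (Rademacher-weight) form, where each residual keeps its magnitude but receives an independent random sign. This is a generic illustration of the wild-bootstrap idea, not the paper's full deconvolution procedure, and all names and numbers below are made up.

```python
import random

def wild_bootstrap_sample(fitted, residuals, rng):
    """One wild-bootstrap resample of the responses: keep the fitted
    values and multiply each residual by an independent Rademacher
    weight (+1 or -1, each with probability 1/2), preserving each
    residual's magnitude and the residuals' first two moments."""
    return [f + r * rng.choice((-1.0, 1.0))
            for f, r in zip(fitted, residuals)]

rng = random.Random(42)
fitted = [1.2, 1.9, 3.1, 4.0]       # toy fitted polynomial values
residuals = [0.3, -0.1, 0.2, -0.4]  # toy residuals
resample = wild_bootstrap_sample(fitted, residuals, rng)
print(resample)  # each entry is fitted[i] +/- residuals[i]
```

Repeating this resampling and refitting the model gives the bootstrap distribution of a test statistic without assuming a parametric form for the experimental errors.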
Setting an Optimal α That Minimizes Errors in Null Hypothesis Significance Tests  [PDF]
Joseph F. Mudge, Leanne F. Baker, Christopher B. Edge, Jeff E. Houlahan
PLOS ONE , 2012, DOI: 10.1371/journal.pone.0032734
Abstract: Null hypothesis significance testing has been under attack in recent years, partly owing to the arbitrary nature of setting α (the decision-making threshold and probability of Type I error) at a constant value, usually 0.05. If the goal of null hypothesis testing is to present conclusions in which we have the highest possible confidence, then the only logical decision-making threshold is the value that minimizes the probability (or occasionally, cost) of making errors. Setting α to minimize the combination of Type I and Type II error at a critical effect size can easily be accomplished for traditional statistical tests by calculating the α associated with the minimum average of α and β at the critical effect size. This technique also has the flexibility to incorporate prior probabilities of null and alternate hypotheses and/or relative costs of Type I and Type II errors, if known. Using an optimal α results in stronger scientific inferences because it estimates and minimizes both Type I errors and relevant Type II errors for a test. It also results in greater transparency concerning assumptions about relevant effect size(s) and the relative costs of Type I and II errors. By contrast, the use of α = 0.05 results in arbitrary decisions about what effect sizes will likely be considered significant, if real, and results in arbitrary amounts of Type II error for meaningful potential effect sizes. We cannot identify a rationale for continuing to arbitrarily use α = 0.05 for null hypothesis significance tests in any field, when it is possible to determine an optimal α.
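The procedure described, choosing α to minimize the average of α and β at a critical effect size, is easy to reproduce for a z-test. The sketch below assumes a one-sided one-sample z-test with known σ; the critical effect size (0.5σ) and sample size (n = 25) are illustrative choices, not values from the paper.

```python
from statistics import NormalDist

Z = NormalDist()  # standard normal

def avg_error(alpha, effect, n, sigma=1.0):
    """Average of alpha and beta for a one-sided one-sample z-test
    of H0: mu = 0 against the critical effect size `effect`."""
    z_crit = Z.inv_cdf(1.0 - alpha)
    beta = Z.cdf(z_crit - effect * n ** 0.5 / sigma)  # type II error
    return (alpha + beta) / 2.0

def optimal_alpha(effect, n, sigma=1.0, grid=10_000):
    """Grid search for the alpha minimizing (alpha + beta)/2."""
    alphas = [i / grid for i in range(1, grid)]
    return min(alphas, key=lambda a: avg_error(a, effect, n, sigma))

# For a critical effect of 0.5 sigma and n = 25, the optimal alpha
# balances the two error types instead of fixing 0.05.
a_opt = optimal_alpha(effect=0.5, n=25)
print(round(a_opt, 3), round(avg_error(a_opt, 0.5, 25), 3))  # ~0.106 each
```

Relative costs or prior probabilities of the hypotheses, when known, can be folded in by weighting the two terms in `avg_error` instead of averaging them equally.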

Copyright © 2008-2017 Open Access Library. All rights reserved.