Abstract:
In this paper, we develop a general theory of the coverage probability of random intervals defined in terms of discrete random variables with continuous parameter spaces. The theory shows that the minimum coverage probability of a random interval with respect to the corresponding parameter is attained on a discrete finite set, and that the coverage probability is continuous and unimodal as the parameter varies between interval endpoints. The theory applies to common important discrete random variables, including the binomial, Poisson, negative binomial, and hypergeometric random variables. It can be used to make relevant statistical inference more rigorous and less conservative.
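The structure described above can be seen numerically. As an illustrative sketch of my own (not the paper's construction), the following computes the exact coverage probability of the Wilson score interval for a binomial parameter by summing binomial probabilities over the outcomes whose intervals contain theta:

```python
import math

Z = 1.959964  # 97.5% standard-normal quantile, for a nominal 95% interval

def wilson(k, n, z=Z):
    """Wilson score interval for a binomial proportion."""
    p = k / n
    denom = 1 + z * z / n
    centre = p + z * z / (2 * n)
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (centre - half) / denom, (centre + half) / denom

def coverage(theta, n):
    """Exact coverage probability at theta: sum the binomial pmf over
    the outcomes k whose interval contains theta."""
    total = 0.0
    for k in range(n + 1):
        lo, hi = wilson(k, n)
        if lo <= theta <= hi:
            total += math.comb(n, k) * theta ** k * (1 - theta) ** (n - k)
    return total
```

Plotting `coverage(theta, n)` over a grid of theta values shows a piecewise-smooth curve whose jumps occur only at interval endpoints, consistent with the behavior the theory characterizes.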

Abstract:
Posttest probability is one of the key parameters that can be measured and interpreted in dichotomous diagnostic tests. Defined as the proportion of patients who present a particular test result and have the target disorder, the posttest probability is used in assessing the efficiency of a diagnostic test. As a point estimate, the posttest probability needs a confidence interval in order to interpret the trustworthiness or robustness of the finding. Unfortunately, no confidence intervals for posttest probability have been reported in the literature. The aim of this paper is to introduce six methods, named Wilson, Logit, LogitC, BayesF, Jeffreys, and Binomial, for computing confidence intervals for posttest probability and to present their performance. The methods were implemented in the PHP language. The performance of each method for different sample sizes and different values of the binomial variable was assessed using a set of criteria: first, the average of the experimental errors and standard deviations; second, the deviation relative to the imposed significance level (α = 5%); and third, the behavior of the methods as the sample size varies from 4 to 103, and on random samples with random binomial variables in the 4..1000 domain. The results of the experiments show that the Binomial method obtains the best performance in computing confidence intervals for posttest probability for sample sizes of 36 and above.
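As one concrete example of the family of methods named above, a logit-transform interval for a proportion can be sketched as follows. This is a standard delta-method construction; whether it matches the paper's "Logit" method exactly is my assumption, and the function name is mine:

```python
import math

def logit_interval(k, n, z=1.959964):
    """Approximate 95% interval for a proportion k/n on the logit scale
    (standard delta-method construction; assumes 0 < k < n)."""
    lam = math.log(k / (n - k))        # logit of the point estimate
    se = math.sqrt(n / (k * (n - k)))  # delta-method standard error of the logit
    expit = lambda t: 1 / (1 + math.exp(-t))
    # Transform back so the interval always lies strictly inside (0, 1)
    return expit(lam - z * se), expit(lam + z * se)
```

Working on the logit scale guarantees endpoints inside (0, 1), which a plain Wald interval does not.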

Abstract:
Adaptive confidence intervals for regression functions are constructed under the shape constraints of monotonicity and convexity. A natural benchmark is established for the minimum expected length of confidence intervals at a given function in terms of an analytic quantity, the local modulus of continuity. This bound depends not only on the function but also on the assumed function class. These benchmarks show that the constructed confidence intervals have near-minimum expected length for each individual function, while maintaining a given coverage probability for functions within the class. Such adaptivity is much stronger than adaptive minimaxity over a collection of large parameter spaces.

Abstract:
Arithmetic constraints on integer intervals are supported in many constraint programming systems. We study here a number of approaches to implement constraint propagation for these constraints. To describe them we introduce integer interval arithmetic. Each approach is explained using appropriate proof rules that reduce the variable domains. We compare these approaches using a set of benchmarks. For the most promising approach we provide results that characterize the effect of constraint propagation. This is a full version of our earlier paper, cs.PL/0403016.
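As an illustration of the kind of machinery involved (a minimal sketch of my own, not the paper's implementation), integer interval arithmetic together with a fixpoint loop applying the domain-reduction rules for the constraint x + y = z might look like:

```python
class IInt:
    """Closed integer interval [lo, hi] (lo > hi denotes the empty interval)."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, o):
        return IInt(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o):
        return IInt(self.lo - o.hi, self.hi - o.lo)
    def __and__(self, o):  # intersection
        return IInt(max(self.lo, o.lo), min(self.hi, o.hi))
    def __eq__(self, o):
        return (self.lo, self.hi) == (o.lo, o.hi)
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def propagate_sum(x, y, z):
    """Reduce the domains of x, y, z under the constraint x + y = z,
    applying x := x ∩ (z - y), y := y ∩ (z - x), z := z ∩ (x + y)
    until a fixpoint is reached."""
    while True:
        x2, y2, z2 = x & (z - y), y & (z - x), z & (x + y)
        if (x2, y2, z2) == (x, y, z):
            return x, y, z
        x, y, z = x2, y2, z2
```

Each rule intersects a variable's domain with the interval-arithmetic evaluation of the constraint solved for that variable; iterating to a fixpoint mirrors the proof-rule style used in the paper.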

Abstract:
Suppose that X_1,X_2,...,X_n are independent and identically Bernoulli(theta) distributed. Also suppose that our aim is to find an exact confidence interval for theta that is the intersection of a 1-\alpha/2 upper confidence interval and a 1-\alpha/2 lower confidence interval. The Clopper-Pearson interval is the standard such confidence interval for theta, which is widely used in practice. We consider the randomized confidence interval of Stevens (1950) and present some extensions, including pseudorandomized confidence intervals. We also consider the "data-randomized" confidence interval of Korn (1987) and point out some additional attractive features of this interval. Finally, we contribute to the discussion about the practical use of such confidence intervals.
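The non-randomized Clopper-Pearson interval can be computed directly from its defining tail equations: the lower limit solves P(X >= k | theta) = alpha/2 and the upper limit solves P(X <= k | theta) = alpha/2. The sketch below is my own illustration, using bisection on the binomial CDF rather than beta quantiles:

```python
import math

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) interval via bisection on [0, 1]."""
    def bisect(f, increasing):
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            if (f(mid) < 0) == increasing:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # Lower limit: P(X >= k | theta) = alpha/2 (f increasing in theta)
    lower = 0.0 if k == 0 else bisect(
        lambda t: (1 - binom_cdf(k - 1, n, t)) - alpha / 2, True)
    # Upper limit: P(X <= k | theta) = alpha/2 (f decreasing in theta)
    upper = 1.0 if k == n else bisect(
        lambda t: binom_cdf(k, n, t) - alpha / 2, False)
    return lower, upper
```

For k = 0 the upper limit reduces to the closed form 1 - (alpha/2)^(1/n), which gives a quick sanity check on the bisection.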

Abstract:
Bayesian highest posterior density (HPD) intervals can be estimated directly from simulations via empirical shortest intervals. Unfortunately, these can be noisy (that is, have a high Monte Carlo error). We derive an optimal weighting strategy using bootstrap and quadratic programming to obtain a more computationally stable HPD, or in general, shortest probability interval (Spin). We prove the consistency of our method. Simulation studies on a range of theoretical and real-data examples, some with symmetric and some with asymmetric posterior densities, show that intervals constructed using Spin have better coverage (relative to the posterior distribution) and lower Monte Carlo error than empirical shortest intervals. We implement the new method in an R package (SPIn) so it can be routinely used in post-processing of Bayesian simulations.
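The empirical shortest interval that Spin improves upon is simple to state: among all intervals containing a fraction `prob` of the sorted draws, take the narrowest. A minimal sketch (my own illustration, not the SPIn package's implementation):

```python
import math
import random

def empirical_shortest_interval(draws, prob=0.95):
    """Shortest interval containing a fraction `prob` of the draws.
    Sort the draws, then slide a window of the required size and
    keep the placement with the smallest width."""
    xs = sorted(draws)
    n = len(xs)
    m = int(math.ceil(prob * n))  # number of draws inside the interval
    best = min(range(n - m + 1), key=lambda i: xs[i + m - 1] - xs[i])
    return xs[best], xs[best + m - 1]
```

The endpoints are order statistics, which is exactly why the estimate is noisy: small perturbations of the draws can move the optimal window, and Spin's weighting is designed to damp that Monte Carlo variability.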

Abstract:
The weak Bruhat order on $ { \mathcal S }_n $ is the partial order $\prec$ so that $\sigma \prec \tau$ whenever the set of inversions of $\sigma$ is a subset of the set of inversions of $\tau$. We investigate the time complexity of computing the size of intervals with respect to $\prec$. Using relationships between two-dimensional posets and the weak Bruhat order, we show that the size of the interval $ [ \sigma_1, \sigma_2 ]$ can be computed in polynomial time whenever $\sigma_1^{-1} \sigma_2$ has bounded width (length of its longest decreasing subsequence) or bounded intrinsic width (maximum width of any non-monotone permutation in its block decomposition). Since permutations of intrinsic width $1$ are precisely the separable permutations, this greatly extends a result of Wei. Additionally, we show that, for large $n$, all but a vanishing fraction of permutations $ \sigma$ in $ { \mathcal S }_n$ give rise to intervals $ [ id , \sigma ]$ whose sizes can be computed with a sub-exponential time algorithm. The general question of the difficulty of computing the size of arbitrary intervals remains open.
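For small n, interval sizes in the weak order can be checked by brute force directly from the inversion-set definition (an illustrative sketch only; the polynomial and sub-exponential algorithms of the paper are far more efficient):

```python
from itertools import permutations

def inversions(sigma):
    """Inversion set of a 0-indexed permutation: position pairs (i, j)
    with i < j and sigma[i] > sigma[j]."""
    n = len(sigma)
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if sigma[i] > sigma[j]}

def weak_interval_size(s1, s2):
    """Number of tau with inv(s1) ⊆ inv(tau) ⊆ inv(s2).
    Brute force over all of S_n; feasible only for small n."""
    a, b = inversions(s1), inversions(s2)
    if not a <= b:
        return 0  # s1 does not precede s2 in the weak order
    return sum(1 for tau in permutations(range(len(s1)))
               if a <= inversions(tau) <= b)
```

Such an exhaustive count is useful for validating faster algorithms on small cases; the interval [id, w_0] for the reversal w_0 recovers all of S_n.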

Abstract:
We propose here a number of approaches to implement constraint propagation for arithmetic constraints on integer intervals. To this end we introduce integer interval arithmetic. Each approach is explained using appropriate proof rules that reduce the variable domains. We compare these approaches using a set of benchmarks.

Abstract:
This article concerns the construction of confidence intervals for the prevalence of a rare disease using Dorfman’s pooled testing procedure when the disease status is classified with an imperfect biomarker. Such an interval can be derived by converting a confidence interval for the probability that a group tests positive. Wald confidence intervals based on a normal approximation are shown to be inefficient in terms of coverage probability, even for a relatively large number of pools. A few alternatives are proposed, and their performance is investigated in terms of coverage probability and interval length.
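The conversion mentioned above is monotone, so a confidence interval for the group-positivity probability maps endpoint-wise to one for the prevalence. A sketch under the standard Dorfman-pooling model with pool size k, sensitivity Se, and specificity Sp (my own illustration; the parameter names and the specific model inversion are assumptions, not taken from the paper):

```python
def prevalence_from_pool_prob(p, k, se=1.0, sp=1.0):
    """Invert p = Se*(1 - (1-pi)^k) + (1-Sp)*(1-pi)^k for the
    prevalence pi, where p is the probability a pool of size k
    tests positive."""
    q = (se - p) / (se + sp - 1)  # q = (1 - pi)^k
    return 1 - q ** (1 / k)

def prevalence_interval(p_lo, p_hi, k, se=1.0, sp=1.0):
    """p is increasing in pi (when Se + Sp > 1), so a CI for p
    transforms endpoint-wise into a CI for pi."""
    return (prevalence_from_pool_prob(p_lo, k, se, sp),
            prevalence_from_pool_prob(p_hi, k, se, sp))
```

With a perfect test (Se = Sp = 1) this reduces to the familiar pi = 1 - (1 - p)^(1/k).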

Abstract:
This paper deals with the following problem: modify a Bayesian network to satisfy a given set of probability constraints by changing only its conditional probability tables, such that the probability distribution of the resulting network is as close as possible to that of the original network. We propose to solve this problem by extending IPFP (the iterative proportional fitting procedure) to probability distributions represented by Bayesian networks. The resulting algorithm, E-IPFP, is further developed into D-IPFP, which reduces the computational cost by decomposing a global E-IPFP into a set of smaller local E-IPFP problems. Limited analysis is provided, including convergence proofs for the two algorithms. Computer experiments were conducted to validate the algorithms; the results are consistent with the theoretical analysis.
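The core fitting operation that IPFP iterates can be sketched on a full joint table (the paper's E-IPFP works on the Bayesian-network factorization instead; this is only the underlying step, in my own formulation):

```python
def ipfp_step(joint, constraint, constrained_vars, all_vars):
    """One IPFP step: rescale `joint` so that its marginal over
    `constrained_vars` matches `constraint`, leaving the conditional
    distribution given those variables unchanged.

    joint: dict mapping full assignments (tuples) to probabilities.
    constraint: dict mapping assignments of the constrained vars
    to their target marginal probabilities."""
    idx = [all_vars.index(v) for v in constrained_vars]
    # Current marginal over the constrained variables
    marg = {}
    for a, p in joint.items():
        key = tuple(a[i] for i in idx)
        marg[key] = marg.get(key, 0.0) + p
    # Multiply each entry by target / current for its marginal cell
    return {a: p * constraint[tuple(a[i] for i in idx)]
               / marg[tuple(a[i] for i in idx)]
            for a, p in joint.items()}
```

Cycling such steps over all constraints converges (under consistency conditions) to the I-projection of the original distribution onto the constraint set, which is the closeness criterion E-IPFP inherits.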