Search Results: 1 - 10 of 100 matches
All listed articles are free for downloading (OA Articles)
On root mean square approximation by exponential functions  [PDF]
Ruslan Sharipov
Mathematics , 2014,
Abstract: The problem of root mean square approximation of a square integrable function by finite linear combinations of exponential functions is considered. It is subdivided into linear and nonlinear parts. The linear approximation problem is solved. The nonlinear problem is then studied in a particular example.
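A minimal sketch of the linear subproblem described above: once the exponents are fixed, the best mean-square coefficients solve a small normal-equations system. The target function, grid size, and exponents below are illustrative choices, not taken from the paper.

```python
import math

def fit_two_exponentials(f, l1, l2, n=1000):
    """Least-squares fit of c1*exp(-l1*x) + c2*exp(-l2*x) to f on [0, 1],
    discretized on an n-point grid (the linear subproblem: exponents fixed)."""
    xs = [i / (n - 1) for i in range(n)]
    b1 = [math.exp(-l1 * x) for x in xs]
    b2 = [math.exp(-l2 * x) for x in xs]
    fv = [f(x) for x in xs]
    # Normal equations for the 2x2 system G c = r
    g11 = sum(a * a for a in b1)
    g12 = sum(a * b for a, b in zip(b1, b2))
    g22 = sum(b * b for b in b2)
    r1 = sum(a * y for a, y in zip(b1, fv))
    r2 = sum(b * y for b, y in zip(b2, fv))
    det = g11 * g22 - g12 * g12
    c1 = (g22 * r1 - g12 * r2) / det
    c2 = (g11 * r2 - g12 * r1) / det
    return c1, c2

def rms_error(f, c1, c2, l1, l2, n=1000):
    """Discrete root mean square error of the fitted combination."""
    xs = [i / (n - 1) for i in range(n)]
    return math.sqrt(sum((f(x) - c1 * math.exp(-l1 * x) - c2 * math.exp(-l2 * x)) ** 2
                         for x in xs) / n)
```

When f itself lies in the span of the chosen exponentials, the fit recovers it exactly up to floating-point error.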
Performance Evaluation of Percent Root Mean Square Difference for ECG Signals Compression.  [PDF]
Fazly Salleh Abas,Rizwan Javaid,Rosli Besar
Signal Processing : An International Journal , 2008,
Abstract: Electrocardiogram (ECG) signal compression plays a vital role in biomedical applications. Compression detects and removes redundant information from the ECG signal. Wavelet transform methods are powerful tools for signal and image compression and decompression. This paper presents a comparative study of ECG signal compression with and without a preprocessing stage on the ECG data. The performance and efficiency results are presented in terms of percent root mean square difference (PRD). Finally, a new PRD technique is proposed for performance measurement and compared with the existing PRD technique; the proposed technique achieves a lower PRD value with improved results.
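For reference, the classical PRD metric mentioned above has a standard definition; a mean-normalized variant (often called PRDN) is also in common use. The sketch below shows both; whether the paper's proposed "new PRD" coincides with the normalized variant is an assumption not confirmed by the abstract.

```python
import math

def prd(original, reconstructed):
    """Classical percent root-mean-square difference:
    100 * sqrt(sum((x - y)^2) / sum(x^2))."""
    num = sum((x - y) ** 2 for x, y in zip(original, reconstructed))
    den = sum(x ** 2 for x in original)
    return 100.0 * math.sqrt(num / den)

def prdn(original, reconstructed):
    """Mean-normalized variant (commonly called PRDN): the denominator uses
    deviations from the signal mean, removing baseline dependence."""
    m = sum(original) / len(original)
    num = sum((x - y) ** 2 for x, y in zip(original, reconstructed))
    den = sum((x - m) ** 2 for x in original)
    return 100.0 * math.sqrt(num / den)
```

The normalized form matters for ECG because a large DC baseline in the raw signal can make the classical PRD look artificially small.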
Proton root-mean-square radii and electron scattering  [PDF]
Ingo Sick,Dirk Trautmann
Physics , 2014, DOI: 10.1103/PhysRevC.89.012201
Abstract: The standard procedure of extracting the proton root-mean-square radii from models for the Sachs form factors $G_e (q)$ and $G_m (q)$ fitted to elastic electron-proton scattering data is more uncertain than traditionally assumed. The extrapolation of $G(q)$, from the region $q_{min} < q < q_{max}$ covered by data to momentum transfer $q=0$ where the $rms$-radius is obtained, often depends on uncontrolled properties of the parameterization used. Only when ensuring that the corresponding densities have a physical behavior at large radii $r$ can reliable $rms$-radii be determined.
Sharp bounds for Neuman-Sándor's mean in terms of the root-mean-square  [PDF]
Wei-Dong Jiang,Feng Qi
Mathematics , 2013, DOI: 10.1007/s10998-014-0057-9
Abstract: In the paper, the authors find sharp bounds for Neuman-S\'andor's mean in terms of the root-mean-square.
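A quick numerical check of the ordering behind these bounds, using the standard definitions NS(a,b) = (a-b)/(2 asinh((a-b)/(a+b))) and the root-mean-square R(a,b) = sqrt((a^2+b^2)/2); the paper's sharp constants are not reproduced here.

```python
import math

def neuman_sandor(a, b):
    """Neuman-Sandor mean NS(a, b); NS(a, a) = a by continuity."""
    if a == b:
        return a
    return (a - b) / (2.0 * math.asinh((a - b) / (a + b)))

def root_mean_square(a, b):
    """Quadratic (root-mean-square) mean of a and b."""
    return math.sqrt((a * a + b * b) / 2.0)
```

Numerically, NS lies strictly between the arithmetic mean and the root-mean-square for distinct positive arguments, which is the ordering the sharp bounds refine.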
Testing Hardy-Weinberg equilibrium with a simple root-mean-square statistic  [PDF]
Rachel Ward,Raymond J. Carroll
Statistics , 2012,
Abstract: We provide evidence that a root-mean-square test of goodness-of-fit can be significantly more powerful than state-of-the-art exact tests in detecting deviations from Hardy-Weinberg equilibrium. Unlike Pearson's chi-square test, the log--likelihood-ratio test, and Fisher's exact test, which are sensitive to relative discrepancies between genotypic frequencies, the root-mean-square test is sensitive to absolute discrepancies. This can increase statistical power, as we demonstrate using benchmark datasets and through asymptotic analysis. With the aid of computers, exact P-values for the root-mean-square statistic can be calculated effortlessly, and can be easily implemented using the authors' freely available code.
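A minimal sketch of the statistic in question, on the frequency scale: the root-mean-square distance between observed genotype frequencies and the Hardy-Weinberg expectations computed from the estimated allele frequency. The exact scaling used in the paper (counts vs. frequencies) may differ; this is illustrative only, and P-values would still need to be calibrated by exact or Monte Carlo methods.

```python
import math

def hwe_rms(n_AA, n_Aa, n_aa):
    """Root-mean-square discrepancy between observed genotype frequencies
    and Hardy-Weinberg expectations, with the allele frequency estimated
    from the sample as p = (2*n_AA + n_Aa) / (2n)."""
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2.0 * n)
    expected = [p * p, 2 * p * (1 - p), (1 - p) * (1 - p)]
    observed = [n_AA / n, n_Aa / n, n_aa / n]
    return math.sqrt(sum((o - e) ** 2 for o, e in zip(observed, expected)) / 3.0)
```

The statistic vanishes for a sample exactly in equilibrium and responds to absolute, not relative, departures from the expected frequencies.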
Computing the confidence levels for a root-mean-square test of goodness-of-fit, II  [PDF]
William Perkins,Mark Tygert,Rachel Ward
Statistics , 2010,
Abstract: This paper extends our earlier article, "Computing the confidence levels for a root-mean-square test of goodness-of-fit;" unlike in the earlier article, the models in the present paper involve parameter estimation -- both the null and alternative hypotheses in the associated tests are composite. We provide efficient black-box algorithms for calculating the asymptotic confidence levels of a variant on the classic chi-squared test. In some circumstances, it is also feasible to compute the exact confidence levels via Monte Carlo simulation.
Higher order scrambled digital nets achieve the optimal rate of the root mean square error for smooth integrands  [PDF]
Josef Dick
Mathematics , 2010, DOI: 10.1214/11-AOS880
Abstract: We study a random sampling technique to approximate integrals $\int_{[0,1]^s}f(\mathbf{x})\,\mathrm{d}\mathbf{x}$ by averaging the function at some sampling points. We focus on cases where the integrand is smooth, a problem that occurs in statistics. The convergence rate of the approximation error depends on the smoothness of the function $f$ and the sampling technique. For instance, Monte Carlo (MC) sampling yields a convergence of the root mean square error (RMSE) of order $N^{-1/2}$ (where $N$ is the number of samples) for functions $f$ with finite variance. Randomized QMC (RQMC), a combination of MC and quasi-Monte Carlo (QMC), achieves an RMSE of order $N^{-3/2+\varepsilon}$ under the stronger assumption that the integrand has bounded variation. A combination of RQMC with local antithetic sampling achieves a convergence of the RMSE of order $N^{-3/2-1/s+\varepsilon}$ (where $s\ge1$ is the dimension) for functions with mixed partial derivatives up to order two. Additional smoothness of the integrand does not improve the rate of convergence of these algorithms in general. On the other hand, it is known that without additional smoothness of the integrand it is not possible to improve the convergence rate. This paper introduces a new RQMC algorithm, for which we prove that it achieves a convergence of the root mean square error (RMSE) of order $N^{-\alpha-1/2+\varepsilon}$ provided the integrand satisfies the strong assumption that it has square integrable partial mixed derivatives up to order $\alpha>1$ in each variable. Known lower bounds on the RMSE show that this rate of convergence cannot be improved in general for integrands with this smoothness. We provide numerical examples for which the RMSE converges approximately with order $N^{-5/2}$ and $N^{-7/2}$, in accordance with the theoretical upper bound.
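The baseline $N^{-1/2}$ rate for plain Monte Carlo quoted above is easy to observe empirically. The sketch below estimates the RMSE of MC integration over replications; the test integrand and sample sizes are illustrative, and the higher-order scrambled-net constructions of the paper are not implemented here.

```python
import math
import random

def mc_rmse(f, exact, n_samples, n_reps=200, seed=0):
    """Empirical root-mean-square error of plain Monte Carlo integration of
    f over [0, 1], estimated from n_reps independent replications."""
    rng = random.Random(seed)
    sq_err = 0.0
    for _ in range(n_reps):
        est = sum(f(rng.random()) for _ in range(n_samples)) / n_samples
        sq_err += (est - exact) ** 2
    return math.sqrt(sq_err / n_reps)
```

With an $N^{-1/2}$ rate, multiplying the sample size by 16 should shrink the empirical RMSE by roughly a factor of 4.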
Proof of the Conjecture that the Planar Self-Avoiding Walk has Root Mean Square Displacement Exponent 3/4  [PDF]
Irene Hueter
Physics , 2001,
Abstract: This paper proves the long-standing open conjecture rooted in chemical physics (Flory (1949)) that the self-avoiding walk (SAW) in the square lattice has root mean square displacement exponent \nu = 3/4. This value is an instance of the formula \nu = 1 on Z and \nu = max(1/2, 1/4 + 1/d) in Z^d for dimensions d \geq 2, which will be proved in a subsequent paper. This expression differs from the one that Flory's arguments suggested. We consider (a) the point process of self-intersections defined via certain paths of the symmetric simple random walk in Z^2 and (b) a "weakly self-avoiding cone process" relative to this point process when in a certain "shape". We derive results on the asymptotic expected distance of the weakly SAW with parameter \beta > 0 from its starting point, from which a number of distance exponents are immediately collectable for the SAW as well. Our method employs the Palm distribution of the point process of self-intersection points in a cone.
Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model  [PDF]
Amin Zollanvari,Edward R. Dougherty
Statistics , 2013,
Abstract: The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.
P-HGRMS: A Parallel Hypergraph Based Root Mean Square Algorithm for Image Denoising  [PDF]
Tejaswi Agarwal,Saurabh Jha,B. Rajesh Kanna
Computer Science , 2013,
Abstract: This paper presents a parallel Salt and Pepper (SP) noise removal algorithm for grey level digital images based on the Hypergraph Based Root Mean Square (HGRMS) approach. HGRMS is a generic algorithm for identifying noisy pixels in any digital image using a two-level hierarchical serial approach. However, for SP noise removal, we reduce this algorithm to a parallel model by introducing a cardinality matrix and an iteration factor, k, which helps us reduce the dependencies in the existing approach. We also observe that the performance of the serial implementation is better on smaller images, but once the threshold is achieved in terms of image resolution, its computational complexity increases drastically. We test P-HGRMS using standard images from the Berkeley Segmentation dataset on NVIDIA's Compute Unified Device Architecture (CUDA) for noise identification and attenuation. We also compare the noise removal efficiency of the proposed algorithm using Peak Signal to Noise Ratio (PSNR) to the existing approach. P-HGRMS maintains the noise removal efficiency and outperforms its sequential counterpart by 6 to 18 times (6x - 18x) in computational efficiency.
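A highly simplified sequential sketch of the general idea: flag salt-and-pepper pixels at the extreme grey levels and replace each with the root mean square of its non-noisy neighbours. This is illustrative only; it is not the HGRMS algorithm, whose noise identification is based on a hypergraph model, nor its parallel CUDA variant.

```python
import math

def denoise_sp(image, noisy_values=(0, 255)):
    """Replace suspected salt-and-pepper pixels (extreme grey levels) with
    the root mean square of their non-noisy 8-neighbours. Pixels with no
    clean neighbour are left unchanged."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for i in range(h):
        for j in range(w):
            if image[i][j] not in noisy_values:
                continue
            neigh = [image[x][y]
                     for x in range(max(0, i - 1), min(h, i + 2))
                     for y in range(max(0, j - 1), min(w, j + 2))
                     if (x, y) != (i, j) and image[x][y] not in noisy_values]
            if neigh:
                out[i][j] = math.sqrt(sum(v * v for v in neigh) / len(neigh))
    return out
```

The serial double loop over pixels is exactly the dependency-free structure that a parallel formulation maps one thread per pixel.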

Copyright © 2008-2017 Open Access Library. All rights reserved.