Search Results: 1 - 10 of 100 matches
Mathematics, 2005. Abstract: Weighted likelihood, in which one solves Horvitz-Thompson or inverse probability weighted (IPW) versions of the likelihood equations, offers a simple and robust method for fitting models to two-phase stratified samples. We consider semiparametric models for which solution of infinite dimensional estimating equations leads to $\sqrt{N}$ consistent and asymptotically Gaussian estimators of both Euclidean and nonparametric parameters. If the phase-two sample is selected via Bernoulli (i.i.d.) sampling with known sampling probabilities, standard estimating equation theory shows that the influence function for the weighted likelihood estimator of the Euclidean parameter is the IPW version of the ordinary influence function. By proving weak convergence of the IPW empirical process, and borrowing results on weighted bootstrap empirical processes, we derive a parallel asymptotic expansion for finite population stratified sampling. Whereas the asymptotic variance for Bernoulli sampling involves the within-strata second moments of the influence function, for finite population stratified sampling it involves only the within-strata variances. The latter asymptotic variance also arises when the observed sampling fractions are used as estimates of those known a priori. A general procedure is proposed for fitting semiparametric models with estimated weights to two-phase data. Several of our key results have already been derived for the special case of Cox regression with stratified case-cohort studies, as well as for other complex survey designs and missing data problems more generally. This paper is intended to help place this previous work in its appropriate context and to pave the way for applications to other models.
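The weighted-likelihood idea above is easiest to see for the simplest estimating equation, a population mean. Below is a minimal sketch, not the paper's semiparametric method: a two-phase design where phase two is Bernoulli subsampling with known per-stratum probabilities, and the Horvitz-Thompson / IPW estimator reweights each sampled unit by its inverse sampling probability. The strata, probabilities, and data are all illustrative.

```python
import random

random.seed(0)

# Phase one: full cohort, with a cheap stratum variable observed for everyone.
N = 10_000
strata = [0 if i < 7000 else 1 for i in range(N)]
y = [random.gauss(1.0 if s == 0 else 3.0, 1.0) for s in strata]

# Phase two: Bernoulli subsampling with known per-stratum probabilities,
# so the expensive variable y is observed only for sampled units.
pi = {0: 0.1, 1: 0.5}
sampled = [random.random() < pi[s] for s in strata]

# Horvitz-Thompson / IPW estimate of the population mean: each sampled
# unit is weighted by the inverse of its sampling probability.
ht_num = sum(y[i] / pi[strata[i]] for i in range(N) if sampled[i])
ht_den = sum(1.0 / pi[strata[i]] for i in range(N) if sampled[i])
ipw_mean = ht_num / ht_den

true_mean = sum(y) / N  # the (here observable) full-cohort target
```

The same reweighting applied inside the score equations of a likelihood gives the weighted-likelihood estimators the abstract analyzes.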
Stephane Chretien. Statistics, 2009. Abstract: Many experiments in medicine and ecology can be conveniently modeled by finite Gaussian mixtures but face the problem of dealing with small data sets. We propose a robust estimator based on self-regression and sparsity-promoting penalization in order to estimate the components of Gaussian mixtures in such contexts. A space-alternating version of the penalized EM algorithm is obtained and we prove that its cluster points satisfy the Karush-Kuhn-Tucker conditions. Monte Carlo experiments are presented in order to compare the results obtained by our method and by standard maximum likelihood estimation. In particular, our estimator is seen to perform better than the maximum likelihood estimator.
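As context for the penalized variant described above, here is a plain, unpenalized EM iteration for a two-component univariate Gaussian mixture; the paper's self-regression and sparsity penalty would modify the closed-form M-step updates marked below. The data and initial values are illustrative.

```python
import math
import random

random.seed(1)

# Toy data from a well-separated two-component univariate Gaussian mixture.
data = [random.gauss(-2.0, 0.7) for _ in range(150)] + \
       [random.gauss(3.0, 1.0) for _ in range(150)]

def norm_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Initial guesses for weights, means, and scales.
w, mu, sigma = [0.5, 0.5], [-1.0, 1.0], [1.0, 1.0]

for _ in range(50):
    # E-step: responsibility of each component for each observation.
    resp = []
    for x in data:
        p = [w[k] * norm_pdf(x, mu[k], sigma[k]) for k in range(2)]
        s = sum(p)
        resp.append([pk / s for pk in p])
    # M-step: closed-form updates. A penalized EM (as in the abstract)
    # would add the penalty's contribution to these updates.
    for k in range(2):
        nk = sum(r[k] for r in resp)
        w[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        sigma[k] = math.sqrt(
            sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk)
```

With abundant, well-separated data this converges to the true components; the paper's concern is precisely the small-sample regime where this plain iteration becomes unreliable.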
Statistics, 2011. Abstract: Gaussian mixture models are widely used to study clustering problems. These model-based clustering methods require an accurate estimation of the unknown data density by Gaussian mixtures. In Maugis and Michel (2009), a penalized maximum likelihood estimator is proposed for automatically selecting the number of mixture components. In the present paper, we consider a collection of univariate densities whose logarithm is locally β-Hölder, subject to moment and tail conditions. We show that this penalized estimator is minimax adaptive to the β regularity of such densities in the Hellinger sense.
International Journal of Environmental Research and Public Health, 2011, DOI: 10.3390/ijerph8072798. Abstract: The only way for dengue to spread in the human population is through the human-mosquito-human cycle. Most research in this field discusses the dengue-mosquito or dengue-human relationships over a particular study area, but few have explored the local spatial variations of dengue-mosquito and dengue-human relationships within a study area. This study examined whether spatial heterogeneity exists in these relationships. We used Ordinary Least Squares (OLS) and Geographically Weighted Regression (GWR) models to analyze spatial relationships and identify the geographical heterogeneities, using entomological data and dengue case records from the cities of Kaohsiung and Fengshan in 2002. Our findings indicate that the dengue-mosquito and dengue-human relationships were significantly spatially non-stationary. This means that in some areas higher dengue incidences were associated with higher vector/host densities, but in other areas higher incidences were related to lower vector/host densities. We demonstrated that a GWR model can be used to geographically differentiate the relationships of dengue incidence with immature mosquito and human densities. This study provides more insights into spatial targeting of intervention and control programs against dengue outbreaks within the study areas.
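The core of GWR, a local regression at each focal location with observations weighted by a spatial kernel, can be sketched in one dimension. This toy example (synthetic data, Gaussian kernel, fixed bandwidth, all illustrative, not the study's model) shows the fitted slope varying with location, i.e. exactly the kind of spatially non-stationary relationship the study detects.

```python
import math
import random

random.seed(2)

# Synthetic point data: the slope of y on x varies smoothly with location u.
pts = []
for _ in range(200):
    u = random.uniform(0, 10)      # 1-D "location"
    x = random.uniform(0, 1)       # covariate (e.g., vector density)
    slope = 1.0 + 0.5 * u          # spatially varying relationship
    pts.append((u, x, slope * x + random.gauss(0, 0.05)))

def gwr_slope(u0, bandwidth=1.0):
    """Weighted least-squares slope at focal location u0, with a Gaussian
    kernel giving nearby observations more weight (the GWR idea)."""
    w = [math.exp(-0.5 * ((u - u0) / bandwidth) ** 2) for u, _, _ in pts]
    sw = sum(w)
    xbar = sum(wi * x for wi, (_, x, _) in zip(w, pts)) / sw
    ybar = sum(wi * y for wi, (_, _, y) in zip(w, pts)) / sw
    num = sum(wi * (x - xbar) * (y - ybar) for wi, (_, x, y) in zip(w, pts))
    den = sum(wi * (x - xbar) ** 2 for wi, (_, x, _) in zip(w, pts))
    return num / den

# A single OLS fit would report one averaged slope; GWR reveals that the
# local relationship differs across the study area.
slope_low, slope_high = gwr_slope(1.0), gwr_slope(9.0)
```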
Mathematics, 2015. Abstract: Gaussian mixture models are central to classical statistics, widely used in the information sciences, and have a rich mathematical structure. We examine their maximum likelihood estimates through the lens of algebraic statistics. The MLE is not an algebraic function of the data, so there is no notion of ML degree for these models. The critical points of the likelihood function are transcendental, and there is no bound on their number, even for mixtures of two univariate Gaussians.
Sensors, 2011, DOI: 10.3390/s110606297. Abstract: Distributed estimation of Gaussian mixtures has many applications in wireless sensor networks (WSNs), and an energy-efficient solution is still challenging. This paper presents a novel diffusion-based EM algorithm for this problem. A diffusion strategy is introduced for acquiring the global statistics in the EM algorithm, in which each sensor node only needs to communicate its local statistics to its neighboring nodes at each iteration. This improves on the existing consensus-based distributed EM algorithm, which may require much more communication overhead to reach consensus, especially in large-scale networks. The robustness and scalability of the proposed approach are achieved by distributed processing in the network. In addition, we show that the proposed approach can be considered a stochastic approximation method for finding the maximum likelihood estimate for Gaussian mixtures. Simulation results show the efficiency of this approach.
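The diffusion step itself, each node repeatedly averaging statistics with its immediate neighbors rather than running a full network-wide consensus, can be sketched as follows. The ring topology, equal combine weights, and a single scalar statistic are illustrative simplifications of the paper's EM sufficient statistics.

```python
import random

random.seed(3)

# Each of 6 sensor nodes holds local observations of the same quantity.
n_nodes = 6
local_data = [[random.gauss(5.0, 1.0) for _ in range(20)]
              for _ in range(n_nodes)]

# Per-node sufficient statistics (sample sum and count): the kind of local
# statistics a diffusion E-step would exchange with neighbors.
stats = [(sum(d), len(d)) for d in local_data]
global_mean = sum(s for s, _ in stats) / sum(n for _, n in stats)

# Diffusion combine step on a ring: each node repeatedly averages its
# running estimate with its two immediate neighbors only. With equal
# per-node counts, the network average of local means equals the global
# mean, so every node's estimate converges to it.
est = [s / n for s, n in stats]  # start from purely local means
for _ in range(30):
    est = [(est[(i - 1) % n_nodes] + est[i] + est[(i + 1) % n_nodes]) / 3
           for i in range(n_nodes)]
```

Each iteration costs only neighbor-to-neighbor messages, which is the communication advantage over consensus schemes highlighted in the abstract.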
Statistics, 2013. Abstract: Mixtures of factor analyzers are becoming more and more popular in the area of model-based clustering of high-dimensional data. Under the likelihood approach to data modeling, it is well known that the unconstrained log-likelihood function may present spurious maxima and singularities, caused by specific patterns of the estimated covariance structure whose determinant approaches 0. To reduce such drawbacks, in this paper we introduce a procedure for the parameter estimation of mixtures of factor analyzers which maximizes the likelihood function in a constrained parameter space. We then analyze and measure its performance, compared to the usual unconstrained approach, via some simulations and applications to real data sets.
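The unbounded-likelihood problem that the constrained approach addresses is easy to reproduce: center one component on a single data point and shrink its scale, and the mixture log-likelihood diverges. A one-dimensional sketch, with an illustrative scale floor standing in for the paper's constraints on the covariance structure:

```python
import math

# Toy data for a two-component univariate mixture.
data = [0.0, 1.0, 2.0, 3.0, 4.0]

def log_lik(mu1, s1, mu2, s2, w=0.5):
    """Log-likelihood of a two-component Gaussian mixture with weight w."""
    total = 0.0
    for x in data:
        p1 = math.exp(-0.5 * ((x - mu1) / s1) ** 2) / (s1 * math.sqrt(2 * math.pi))
        p2 = math.exp(-0.5 * ((x - mu2) / s2) ** 2) / (s2 * math.sqrt(2 * math.pi))
        total += math.log(w * p1 + (1 - w) * p2)
    return total

# Center component 1 exactly on a data point and shrink its scale: the
# likelihood grows without bound -- a spurious "solution", not a real fit.
unbounded = [log_lik(0.0, s, 2.0, 1.5) for s in (1.0, 0.1, 0.001)]

# A constrained parameter space imposes a floor on the scales, keeping the
# likelihood bounded (the floor value here is illustrative; the paper
# constrains the full covariance structure instead).
SIGMA_FLOOR = 0.5
bounded = log_lik(0.0, max(0.001, SIGMA_FLOOR), 2.0, 1.5)
```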
Statistics, 2015. Abstract: Mixtures of Gaussian factors are powerful tools for modeling an unobserved heterogeneous population, offering, at the same time, dimension reduction and model-based clustering. Unfortunately, the high prevalence of spurious solutions and the disturbing effects of outlying observations during maximum likelihood estimation raise serious issues. In this paper we consider restrictions on the component covariances, to avoid spurious solutions, and trimming, to provide robustness against violations of the normality assumptions on the underlying latent factors. A detailed AECM algorithm for this new approach is presented. Simulation results and an application to the AIS dataset show the effectiveness of the proposed methodology.
Computer Science, 2015. Abstract: The parsimonious Gaussian mixture models, which exploit an eigenvalue decomposition of the group covariance matrices of the Gaussian mixture, have shown their success in particular in cluster analysis. Their estimation is in general performed by maximum likelihood estimation and has also been considered from a parametric Bayesian perspective. We propose new Dirichlet Process Parsimonious Mixtures (DPPM) which represent a Bayesian nonparametric formulation of these parsimonious Gaussian mixture models. The proposed DPPM models are Bayesian nonparametric parsimonious mixture models that allow one to simultaneously infer the model parameters, the optimal number of mixture components and the optimal parsimonious mixture structure from the data. We develop a Gibbs sampling technique for maximum a posteriori (MAP) estimation of the developed DPPM models and provide a Bayesian model selection framework by using Bayes factors. We apply them to cluster simulated data and real data sets, and compare them to the standard parsimonious mixture models. The obtained results highlight the effectiveness of the proposed nonparametric parsimonious mixture models as a good nonparametric alternative for the parametric parsimonious models.
Statistics, 2013. Abstract: In recent years there has been a growing interest in proposing methods for estimating covariance functions for geostatistical data. Among these, maximum likelihood estimators have nice features when we deal with a Gaussian model. However, maximum likelihood becomes impractical when the number of observations is very large. In this work we review some solutions and we contrast them in terms of loss of statistical efficiency and computational burden. Specifically, we focus on three types of weighted composite likelihood functions based on pairs and we compare them with the method of covariance tapering. Asymptotic properties of the three estimation methods are derived. We illustrate the effectiveness of the methods through theoretical examples, simulation experiments and by analysing a data set on yearly total precipitation anomalies at weather stations in the United States.
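Covariance tapering, one of the contrasted methods, multiplies the covariance model elementwise by a compactly supported correlation function, producing exact zeros beyond the taper range so sparse linear algebra can replace dense likelihood evaluations. A minimal sketch, with an exponential model, a spherical taper, and illustrative station locations:

```python
import math

# Pairwise distances between 5 stations on a line (locations illustrative).
locs = [0.0, 1.0, 2.5, 4.0, 7.0]

def exp_cov(h, sill=1.0, rng=2.0):
    """Exponential covariance model."""
    return sill * math.exp(-h / rng)

def spherical_taper(h, theta=3.0):
    """Compactly supported correlation function: exactly zero beyond theta."""
    if h >= theta:
        return 0.0
    t = h / theta
    return 1.0 - 1.5 * t + 0.5 * t ** 3

# Tapered covariance: the elementwise (Schur) product of two valid
# covariance functions is still valid, but now the matrix is sparse,
# since entries for station pairs farther apart than theta are zero.
n = len(locs)
C_taper = [[exp_cov(abs(locs[i] - locs[j])) * spherical_taper(abs(locs[i] - locs[j]))
            for j in range(n)] for i in range(n)]
```

The statistical price is a biased covariance at moderate distances, which is the efficiency-versus-computation trade-off the paper quantifies against pairwise composite likelihoods.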