%0 Journal Article
%T A quantile variant of the expectation–maximization algorithm and its application to parameter estimation with interval data
%A Chanseok Park
%J Journal of Algorithms & Computational Technology
%@ 1748-3026
%D 2018
%R 10.1177/1748301818779007
%X The expectation–maximization algorithm is a powerful computational technique for finding maximum likelihood estimates for parametric models when the data are not fully observed. The expectation–maximization algorithm is best suited for situations where the expectation in each E-step and the maximization in each M-step are straightforward. A difficulty with implementing the expectation–maximization algorithm is that each E-step requires the integration of the log-likelihood function in closed form. This explicit integration can be avoided by using what is known as the Monte Carlo expectation–maximization algorithm, which uses a random sample to estimate the integral at each E-step. The problem with the Monte Carlo expectation–maximization algorithm is that it often converges to the integral quite slowly, and its convergence behavior can be unstable, which adds to the computational burden. In this paper, we propose what we refer to as the quantile variant of the expectation–maximization algorithm. We prove that the proposed method has an accuracy of O(1/K^2), while the Monte Carlo expectation–maximization method has an accuracy of O_p(1/√K). Thus, the proposed method possesses faster and more stable convergence properties than the Monte Carlo expectation–maximization algorithm. The improved performance is illustrated through numerical studies. Several practical examples illustrating its use in interval-censored data problems are also provided.
%K Expectation–maximization algorithm
%K incomplete data
%K maximum likelihood
%K Monte Carlo expectation–maximization
%K missing data
%K quantile
%U https://journals.sagepub.com/doi/full/10.1177/1748301818779007
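
The abstract contrasts a Monte Carlo E-step, which averages the integrand over random draws, with a quantile-based E-step that averages over deterministic quantiles of the same distribution. The sketch below is not taken from the paper: the latent distribution N(0, 1), the integrand g (standing in for a complete-data log-likelihood term), and all variable names are illustrative assumptions made only to show the two approximations side by side in Python.

import numpy as np
from scipy import stats
from scipy.integrate import quad

# Stand-in integrand for the E-step expectation E[g(Z)]; chosen only
# because it is smooth and slowly growing.
g = lambda z: np.log1p(z ** 2)

K = 100                      # number of draws / quantile points
rng = np.random.default_rng(0)

# Monte Carlo E-step (the MCEM idea): average g over K random draws.
# The error of this estimate is roughly O_p(1/sqrt(K)).
z_mc = rng.standard_normal(K)
mc_estimate = g(z_mc).mean()

# Quantile-based E-step (the idea behind the quantile variant): average g
# over K deterministic quantiles at probability levels (i - 0.5)/K.
# For a smooth, slowly growing g the error decays roughly like 1/K^2.
p = (np.arange(1, K + 1) - 0.5) / K
z_q = stats.norm.ppf(p)
q_estimate = g(z_q).mean()

# Reference value by numerical quadrature, for comparing the two errors.
exact, _ = quad(lambda z: g(z) * stats.norm.pdf(z), -np.inf, np.inf)
print(f"Monte Carlo error: {abs(mc_estimate - exact):.2e}")
print(f"Quantile error:    {abs(q_estimate - exact):.2e}")

Running the sketch with increasing K illustrates the accuracy gap described in the abstract: the Monte Carlo estimate fluctuates from run to run, while the quantile-based estimate is deterministic and its error shrinks much faster.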