%0 Journal Article %T Uniform convergence of exact large deviations for renewal reward processes %A Zhiyi Chi %J Annals of Applied Probability %D 2007 %I arXiv %R 10.1214/105051607000000023 %X Let (X_n,Y_n) be i.i.d. random vectors. Let W(x) be the partial sum of Y_n just before that of X_n exceeds x>0. Motivated by stochastic models for neural activity, uniform convergence of the form $\sup_{c\in I}|a(c,x)\operatorname{Pr}\{W(x)\ge cx\}-1|=o(1)$, $x\to\infty$, is established for probabilities of large deviations, with a(c,x) a deterministic function and I an open interval. To obtain this uniform exact large deviations principle (LDP), we first establish the exponentially fast uniform convergence of a family of renewal measures and then apply it to appropriately tilted distributions of X_n and the moment generating function of W(x). The uniform exact LDP is obtained for cases where X_n has a subcomponent with a smooth density and Y_n is not a linear transform of X_n. An extension is also made to the partial sum at the first exceedance time. %U http://arxiv.org/abs/0707.4596v1