L1 regression is a robust alternative to least squares regression when there are outliers in the values of the response variable or when the errors follow a long-tailed distribution. To calculate the standard errors of the L1 estimators, to construct confidence intervals and test hypotheses about the parameters of the model, or to calculate a robust coefficient of determination, an estimate of a scale parameter τ is required. This parameter is such that τ²/n is the variance of the median of a sample of size n from the error distribution. McKean and Schrader [1] proposed a consistent, and therefore asymptotically unbiased, estimator of τ. However, this estimator is not stable in small samples, in the sense that its value can increase when new independent variables are introduced into the model.
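For concreteness, τ admits the standard large-sample characterization used in this literature (see, e.g., [2]); the sketch below assumes, as usual, that the errors are independent and identically distributed with a density f having median zero and f(0) > 0, and introduces no notation specific to this paper:

\[
\tau = \frac{1}{2 f(0)}, \qquad
\sqrt{n}\,\tilde{e}_{n} \xrightarrow{d} N\!\left(0,\tau^{2}\right), \qquad
\operatorname{Var}\!\left(\tilde{e}_{n}\right) \approx \frac{\tau^{2}}{n} = \frac{1}{4\, n\, f(0)^{2}},
\]

where \(\tilde{e}_{n}\) denotes the median of a sample of size n from the error distribution. Under this characterization, estimating τ amounts to estimating the error density at its median or, equivalently, the standardized length of a confidence interval for the median.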
References
[1] McKean, J.W. and Schrader, R.M. (1987) Least Absolute Errors Analysis of Variance. In: Dodge, Y., Ed., Statistical Data Analysis Based on the L1-Norm and Related Methods, Elsevier Science Publishers B.V., Amsterdam, 297-305.
[2] Bassett, G. and Koenker, R. (1978) Asymptotic Theory of Least Absolute Error Regression. Journal of the American Statistical Association, 73, 618-622. https://doi.org/10.1080/01621459.1978.10480065
[3] Dielman, T. and Pfaffenberger, R. (1982) LAV (Least Absolute Value) Estimation in the Regression Model: A Review. TIMS Studies in the Management Sciences, 19, 31-52.
[4] Narula, S.C. (1987) The Minimum Sum of Absolute Errors Regression. Journal of Quality Technology, 19, 37-45. https://doi.org/10.1080/00224065.1987.11979031
[5] McKean, J.W. and Sievers, G.L. (1987) Coefficient of Determination for Least Absolute Deviation Analysis. Statistics and Probability Letters, 5, 49-54. https://doi.org/10.1016/0167-7152(87)90026-5
[6] Kvalseth, T.O. (1985) Cautionary Note about R². American Statistician, 39, 279-285. https://doi.org/10.1080/00031305.1985.10479448
[7] Narula, S.C. and Wellington, J.F. (1977) Prediction, Linear Regression and Minimum Sum of Absolute Errors. Technometrics, 19, 185-190. https://doi.org/10.1080/00401706.1977.10489526
[8] Elian, S.N., André, C.D.S. and Narula, S.C. (2000) Influence Measure for the L1 Regression. Communications in Statistics - Theory and Methods, 29, 837-849. https://doi.org/10.1080/03610920008832518
[9] Engelhardt, M. and Bain, L.J. (1973) Interval Estimation for the Two-Parameter Double Exponential Distribution. Technometrics, 15, 875-887. https://doi.org/10.1080/00401706.1973.10489120
[10] André, C.D.S., Elian, S.N., Narula, S.C. and Tavares, R.A. (2000) Coefficients of Determination for Variable Selection in the MSAE Regression. Communications in Statistics - Theory and Methods, 29, 623-642. https://doi.org/10.1080/03610920008832506
[11] Portnoy, S. and Koenker, R. (1997) The Gaussian Hare and the Laplacian Tortoise: Computability of Squared-Error versus Absolute-Error Estimators. Statistical Science, 12, 279-300. https://doi.org/10.1214/ss/1030037960
[12] Groeneveld, R.A. and Meeden, G. (1984) Measuring Skewness and Kurtosis. The Statistician, 33, 391-399. https://doi.org/10.2307/2987742
[13] Sen, P.K. and Singer, J.M. (1993) Large Sample Methods in Statistics: An Introduction with Applications. Chapman and Hall, New York.
[14] Parker, I. (1988) Transformations and Influential Observations in Minimum Sum of Absolute Errors Regression. Technometrics, 30, 215-220. https://doi.org/10.1080/00401706.1988.10488369