
An Improved Method for Interpretable Machine Learning Based on LIME

DOI: 10.12677/HJDM.2021.112005, PP. 38-49

Keywords: Interpretable Machine Learning, LIME, EBM, GAMINET


Abstract:

As machine learning continues to make breakthrough progress, more and more decisions are delegated to complex, automated machine learning algorithms. These high-performance models, however, behave like black boxes, lacking transparency and interpretability in their decision logic. LIME (Local Interpretable Model-agnostic Explanation) is an XAI (Explainable Artificial Intelligence) method proposed by Marco Tulio Ribeiro et al.: for a complex black-box model, LIME fits an interpretable model (a linear model) as a local approximation and uses it to explain the black-box model's decision behavior around a given instance. However, the linear models used in LIME (e.g., Ridge regression) have weak learning capacity and cannot approximate complex models well locally. To improve LIME, this paper proposes to approximate the local behavior of complex tree models (XGBoost, random forest, etc.) with EBM (Explainable Boosting Machine), an interpretable generalized additive tree model, and to approximate the local behavior of complex neural network models with GAMINET, a generalized additive neural network model. Both EBM and GAMINET are interpretable and have stronger learning capacity, so they can approximate complex machine learning models more faithfully.
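To make the proposed idea concrete, the following is a minimal sketch of replacing LIME's linear local surrogate with an EBM fitted on perturbations around one instance. It assumes the interpret package's ExplainableBoostingRegressor as the EBM implementation; the perturbation scheme, function name ebm_local_surrogate, and parameter choices are illustrative assumptions, not the authors' exact procedure.

```python
# Sketch: locally approximate a black-box tree model with an EBM surrogate.
# LIME's distance-based kernel weighting is omitted for simplicity; locality
# comes only from the perturbation radius around the instance being explained.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from interpret.glassbox import ExplainableBoostingRegressor

# Black-box model to be explained (a random forest, one of the tree models
# considered in the paper).
X, y = load_diabetes(return_X_y=True)
black_box = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

def ebm_local_surrogate(model, x0, X_train, n_samples=2000, scale=0.5, seed=0):
    """Fit an EBM that locally approximates `model` around instance `x0`.

    Perturbations are drawn from a Gaussian centered at x0 with per-feature
    standard deviation proportional to the training-data std (an assumed
    sampling scheme).
    """
    rng = np.random.default_rng(seed)
    std = X_train.std(axis=0)
    Z = x0 + rng.normal(0.0, scale * std, size=(n_samples, X_train.shape[1]))
    # The surrogate is trained to mimic the black-box predictions locally.
    y_bb = model.predict(Z)
    surrogate = ExplainableBoostingRegressor(random_state=seed)
    surrogate.fit(Z, y_bb)
    return surrogate

x0 = X[0]
surrogate = ebm_local_surrogate(black_box, x0, X)
# Check local fidelity at the explained instance; the EBM's additive shape
# functions (surrogate.explain_local / explain_global) then serve as the
# interpretable local explanation.
print("black-box prediction:", black_box.predict(x0.reshape(1, -1))[0])
print("EBM surrogate prediction:", surrogate.predict(x0.reshape(1, -1))[0])
```

For neural-network black boxes, the same scheme would swap the EBM for a GAMINET surrogate; the sampling and fitting steps are unchanged.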

References

[1]  Ribeiro, M.T., Singh, S. and Guestrin, C. (2016) "Why Should I Trust You?": Explaining the Predictions of Any Classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135-1144.
https://doi.org/10.1145/2939672.2939778
[2]  Lundberg, S.M. and Lee, S.-I. (2017) A Unified Approach to Interpreting Model Predictions. Advances in Neural Information Processing Systems 30 (NIPS 2017), 4766-4775.
[3]  UCI Datasets. http://archive.ics.uci.edu/ml/datasets/Diabates
[4]  Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M. and Elhadad, N. (2015) Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-Day Readmission. Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1721-1730.
https://doi.org/10.1145/2783258.2788613
[5]  Lou, Y., Caruana, R., Gehrke, J. and Hooker, G. (2013) Accurate Intelligible Models with Pairwise Interactions. Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 623-631.
[6]  Yang, Z., Zhang, A. and Sudjianto, A. (2020) GAMI-Net: An Explainable Neural Network Based on Generalized Additive Models with Structured Interactions. https://arxiv.org/
[7]  UCI Datasets. http://archive.ics.uci.edu/ml/datasets/Adult
