Heavy-Head Sampling Strategy of Graph Convolutional Neural Networks for q-Consistent Summary-Explanations with Application to Credit Evaluation Systems

DOI: 10.4236/oalib.1110615, PP. 1-17

Subject Areas: Big Data Search and Mining, Artificial Intelligence, Complex Network Models

Keywords: Summary-Explanation, q-Consistent, Branch-and-Bound, Heavy-Head Sampling Strategy


Abstract

Machine learning systems are widely used as auxiliary tools in domains that require critical decision-making, such as healthcare and criminal justice. The interpretability of these systems' decisions is essential for earning users' trust. Recent work has developed globally-consistent rule-based summary-explanations and their max-support (MSGC) problem, which provide explanations for individual decisions together with relevant dataset statistics. However, globally-consistent summary-explanations of limited complexity tend to have small support, if any. In this study, we propose a more relaxed variant, the q-consistent summary-explanation, which trades a small amount of consistency for greater support. The challenge is that the max-support problem of the q-consistent summary-explanation (MSqC) is significantly harder than the original MSGC problem, leading to long solution times with standard branch-and-bound (B&B) solvers. We improve the B&B solving process by replacing time-consuming heuristics with machine learning (ML) models, and apply a heavy-head sampling strategy for imitation learning on MSqC problems that exploits the heavy-head maximum-depth distribution of B&B solution trees. Experimental results show that strategies trained with heavy-head sampling achieve significantly better final evaluation results on MSqC problems than the uniform sampling strategies used in previous studies.
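To make the q-consistent summary-explanation concrete, here is a minimal sketch under our own assumptions, not the paper's implementation: a rule is a conjunction of threshold conditions on features, its support is the number of dataset samples it covers, and it is q-consistent if at least a fraction q of the covered samples share the explained instance's predicted label. MSqC then asks for the maximum-support rule meeting that bound.

from typing import List, Tuple

Condition = Tuple[int, float, str]  # (feature index, threshold, "<=" or ">")

def covers(x: List[float], rule: List[Condition]) -> bool:
    """True if sample x satisfies every condition in the rule."""
    return all(
        x[j] <= t if op == "<=" else x[j] > t
        for j, t, op in rule
    )

def support_and_consistency(
    X: List[List[float]], y: List[int], rule: List[Condition], label: int
) -> Tuple[int, float]:
    """Return (support, consistency) of a rule w.r.t. the explained label.

    Support counts covered samples; consistency is the fraction of covered
    samples whose label matches the explained instance's label. The rule is
    q-consistent when consistency >= q (global consistency is the special
    case q = 1).
    """
    covered = [yi for xi, yi in zip(X, y) if covers(xi, rule)]
    support = len(covered)
    consistency = (
        sum(yi == label for yi in covered) / support if support else 0.0
    )
    return support, consistency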
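The heavy-head sampling idea can likewise be sketched in hedged form: since the maximum-depth distribution of B&B solution trees for MSqC is heavy at the head (shallow depths dominate), (state, expert-decision) pairs for imitation learning can be collected with an acceptance probability that decays with node depth rather than uniformly. The helper names below (heavy_head_keep_prob, the expert oracle, the alpha decay parameter) are illustrative placeholders, not the paper's or SCIP's API.

import random

def heavy_head_keep_prob(depth: int, max_depth: int, alpha: float = 2.0) -> float:
    """Acceptance probability that decays with depth, concentrating
    samples near the head (shallow part) of the B&B tree."""
    return (1.0 - depth / (max_depth + 1)) ** alpha

def collect_samples(nodes, max_depth: int, expert, rng=random.random):
    """Filter B&B nodes by depth-biased acceptance and label survivors
    with the expert's branching decision (e.g. strong branching).

    nodes: iterable of (state, depth) pairs taken from solver runs.
    expert: callable mapping a node state to the expert's decision.
    """
    dataset = []
    for state, depth in nodes:
        if rng() < heavy_head_keep_prob(depth, max_depth):
            dataset.append((state, expert(state)))
    return dataset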

Cite this paper

Dou, X. (2023). Heavy-Head Sampling Strategy of Graph Convolutional Neural Networks for q-Consistent Summary-Explanations with Application to Credit Evaluation Systems. Open Access Library Journal, 10, e615. https://doi.org/10.4236/oalib.1110615

