
Early Machine Learning Models: Biases Shown and How It Can Be Corrected

DOI: 10.4236/jilsa.2025.171001, PP. 1-7

Keywords: Machine Learning, Artificial Intelligence, Bias, Generative AI, Training Data, Testing Data


Abstract:

The purpose of this research paper is to explore how early Machine Learning models have shown bias in results where no bias should appear. A prime example is an ML model that favors male applicants over female applicants: although the model is supposed to weigh other aspects of the data, it tends to skew the results toward one group. This paper therefore explores how this bias arises and how it can be corrected. I examine case studies of real-world examples of such bias; an Amazon hiring tool that favored male applicants and a loan application system that favored Western applicants are both studies I reference and analyze in this paper. To trace where the bias comes from, I constructed a machine learning model trained on a dataset found on Kaggle and analyzed its results. The results clarify the reason for the bias in these artificial intelligence models: the way a model is trained shapes the way its results play out. If a model is trained on far more male applicant data than female applicant data, it will favor male applicants; when presented with new data, it is likely to accept male applications over female ones despite otherwise equivalent qualifications. Later in the paper, I examine more deeply how AI applications work and how they find patterns and trends in order to classify correctly. There is, however, a fine line between classification and bias, and ensuring that bias is properly corrected and tested is important in machine learning today.
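The mechanism described above can be illustrated with a deliberately simplified sketch. The data, the threshold-learner "model", and the score cutoffs below are all hypothetical constructions for illustration, not the paper's actual model or the Kaggle dataset: historical labels encode a gender-dependent acceptance cutoff, and a model that simply learns from those labels reproduces the bias on new, otherwise identical applicants.

```python
import random

def make_training_data(n_male=900, n_female=100, seed=0):
    """Synthetic historical hiring records: (gender, score, accepted).
    The historical labels themselves are biased: male applicants were
    accepted above a score of 40, female applicants only above 60."""
    rng = random.Random(seed)
    records = []
    for gender, n, cutoff in (("male", n_male, 40), ("female", n_female, 60)):
        for _ in range(n):
            score = rng.uniform(0, 100)
            records.append((gender, score, score > cutoff))
    return records

def fit_thresholds(records):
    """A deliberately simple 'model': for each gender, learn the lowest
    score that was ever accepted. It faithfully reproduces whatever
    pattern, including bias, is present in the training labels."""
    thresholds = {}
    for gender, score, accepted in records:
        if accepted:
            thresholds[gender] = min(thresholds.get(gender, float("inf")), score)
    return thresholds

def predict(thresholds, gender, score):
    """Accept iff the score meets the learned per-gender threshold."""
    return score >= thresholds[gender]

if __name__ == "__main__":
    model = fit_thresholds(make_training_data())
    # Two applicants with the identical score 55 receive different outcomes,
    # because the model learned the bias baked into the historical labels.
    print("male, score 55:", predict(model, "male", 55.0))
    print("female, score 55:", predict(model, "female", 55.0))
```

The point of the sketch is that nothing in the learner is "unfair" by design; the skew comes entirely from the training data, which is why rebalancing and auditing the training set is the corrective step the paper discusses.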

References

[1]  Brown, S. (2021) Machine Learning, Explained. MIT Sloan, 21 April 2021.
https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained
[2]  Chapman University. Bias in AI.
https://www.chapman.edu/ai/bias-in-ai.aspx
[3]  Denison, G. (2023) 4 Shocking AI Bias Examples. Prolific, 24 October 2023.
https://www.prolific.com/resources/shocking-ai-bias
[4]  Obermeyer, Z. (2019) Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations. Science, 366, 447-453.
https://www.science.org/doi/full/10.1126/science.aax2342
https://doi.org/10.1126/science.aax2342
[5]  Kumar, V. Loan Application Data. Kaggle.
https://www.kaggle.com/datasets/vipin20/loan-application-data

