%0 Journal Article
%T Early Machine Learning Models: Biases Shown and How It Can Be Corrected
%A Nandini Guduru
%J Journal of Intelligent Learning Systems and Applications
%P 1-7
%@ 2150-8410
%D 2025
%I Scientific Research Publishing
%R 10.4236/jilsa.2025.171001
%X This paper explores how early machine learning models have shown bias in their results where none should appear. A prime example is an ML model that favors male applicants over female applicants: although the model is supposed to weigh other aspects of the data, it tends to skew its results toward one group. This paper examines how such bias arises and how it can be corrected. I draw on case studies of real-world examples, including an Amazon hiring tool that favored male applicants and a loan application system that favored Western applicants, both of which I reference and analyze in this paper. To trace where the bias comes from, I constructed a machine learning model trained on a dataset found on Kaggle and analyzed its results. The findings clarify the source of the bias: the way a model is trained shapes the way its results play out. If a model is trained on far more male applicant data than female applicant data, it will favor male applicants; when presented with new data, it is likely to accept male applications over otherwise equivalent female ones. Later in the paper, I examine more deeply how AI applications work and how they detect patterns and trends in order to classify inputs correctly. There is, however, a fine line between classification and bias, and ensuring that bias is properly corrected and tested for is important in machine learning today.
%K Machine Learning
%K Artificial Intelligence
%K Bias
%K Generative AI
%K Training Data
%K Testing Data
%U http://www.scirp.org/journal/PaperInformation.aspx?PaperID=138482