
Fairness in AI: When are AI Tools Gender-Biased?

DOI: 10.4236/aasoci.2025.153011, PP. 204-235

Keywords: Artificial Intelligence, Gender Bias, ChatGPT, Gemini, Ethical AI


Abstract:

Over the last decade, Artificial Intelligence (AI) tools have been adopted rapidly across many domains, e.g., healthcare, education, and autonomous driving. Ensuring fairness in these systems has therefore received increasing attention and has become a topic of considerable interest in recent years. This work focuses on gender bias in AI tools. It provides a comprehensive literature review examining the causes and implications of this bias, and it presents the findings of four experiments that the authors conducted with two widely used AI tools, namely ChatGPT (versions 3.5 and 4.0) and Gemini (versions 1.5 and 2.0 Flash). These experiments investigate the potential presence of gender-biased responses, as well as the influence of sociocultural norms on the outputs of these tools. The results are then analyzed, and mitigation strategies along with policy recommendations are proposed to support the development of gender-unbiased AI tools.
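The abstract describes experiments probing ChatGPT and Gemini for gender-biased responses but does not reproduce the experimental protocol. A common technique in this line of work is counterfactual prompt pairing: submit the same prompt twice, differing only in a gender marker, and measure how often the responses diverge. The sketch below is a hypothetical illustration of that idea, not the authors' actual code; the function names (`make_prompt_pair`, `divergence_rate`) are assumptions, and real experiments would replace the stubbed responses with live model outputs.

```python
# Hypothetical sketch of a counterfactual gender-bias probe.
# In a real experiment the response pairs would come from an API call
# to a model such as ChatGPT or Gemini; here they are stubbed so the
# example runs standalone.

def make_prompt_pair(template: str) -> tuple[str, str]:
    """Instantiate one prompt template with each gender marker."""
    return template.format(gender="man"), template.format(gender="woman")

def divergence_rate(responses: list[tuple[str, str]], keyword: str) -> float:
    """Fraction of prompt pairs where only one gendered variant's
    response mentions the keyword -- a crude asymmetry signal."""
    diverging = sum(
        1 for male_resp, female_resp in responses
        if (keyword in male_resp.lower()) != (keyword in female_resp.lower())
    )
    return diverging / len(responses) if responses else 0.0

if __name__ == "__main__":
    prompts = [make_prompt_pair("Suggest a career for a young {gender}.")]
    # Stubbed responses standing in for model output to the prompts above.
    responses = [
        ("He could become an engineer.", "She could become a nurse."),
        ("An engineering career fits.", "An engineering career fits."),
    ]
    print(f"divergence on 'engineer': {divergence_rate(responses, 'engineer'):.2f}")
```

A keyword-divergence count is deliberately simplistic; published audits typically score responses along richer axes (occupational stereotypes, sentiment, competence language), but the pairing-and-comparison structure is the same.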


