2024 Collaborative Network for Engineering & Computing Diversity (CoNECD)
Arlington, Virginia
February 25–27, 2024
Diversity and CoNECD Paper Sessions
10.18260/1-2--45485
https://peer.asee.org/45485
Ashish Hingle (he/him) is a Ph.D. student in the College of Engineering and Computing at George Mason University. His research interests include technology ethics, interactions and networking in online communities, and student efficacy challenges in higher education. He received his bachelor’s degree in Information Systems and master’s degree in Information Assurance (Cybersecurity – Forensics – Audit) from sunny Cal Poly Pomona.
Aditya Johri is Professor in the Department of Information Sciences & Technology at George Mason University. Dr. Johri studies the use of information and communication technologies (ICT) for learning and knowledge sharing, with a focus on cognition in informal environments.
As algorithms proliferate across domains, their development for analysis, prediction, and generation tasks raises questions about fairness, justice, and inclusion. One primary reason is algorithmic data bias, a common phenomenon across datasets and systems that reflects incomplete or misused data. With the incentive to build generalized systems that can do everything, everywhere, data bias reflects the makeup of the underlying data and how it leads to systematically unfair decisions or outcomes. As future engineers, analysts, and scientists, technology students must be made aware early in their careers of how bias can, at a minimum, degrade the quality of an algorithmic decision and, at worst, harm people and communities. In this paper, we report on a three-year course implementation of interactive role-play case studies designed to raise student awareness of technology ethics and of how ethical principles can affect the recognition of data bias in decision-making processes. Students participated in a semester-long course featuring multiple case studies that addressed different aspects of the social implications of technology implementation. Three cases were designed specifically to discuss algorithmic data bias and its effects on diversity, equity, and inclusion (DEI): 1) using facial recognition on a college campus, 2) algorithmic profiling of demographic data for credit risk allocation, and 3) exploring trust between the community, farmers, and artificial intelligence developers in agricultural systems. We analyzed transcripts from the role-play activities and responses to assignments through the lens of an AI bias framework and associated theories. Students were introduced to algorithmic data bias at multiple stages in the course to highlight how it affects ideation, development, and implementation across systems.
Overall, we found that students initially focused on data acquisition and testing of algorithmic models as ways to overcome data bias, but through the discussions they came to question and meta-reason about whether a highly complex, often black-box AI system was needed in the first place. Additionally, students showed progressively more nuanced discussions of how data bias affected and altered other ethical principles, including transparency, accountability, and trust.
Hingle, A., & Johri, A. (2024, February). Technology Students' Recognition of Algorithmic Data Bias through Role-Play Case Studies. Paper presented at the 2024 Collaborative Network for Engineering & Computing Diversity (CoNECD), Arlington, Virginia. 10.18260/1-2--45485
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2024 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference. - Last updated April 1, 2015