Location: Portland, Oregon
Publication Date: June 23, 2024
Conference Start Date: June 23, 2024
Conference End Date: June 26, 2024
Tagged Division: Engineering Ethics Division (ETHICS)
Page Count: 9
DOI: 10.18260/1-2--47112
Permanent URL: https://peer.asee.org/47112
Download Count: 70
Dr. Hortense Gerardo is a playwright, screenwriter, and anthropologist, and serves as the Director of the Anthropology, Performance, and Technology (APT) Program at the University of California, San Diego. Her works have been performed nationally and internationally. She is a co-founder of the Asian American Playwright Collective (AAPC) and heads the screenwriting competition on the Board of the Woods Hole Film Festival. For more information, visit: www.hortensegerardo.com
Brainerd Prince is Associate Professor and the Director of the Center for Thinking, Language and Communication at Plaksha University. He teaches courses such as Reimagining Technology and Society, Ethics of Technological Innovation, and Art of Thinking for undergraduate engineering students, and Research Design for PhD scholars. He completed his PhD on Sri Aurobindo’s Integral Philosophy at OCMS, Oxford – Middlesex University, London. He was formerly a Research Tutor at OCMS, Oxford, and a Research Fellow at the Oxford Centre for Hindu Studies, a Recognized Independent Centre of Oxford University. He is also the Founding Director of the Samvada International Research Institute, which offers consultancy services to research and higher-education institutions around the world on designing research tracks, research teaching, and research projects. His first book, The Integral Philosophy of Aurobindo: Hermeneutics and the Study of Religion, was published by Routledge, Oxon, in 2017. For more information, please visit: https://plaksha.edu.in/faculty-details/dr-brainerd-prince
B. Lallianngura completed post-graduate studies in philosophy at the University of Delhi and is pursuing doctoral research in philosophy at IIT Bombay. He is part of the research team at the Centre for Thinking, Language and Communication at Plaksha University. His research focuses on the question of the self and subjectivity and its relation to power-knowledge discourse in the work of Michel Foucault.
[Theory Paper, Ethics of Emerging Technology]
Artificial Intelligence (AI) and cognitive robotics (CR) technologies are redefining and disrupting the way people work and live across many domains. We focus here on AI and CR applications in two closely related fields: care for children and care for the elderly [1]. With an aging Baby Boomer generation, an increase in small, nuclear family units (as opposed to multi-generational kinship assemblages housed under one roof), and a decrease in birth rates in so-called “developed” countries, there is a growing trend toward using these technologies for the personal care of aging populations and the very young. “Gerontechnology based on Artificial Intelligence (AI) is expected to enable a predictive, personalized, preventive, and participatory elderly care” [2][3]. As medical dependency on AI accelerates, we are confronted with issues of safety and trust around its use. This paper uses a literature review as a methodology by which to discern similarities and differences in definitions of the “Self” as applied to humans and in parlance around AI and CR. By refining, from a philosophical perspective, what is meant by the concepts of the “Self,” “Consciousness,” and “Altruism,” and juxtaposing these against the functional distinctions between Theory of Mind and Self-Aware AI, we posit the theoretical possibility, based on existing literature, of decision-making, self-aware AI capable of what might be considered a form of collective identity-based, altruistic behavior. This analysis is intended to inform considerations of the ethical implications of engineering such systems to care for the elderly and the young.
Gerardo, H., Prince, B., & Ngura, B. L. (2024, June). Defining the Essence of the Self in Exploring the Notion of Altruism and Establishing Trust in Human | Robot Interaction (HRI). Paper presented at the 2024 ASEE Annual Conference & Exposition, Portland, Oregon. 10.18260/1-2--47112
ASEE holds the copyright on this document. It may be read by the public free of charge. Authors may archive their work on personal websites or in institutional repositories with the following citation: © 2024 American Society for Engineering Education. Other scholars may excerpt or quote from these materials with the same citation. When excerpting or quoting from Conference Proceedings, authors should, in addition to noting the ASEE copyright, list all the original authors and their institutions and name the host city of the conference.