
A new reality: Teaching about AI and its ethical use

Apr 17, 2024
Eugene Curtin

Standing quietly in a corner of a conference room in Creighton’s Mike and Josie Harper Center are three robots looking for all the world like chastised children sentenced to a timeout.

Their heads slumped, arms hanging loose, it takes just a touch from Natalie Gerhart, PhD, to transform one into a babbling, large-eyed, endearing humanoid. These are Pepper robots, and the acquisition of three of them by Creighton’s Heider College of Business is striking evidence of the University’s determination to remain in the vanguard of the artificial intelligence revolution.

Quite how the 48-inch-tall robots will be used and what information will be programmed into them are matters of ongoing discussion, says Gerhart, associate professor of business intelligence and analytics, but what is beyond discussion is Creighton’s institutional conviction that AI-fueled technologies will shape the future.

It’s an understanding that reaches across campus, touching philosophy, medicine, communications studies, computer science, journalism and more. The world is changing, rapidly, and, as interviews with various professors suggest, Creighton intends to produce graduates who understand both the technologies that underlie AI and their ethical use.

Natalie Gerhart, PhD, left, and Ali Dag, PhD, give commands to a Pepper robot.

Concepts Gaining Importance

Ali Dag, PhD, associate professor of accounting and business intelligence and analytics, teaches machine learning. Coding, for so long a foundational aspect of the digital revolution, is waning in importance as AI simplifies the process, Dag says. Concepts, on the other hand—ideas about how best to use AI—are gaining importance.

“Previously, we were focusing on maybe 40% coding and 60% concepts,” he says. “The direction that I am sensing from a lot of my colleagues and students at conferences is that coding needs to drop maybe to 20% or even 15%. Greater importance is attaching to concepts—ideas for how businesses can use AI.

“This is a big advantage for our students, by the way, because what really sets them apart is their ability to generate smart ideas and their understanding of how to use AI to the fullest extent.”

Gerhart says the word “applied” best represents the business school’s approach to AI.  

“We are teaching students to enter the business world, so we want them to understand how these tools can best be used,” she says. “It’s not just coding that must be understood. We want all our business students, and students across campus, to understand how they can use AI: ‘I am a whatever major, how is this technology applicable to me?’”

Using AI in Medicine

The Pepper robots, so emblematic of emerging AI automation, find their counterpart at the School of Medicine, where Waddah Al-Refaie, MD, chair of the Department of Surgery at Creighton and the CHI Health Clinic, is developing a remote voice-recognition device that will allow postoperative patients recovering at home to maintain their medicine schedules and receive medical advice.

Comparing the concept to Amazon’s popular Alexa device, Al-Refaie envisions patients benefiting from immediate and accurate information specifically tailored to their condition. Building such technology is a long process, and Al-Refaie only recently obtained permission from an FDA Institutional Review Board to move ahead with the project.

“We’re still setting up the platform for the ‘Alexa’ in terms of voice recognition, how to gather the information and how to educate the surgeon and the team to use it,” he says. “Currently, we are discussing how to have AI ask questions of the patients.”
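In practice, such a check-in might begin as a simple scripted exchange before any machine learning enters the picture. The Python sketch below illustrates the idea; the questions, the flagged answers and the escalation rule are hypothetical stand-ins, not Al-Refaie's actual design, which has not been published.

```python
# A minimal sketch of a scripted post-operative check-in, assuming a
# voice platform that converts speech to text and back. The questions,
# flagged answers and escalation rule are illustrative only.

CHECK_IN_QUESTIONS = [
    ("Have you taken your prescribed medication today?", {"no"}),
    ("Is your pain worse than yesterday?", {"yes"}),
    ("Do you have a fever or redness around the incision?", {"yes"}),
]

def run_check_in(ask):
    """Ask each question via the supplied voice interface and collect
    any answers that should be escalated to the surgical team."""
    flagged = []
    for question, worrying_answers in CHECK_IN_QUESTIONS:
        answer = ask(question).strip().lower()
        if answer in worrying_answers:
            flagged.append((question, answer))
    return flagged

if __name__ == "__main__":
    # Console input stands in for speech recognition in this demo.
    alerts = run_check_in(lambda q: input(q + " "))
    if alerts:
        print("Notify care team about:", alerts)
    else:
        print("No concerns flagged today.")
```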

He anticipates that program enrollment will begin in the late spring or early summer.

On Creighton’s Phoenix campus, 1,300 miles from the University’s historic home in Omaha, Manuel Cevallos, MD, is also pushing the boundaries of AI. An assistant professor in the Department of Medical Education, Cevallos is developing an AI program he has dubbed TAKAI, for “Teaching Anatomy with Artificial Intelligence.”

“The idea behind TAKAI is to combine the teaching of anatomy with artificial intelligence,” he says. “We will develop an application where we load information for students to access. When you ask a question of ChatGPT it gives an answer that is not necessarily correct. We are going to load correct information—the information that we want—into a ChatGPT-like program. That way, the answer will always be correct.”
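What Cevallos describes resembles what practitioners call retrieval-augmented generation: instead of answering from its open-ended training data, the model is constrained to a vetted corpus. The Python sketch below illustrates only that grounding step; the anatomy snippets and the simple keyword retriever are illustrative stand-ins, not TAKAI's actual implementation.

```python
# A minimal sketch of the retrieval-augmented pattern Cevallos
# describes: answers are grounded in a curated anatomy corpus. The
# snippets and keyword-overlap retriever are stand-ins; a real system
# would use vector embeddings and a hosted language model.

CURATED_NOTES = [
    "The ulnar nerve passes posterior to the medial epicondyle of the humerus.",
    "The femoral artery is a continuation of the external iliac artery.",
    "The phrenic nerve arises mainly from the C4 spinal nerve root.",
]

def retrieve(question, corpus, k=1):
    """Rank curated passages by word overlap with the question."""
    q_words = set(question.lower().split())
    scored = sorted(corpus,
                    key=lambda p: len(q_words & set(p.lower().split())),
                    reverse=True)
    return scored[:k]

def answer(question):
    """Build a prompt that restricts the model to vetted passages.
    The model call itself is stubbed out; only grounding is shown."""
    context = "\n".join(retrieve(question, CURATED_NOTES))
    return (f"Answer using ONLY the passages below.\n"
            f"Passages:\n{context}\n\nQuestion: {question}")

print(answer("Where does the ulnar nerve pass at the elbow?"))
```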

Manuel Cevallos, MD, assistant professor in the Department of Medical Education on the Phoenix campus, is developing an AI app for students to use while studying anatomy.

The program will eventually recognize photographs and pictures, Cevallos says, allowing it to function like a super-textbook, where students may not only observe anatomy but ask questions and receive explanations.
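One plausible way to build that recognition step is to fine-tune a pretrained vision backbone on labeled anatomy photographs. The sketch below shows the pattern using torchvision; the label set is hypothetical, and the article does not specify TAKAI's actual architecture.

```python
# A minimal sketch of the image-recognition step Cevallos anticipates,
# assuming a convolutional backbone fine-tuned on labeled anatomy
# photographs. The class list here is hypothetical.

import torch
from torchvision.models import resnet18, ResNet18_Weights

ANATOMY_CLASSES = ["humerus", "femur", "scapula"]  # illustrative labels

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights)
# Replace the ImageNet head with one sized for the anatomy labels;
# this layer would then be trained on the curated image set.
model.fc = torch.nn.Linear(model.fc.in_features, len(ANATOMY_CLASSES))
model.eval()

def label_image(image):
    """Return the most likely anatomy label for a PIL image."""
    batch = weights.transforms()(image).unsqueeze(0)
    with torch.no_grad():
        scores = model(batch)
    return ANATOMY_CLASSES[scores.argmax().item()]
```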

A key partner in Cevallos’ effort is Steven Fernandes, PhD, assistant professor of computer science, design and journalism in the College of Arts and Sciences. Specializing in deep learning, computer vision and machine learning, Fernandes will use the information that Cevallos and his colleagues provide to build TAKAI.

The successful deployment of TAKAI would be an example of AI benefiting humanity, Fernandes says. While many other beneficial uses are possible, he warns, so are troubling outcomes.

“There is always the possibility, when you develop something new, that people might use it in good or destructive ways,” he says. “It is essential that we have some regulatory guidelines. AI could be used to detect cancerous tumors in millions of images, a task that can be challenging for doctors due to the sheer volume and, of course, the limited availability of doctors.

“At the same time, these same AI algorithms could potentially be used in military applications, such as altering the trajectory of a missile, which could lead to unintended damage. While risk is involved, AI is here to stay. It is essential for our students to understand both its benefits and drawbacks, how it operates, and how we can harness it for positive outcomes.

“These are key aspects of our focus. Remaining ignorant is not a viable option.”

AI and Ethical Behavior

Understanding right and wrong has, of course, always been a critical element of Creighton’s Jesuit, Catholic education, and the special challenges posed by the development of AI have raised concerns across the University about how universal access to AI will impact learning.

Jacob Rump, PhD, associate professor of philosophy, says the increasing role played by artificial intelligence raises questions of ethical behavior not only for students, who may be tempted to let it write their papers, but also for educators, who must help their students learn.

“If students rely too much on artificial intelligence instead of developing skills related to critical thinking and reasoning, then in a couple of years they won’t have the skills they need to set themselves apart from generative AI,” he warns.

“So, if you are worried about technology taking your job, then maybe you should try to make yourself the kind of person who can do things that technology can’t. The more you rely on AI, the less you are going to develop those skills.

“Sometimes I tell my students that studying philosophy is about learning how not to be robots, but if they allow a robot to turn in their non-robot homework, they very soon will not be very good at not being a robot.”

Artificial intelligence is not human intelligence, Rump says. AI is indifferent to ethical concerns.

“I think there are some really good philosophical ideas for resisting the idea that artificial intelligence is intelligence, or at least that it is like human intelligence,” he says. “One of the biggest, in my view, is that we as intelligent beings are embodied beings. We have bodies, we have feelings and experiences. Not just that we have emotions, but we feel.


“Aristotle talks about righteous indignation. You see an injustice, and you feel it. It is not clear to me how artificial intelligence could do that. This is why the humanities are going to become more and more important as AI advances. Understanding what it means to be human is our bread and butter in the humanities, and that is not an issue that society can afford to ignore.”

Guy McHendry, PhD, associate professor of communication studies, wrestles with many of the same questions. It is important, he says, that students understand how AI works, how this seemingly magical technology is just a form of computing, a tool that can help them succeed or get them in trouble.

“How to use it ethically in a way that doesn’t encourage them to cut corners and get into trouble not just in their classes but, later, in their professions,” is a big issue, McHendry says. “There have been stories already where a lawyer used ChatGPT to cite case law that ChatGPT just made up.

“There are instances where people have tried to use this to do their jobs and, without knowing it, divulged proprietary company information. So, students need to know what AI is, how it works right now, how to adapt as it changes, and how to use it ethically.”

Given the power and pervasiveness of AI, it is unrealistic to simply forbid its use, McHendry says. The key is to teach not only its capabilities but also its limits.

“I was talking about that recently in class,” he says. “My students can use it with permission, but they must ask, and we must talk about ethical use.

“I’ve seen students let it write for them, and it just produces bad writing. It’s not their voice. It sounds inhuman. It can create approximations, but it doesn’t sound authentic, and authenticity is very important at Creighton.”
