AI and the Liberal Arts: Embracing the Power, Preserving Humanity

Artificial intelligence has captivated our collective imagination. With groundbreaking models like ChatGPT and visual AI generators, the boundaries of what is possible have been shattered. Yet, amid our fascination, a twinge of fear lingers — a fear of the unknown, of a future shaped by artificial intelligence.

From automating tasks to transforming academic integrity, AI is reshaping education. We stand at the crossroads of an AI revolution, and the question reverberating through higher education is this: How do we embrace AI’s power while preserving our humanity?

It’s at this point that I, the bylined author, enter the article to admit that I didn’t write the previous paragraphs. ChatGPT wrote them, and it took only a few seconds. (ChatGPT is a chatbot developed by the artificial intelligence research lab OpenAI. It came up with the headline, too.)

AI has been working in the background for years, so in one sense it’s nothing new. But with the launch of new language-based models like ChatGPT and Google’s Bard — and visual AI models like Midjourney and DALL-E 2 — artificial intelligence has made a critical shift. In contrast to earlier forms of AI, it’s creating, not curating.

Generative AI works by treating different types of information as if they were a language, then simply predicting what’s likely to come next. It can do this with words, as in the first paragraphs of this article, and also with pixels and regions of images, with computer code, with the notes and chords of music, and with the next frame of a video.
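
To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python: a toy “bigram” model that predicts each next word from nothing more than frequency counts. Systems like ChatGPT use neural networks trained on vastly more data, but the core move is the same: given what came before, select a statistically likely continuation. (The tiny corpus and all names below are invented for illustration.)

import random
from collections import Counter, defaultdict

# A toy corpus standing in for the massive datasets real models are trained on.
corpus = ("the quick brown fox jumps over the lazy dog "
          "and the quick red fox naps beside the lazy dog").split()

# Count how often each word follows each other word (a simple bigram model).
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start, length=8):
    """Extend a sentence by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break  # dead end: no word ever followed this one in the corpus
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the quick red fox naps beside the lazy dog"

Notice that the sketch never checks whether its output is true, only whether each word is a likely successor; that limitation comes up again below.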

“I prefer to call it ‘algorithm intelligence,’ because it’s programs and algorithms that sort through massive amounts of data and spit out predictions,” said Zachary Adams, Hope’s digital instructional specialist and an assistant professor of digital instruction.

This technology has been met with excitement: It took ChatGPT only five days to reach 1 million users (in contrast, it took Spotify five months and Facebook 10 months to reach the same milestone). It’s also been met with fear: One survey of AI researchers revealed that half of them believe there’s a 10% or greater chance that AI will result in the extinction of humanity.

Whether one thinks it’s good or bad, artificial intelligence is here: Now what?

To ask how AI might be used at Hope is already a moot question, because the technology is in use on campus. It’s editing images and graphics, generating content, refining emails, and cleaning up computer code. And in an ironic twist, faculty members are using AI to catch students who try to cheat by turning in AI-generated assignments. Still, AI could improve the work of Hope’s students, faculty and staff in innumerable other ways.

“I think it makes everything accessible,” Adams said. “It levels the playing field for almost any content and specific skill set that someone may need to get into a job or sector.”

As examples, he said AI could translate course content into a student’s primary language, adapt content to the level of complexity at which a student learns best, and improve notetaking. Adams also pointed to time-saving benefits AI could offer faculty members, such as adapting multimedia assignments, writing essay prompts or developing presentation templates.

Hope’s provost, Dr. Gerald Griffin, who formerly taught in Hope’s psychology, biology and neuroscience programs, identified gains that AI can bring to research. “As a behavioral neuroscientist, I was introduced to this through the ability to capture video of an animal and then use AI technologies to calculate and quantify different types of behaviors that would take humans thousands of hours to do,” he said.

“We can’t say, ‘Don’t use it,’ because then we’re doing any student that comes here a disservice.”

Zachary Adams
Digital Instructional Specialist,
Assistant Professor of Digital Instruction
"Squirrel wearing a backpack" 
Created using Adobe Firefly (beta)
“Squirrel wearing a backpack”
Created using Adobe Firefly (beta)

"Robot using a computer"
Created using Adobe Firefly (beta)
“Robot using a computer”
Created using Adobe Firefly (beta)

“The tool advances capability for research,” Griffin said. “It advances students’ ability to edit their work. It enhances brainstorming. It helps connect thought patterns.”

Using AI at Hope clearly has advantages, but it also carries risks. For example, AI can’t tell whether its predictive text is true, only whether the next word is statistically likely given the data it was trained on. In other words, it may simply be wrong.

“Eventually it’s going to get to the point where it’s working on its own supply of data, so if it’s looking at incorrect or falsified results, then it’s just going to regurgitate more and more false information,” Adams said.

In terms of academic integrity, students were already using ChatGPT to write their papers within weeks of the program’s release on Nov. 30, 2022. The curious thing about AI-generated papers is that turning one in isn’t, strictly speaking, plagiarism: the AI isn’t copying text that already exists but actively generating original content.

“We already are interpreting our policy to say that generative AI is a breach of academic integrity,” Griffin said. “Without a proper citation or permission of the instructor, if you put in a prompt, take from a produced essay, and put your name on it, you’ve misrepresented work that you’ve not done.”

Griffin also pointed to ethical concerns: “It does allow for new types of research to be performed, and we want to build a framework to make sure that research is ethically sound,” he said. “The problem is that the world got a really advanced, powerful tool before there was a shared understanding of how to best use it for learning and for research advancement.”

Yet another concern is what Adams called “automation complacency.” If we outsource brainstorming, writing and critical thinking to artificial intelligence, those skills may atrophy in us rather than develop.

With so many promises and pitfalls, Hope is taking the approach that it’s critical to learn how to use AI responsibly. And the first way to do that is to actually use it. “We can’t say, ‘Don’t use it,’ because then we’re doing any student that comes here a disservice,” Adams said. “We’d be preparing a whole generation of students who are going to leave Hope and go into careers where they need to use it. How are they going to get a job when they don’t have that capability?”

The Academic Computing Committee discussed AI for most of last semester, and Adams was among a group of Hope instructors who presented to the rest of the faculty early in the spring. They covered some of AI’s risks and equipped faculty members to prepare students (and themselves) to think about the technology and adapt to it in the classroom.

Another presenter, Greg Lookerse, offered a cautionary word: “If you rely on these programs, you are mechanizing your thinking and creativity in a startling way,” he said. Lookerse is an assistant professor of art, and he’s recorded several videos about AI and the visual arts on his YouTube channel, Art Can Help.

“Are we just teaching students to be good at business and efficient producers? No. We are teaching them to be more human. Creativity and thinking are human qualities,” Lookerse said. “We should highlight the fact that when you use these tools you risk diminishing your creating and thinking capacities — capacities that are deeply human.”

For Griffin, these sorts of questions — What does it mean to be human? — are exactly the questions that Hope’s liberal arts model is uniquely suited to answer. Hope can give students the training and the tools necessary to work with AI and to think about it with a holistic view.

By forcing us to ask what it is that makes us human, “AI will help us to elevate that definition. We are more than connectors of ideas, more than manipulators of tools,” he said. “If it is just organic, if it is just the manipulation, for me that’s very depressing. It does not fully explain the human experience and condition.”
