
ChatGPT comes to Concordia

February 16, 2024 - 11 minute read


Concordia professor Michael Kinnen, who teaches corporate finance and investing in the School of Business and Economics, had been ignoring the chatter about a new artificial intelligence (AI) tool called ChatGPT—“I just thought it sounded like something lazy people would use,” he says—until he met with former students.

“All of them are required to use generative AI in their new jobs to run meetings, manage teams, and in communications and following up with people,” Kinnen says. “It’s all run with generative AI tools.”

Kinnen decided to attend several AI workshops and saw high-level educators using it in teaching.

“It’s like scales fell away from my eyes, and I could see the potential for this to unleash creativity and learning,” Kinnen says. “It’s breathtaking. It fills me with wonder.”

His response is one on a wide spectrum—ranging from angst to curiosity—to the advent of generative AI tools such as ChatGPT. At Concordia, faculty and staff from different fields are engaging in different ways with these new tools, teaching students how and when to use them, and having robust dialogues that drill down to first principles such as: What is education, what does it mean to be human, and how can ChatGPT be used to produce wise, honorable, and cultivated citizens?

Rev. Dr. Scott Ashmon, senior vice president and provost at Concordia, calls it “a patient but not passive approach.”

“At Concordia, we haven’t pushed the alarm button on ChatGPT,” he says. “You need time to get to know something like this—to see how students use it and how we think we want to incorporate it. We are having conversations and making presentations among ourselves, and they have all been really good. People in opposite camps debate respectfully.”

Artificial Intelligence (AI), according to consulting firm McKinsey & Company, is “the practice of getting machines to mimic human intelligence to perform tasks,” and is commonly used in customer service chatbots and voice assistants like Siri and Alexa.

Generative AI, a type of AI built on machine learning and exemplified in tools like ChatGPT, involves programmers creating models “that can ‘learn’ from data patterns without human direction” by analyzing an “unmanageably huge volume and complexity of data (unmanageable by humans, anyway).”

Computer science professor Joshua Tallman, who teaches a class on AI, says the tasks ChatGPT can do on a vast scale are “impressive” but amount to nothing more than “glorified statistics”—with one key difference: machine learning now involves so much data that the results it produces are “a bit of a black box,” he says.

“It’s based on more input data than we can wrap our heads around. It’s beyond us, there are so many variables,” Tallman says. “Because we didn’t come up with the algorithm—it is the patterns the computer found in our training data—it’s not that it’s an impossibility that we would understand why it’s making the decision, but at this point we don’t have a good enough grasp.”
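Tallman’s “glorified statistics” description can be made concrete with a toy sketch. The example below (illustrative only; the function names and tiny corpus are invented for this article, and real systems like ChatGPT use neural networks trained on vastly more data) simply counts which word most often follows each word in a training text, then “predicts” by frequency—pattern-matching learned from data, not understanding.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word, how often each next word follows it."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the statistically most common follower of `word`."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

# A tiny, made-up training corpus.
corpus = ("the model learns patterns from data and "
          "the model predicts the next word from data")
model = train_bigrams(corpus)
print(predict_next(model, "the"))   # "model" follows "the" most often
print(predict_next(model, "from"))  # prints: data
```

Scale the same idea up to billions of parameters and a training set too large for any person to audit, and the result is the “black box” Tallman describes: the patterns are real, but no one wrote them down.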

He loosely quotes John Lennox, a professor of mathematics at Oxford University, who said humans will never be able to create a truly conscious and intelligent computer system, but we will likely be able to create one good enough that people think it is conscious.

In his own computer science classes, Tallman’s goal is to get students under the hood to understand how AI is created, how it works—and how they can help it function better. He requires sophomore students to use AI for certain aspects of their assignments, but they must also explain how they got the AI to produce results and assess how well it worked.

“We are figuring out how to interact with it in a way that is helpful,” he says. “Students see the limitations. They understand it can be useful, but you still have to think through all the criteria and how to process the data. You’re just thinking at a different level of abstraction using human language, not machine code.”

He points to positive results from machine learning, like helping to detect cancer earlier with computers that scan medical images and pick up on patterns or elements too small for humans to notice. Self-driving cars are another application.

“Some professors are starting to prohibit its use because they think students are plagiarizing, and that is a danger, just like having an encyclopedia next to you,” Tallman says. “But there is a really positive use that can help us learn and become more efficient.”

Dr. Joel Oesch ’97, professor of theology and director of the master’s program in theology, bans the use of ChatGPT in his classes because, in his view, students haven’t developed enough virtue, self-control, and discipline to see it as a tool. Rather, ChatGPT provides “this short circuit around virtues of hard work, self-control, and self-discipline that you can’t get any other way.”

Generative AI, he believes, is only properly used “after you develop a certain amount of wisdom, which is hard-earned.” Still, Oesch has played around with ChatGPT and dubs it “a staggering achievement of human ingenuity.” He recently gave it what he thought was an insuperable challenge. His wife had just completed her first year with breast cancer, so he told ChatGPT to compose a sonnet in the manner of Shakespeare with biblical themes, including a reference to a rose, for his wife, who is overcoming cancer.

“It performed the task in five seconds and did it incredibly,” Oesch says. “It was mind-blowing. I thought, ‘If someone pawned that off as Shakespeare, you would believe it.’”

But on another occasion, Oesch encountered ChatGPT’s penchant for “hallucinating,” that is, fabricating answers and presenting them as real. Oesch asked for quotes about whiskey from C.S. Lewis, Mark Twain, and G.K. Chesterton. ChatGPT produced fake quotes, then repeatedly apologized for its misdeeds when Oesch confronted it. “It presents itself as true, and we trust it as true perhaps a little too quickly,” Oesch says. “It takes a learned eye to discern what’s being told to you.”

Dr. Jennifer Cosgrove, professor of psychology, has taught at Concordia since 1986. She began experimenting with ChatGPT earlier this year and has attended three webinars on the subject.

“It’s the biggest thing since the internet, and you really need to start thinking about it because it’s going to impact everybody, including the workplace,” she says. “Our students need to get up to speed on this. Teachers especially are finding all sorts of helpful uses.”

She says ChatGPT is excellent for brainstorming ideas, or for refining something you’ve written.

“You plug it in and say, ‘Edit this to sound more professional, like a 400-level university course paper, or make it conversational,’” she says. “Instantly, it will provide that. If you don’t like the way that looked, you click the button that says ‘Regenerate’ and it will modify it and keep modifying it.”

It also helps create lesson plans and in-class activities. “Instantly, it comes up with some really fun stuff,” Cosgrove says. “One time I said, ‘Give me five multiple choice questions at a 100-level university course on the topic of such-and-such.’ Boom, it came up with it. I looked through it and they were good questions, but as I told my students, I am an expert on this topic so I can know it’s good. If you’re not, you need to do the grunt work of double-checking it.”

Cosgrove believes fears of cheating and dumbing down the population are overblown, as they were when calculators and Wikipedia were first introduced.

“Do we really think kids have lost out in math because we have calculators? No,” she says.

Her main concern revolves around the quality of information ChatGPT yields based on the information available to it.

“Do you really want your source of knowledge to be what is popular?” she asks. “Are you careful to notice there can be bad actors or just lazy writers and researchers? It’s garbage in, garbage out. What is ChatGPT grabbing onto? What info is it sucking in and spitting back out? You need to have a really critical eye when you get information from these things because it can be biased.”

She, too, has experienced ChatGPT “hallucinating,” then apologizing. In one case, while she was researching an incident involving two famous psychologists, “It kept changing its story, but I knew what the story was,” she says. “It can do a lot of really good things, but you can’t think of it as a solid source because it can make stuff up.”

Kinnen, too, has experienced ChatGPT’s duplicity.

In early August 2023, he gave a presentation to Concordia’s faculty and staff on “Using ChatGPT and Generative AI to Maximize Learning and Transform Teaching.”

“I had noticed people over the summer were freaking out about AI and there was a lot of fear surrounding it being a tool for cheating,” he says of his motivation for giving the seminar. “I had used it quite a bit and was blown away by how it was helping me with teaching.”

Kinnen gave ChatGPT live prompts in front of a jam-packed room. In one prompt, he told ChatGPT to include a quote from his own dissertation. It did so correctly, but when he slightly modified the prompt, ChatGPT twice created “a fake quotation that sounded like my voice, but wasn’t in the dissertation,” Kinnen says. “I knew I hadn’t written it, but it sounded like me, which was so interesting. That’s when I started seeing what everybody calls ‘hallucinating.’ It’s outright lying and if you challenge it on it, it will [sometimes] double down and say, ‘I did get it from this source.’ That’s the freakiest thing, that when it makes mistakes it doesn’t always admit it. So, buyer beware. It can generate really good content, but you have to kick the tires and verify it.”

Kinnen uses generative AI to develop classroom exercises, create quizzes, even to come up with innovative ways to introduce new concepts and connect them to old ones in class.

“In my experience, for teachers it’s a fantastic tool to boost your creativity and effectiveness in prepping instruction and delivering content,” he says. “I think of using AI as leveraging a personal assistant to do work for you that is important but is often extremely time-intensive and therefore discouraging to embark on, such as coming up with brand new exercises, or developing introductions for topics and summaries for classes.”

Kinnen has created a library of “flexible prompts that are reusable and give fairly predictable outcomes,” which do in two or three minutes what previously took several days of work. For example, he described to AI his semester-long investment project for students and had it develop assignments that break the project into 10 or 12 weeks of requirements. He instructed AI to create email communications and Canvas announcements to send to his students over that time period—all of which ChatGPT created in minutes. Kinnen read and approved it all before using it.

He also requires students to use AI in an upper-level class on investment management.

“Students have to use it on everything from writing to analysis to creating their final versions,” he says. “They have to include an appendix that provides me with every prompt they used to generate the paper. Ultimately, the paper is a combination of what they’ve written and what GPT helped them write.”

His view is that, “If we take the position as faculty that this is a cheating tool and we need to forbid it, none of our students will be ready for the real world when they graduate. We need to teach them to leverage this tool so it can maximize what they are capable of delivering.”

Oesch and others wonder, “Where is humanity in all this, and how is humanity lifted up or suppressed?” He believes tools like this will “ultimately make life more convenient, but not richer.”

Cosgrove—who points to a host of ethical issues, including copyright infringement, biased programmers, and the excessive amount of water used for cooling the computers—also asks, “How much will people rely on it out of sheer laziness? Isn’t it important for first-year teachers in the School of Education to create their own lesson plans and have the skills to do that? Where is the dividing line between learning basic skills and taking the grind out of it?”

Ashmon believes the best approach is a balanced one, based on the setting and purpose for which generative AI might be used.

“If you’re in a freshman writing course you might forbid it because we want to develop you as a writer,” he says. “But if you’re in a data analytics class, of course you’re going to use it because we use all sorts of tools. We would be falling down in our job of preparing you if you didn’t know how to use it because employers expect it.”

He calls ChatGPT “transactional but not transformational,” and weighs the same critical questions other educators do.

“Does it diminish students’ ability to discover the best sources of information?” he asks. “What does it do to them as writers and interpreters if they’re not engaging with the source material? Are they losing out on developing themselves as insightful questioners and wise appliers of information? Are they short-changing their creativity and not cultivating it? There are a whole lot of questions I have depending on the purpose in front of you.”

Like the others, he counsels caution and curiosity.

“We don’t know what generative AI is going to become,” Kinnen admits. “We’re at the front end of it.”

For Tallman, it’s a matter of helping students “to understand what’s actually happening so we can learn to use it ourselves and are not caught up in the hype, which gives it more credit, more humanness and more intelligence than it has,” he says.
