I was asked to participate in a faculty panel about AI in the classroom for the Gaylord College of Journalism and Mass Communication’s annual faculty retreat, which I was incredibly happy to do. I presented both my experiences in the classroom and some updates on conversations from around the institution. Much of what I shared has already been documented in a blog post here, so there will be a bit of redundancy. However, for the sake of archiving my current thoughts, here is a transcript of the presentation:
—
Today, the Associated Press updated its stylebook to include a chapter on generative AI, a little more than 250 days after the public release of OpenAI’s ChatGPT, a large language model chatbot. According to reports, the book states that:
- Generative AI tools cannot be used to create publishable content and images for the news service.
- They can be used to put together digests of stories in the works that are sent to subscribers.
Similar organizations such as Wired have said their writers can publish neither stories with text generated by AI nor stories with text edited by AI. They are permitted to use it to suggest headlines or text for short social media posts, to generate story ideas, and to experiment with AI as a research or analytical tool.
To be clear, though, none of this is etched in stone. Amanda Barrett, vice president of news standards and inclusion at AP, has stated that a committee exploring guidance on the topic meets monthly.
So it’s quite clear that the conversation will continue to evolve over the year, quite possibly at a speed that can’t keep up with the evolution of the technology itself.
There has been a question about when generative AI will become prevalent in college classrooms, and the answer is that it already is. According to a May 2023 survey of over 1,200 college students by Intelligent.com, 30% of students have used ChatGPT for homework over the last year, and 46% say that they use it frequently.
I can tell exactly when my students found out about it. ChatGPT was publicly released on November 30, 2022, and the next week we had the tool up and live in my Contemporary Problems in Advertising class, as ChatGPT indeed fits the definition of a “Contemporary Problem.”
We opened up OpenAI’s image generation tool and asked it to generate an image of Mark Zuckerberg behind the counter of a McDonald’s. And it wasn’t bad. (Note: OpenAI’s DALL·E now asks users to respect privacy and will not create images of public figures; this prompt likely could not be recreated today.)
We discussed in class how generative image tools could be applied by students: for example, to create quick visual storyboards for copy concepts being pitched to potential clients.
But the way I chose to approach the tool for the Spring semester was to bring it into the class rather than try to cover my eyes and plug my ears.
Early in the semester, I teach a lesson on becoming a digital fact-checker and evaluating online sources for arguments students are later asked to write and present. I illustrate this by having them investigate articles that argue both sides of the question of whether the federal government should raise the minimum wage.
Students read four sources: one is written on minimumwage.com and discusses how raising the wage would negatively impact fast-food restaurants, another is from the Seattle Times, another is on a crowdsourced platform called the Odyssey Online, and the last is a student essay formatted the way I ask students to format their own work for class (which, unbeknownst to them, I wrote entirely using ChatGPT).
My goal is to teach them to approach every source with a healthy level of skepticism and to take none of them at face value. If they dig enough (which isn’t a ton!), they’ll find that minimumwage.com is funded by a conservative think tank that is an arm of a public affairs firm lobbying for the restaurant, hotel, alcoholic beverage, and tobacco industries. The Odyssey Online piece is written by a college student who, coincidentally, studied advertising, and who authored the essay without a traditional editor. And all of the citations in the ChatGPT essay are made up, because that’s what ChatGPT does when you tell it to include citations.
I tell my students that once you’ve identified who or what is behind an article, you know the information is inherently suspect. Every source brings a perspective, whether it is a conservative think tank, an undergraduate student, or a large language model. Even so, three of the four sources fall short of the level of sourcing we expect in institutions of higher learning.
I reflected on this on my blog, and at the time, I wrote that my students weren’t even mildly impressed with the essay ChatGPT wrote. They evaluated it as the worst of the four sources they read.
So we looked at how ChatGPT could be valuable in crafting an argument for or against raising the federal minimum wage.
Perhaps they could ask ChatGPT, “What are some questions I should explore if I’m interested in learning more about this topic?” When I tried this, it gave me ten questions to consider about the impact of raising the federal minimum wage on the food industry.
Or we could ask it to list arguments on both sides of the debate. And, very beautifully, it gave us four arguments for the affirmative and five arguments for the negative.
Or they could even have it draft an essay outline. And it showed students that they would need an introduction with background and a thesis statement. Paragraph one could look at the impact on food prices, paragraph two could evaluate the impact on employment, etc., etc.
Now the students had a framework from which they could build their essay, which I see as a great value to students, particularly those who struggle with the two hardest parts of writing: starting and stopping.
Later in the semester, students were required to hold a debate on whether AI would have a positive or negative impact on jobs in the advertising industry. Several of the students used generative AI as a research tool, constructing prompts to draw out answers from the AI.
If you ask ChatGPT what it thinks, it will tell you that the issue is both complex and highly debated. It will list reasons why AI will more than likely lead to job loss (automation, efficiency gains, cost reduction), but it will also tell you why widespread job loss is less likely (human creativity, new job opportunities, complex decision-making tasks). For the students, this is an excellent jumping-off point for understanding the complexity of the issue as well as the arguments that underpin both sides of the debate.
Of course, as the Director of the Office of Digital Learning, I’ve thought deeply about this impact across the entire institution. I’ve had multiple conversations with the Office of Academic Integrity and participated last semester in an ad hoc committee led by the Center for Faculty Excellence.
In my opinion, the biggest question isn’t whether we should use generative AI tools but how, and to what extent. What level of generative AI use are we comfortable with?
This is a difficult question to answer and will vary by discipline and likely by faculty member.
For example, a faculty member in computer science argued that teaching someone how to code by hand is both futile and antiquated. For years, coders have relied on publicly available code libraries as the basis for application development. Unlike our field, theirs is more concerned with code being standardized, precise, and functional than with it being original. Open source, the voluntary sharing of work, is valued.
A user can now say, “Help me visualize this data as a web application using JavaScript,” and ChatGPT, along with many other tools, will generate the code. Or you can say, “Check my code to ensure that it will work,” and it can. According to this faculty member, it is going to fundamentally change the skills necessary to work with code in the future.
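To make that concrete, here is a minimal sketch of the kind of code such a prompt might return, assuming a small, hypothetical dataset; the values and styling are illustrative, not output from any particular tool. Saved as an .html file and opened in a browser, it draws a simple bar chart with plain JavaScript and SVG:

```html
<!DOCTYPE html>
<html>
<body>
  <svg id="chart" width="440" height="220"></svg>
  <script>
    // Hypothetical sample data; a real prompt would supply the user's own numbers.
    const data = [
      { label: "Q1", value: 40 },
      { label: "Q2", value: 65 },
      { label: "Q3", value: 30 },
      { label: "Q4", value: 80 },
    ];

    const SVG_NS = "http://www.w3.org/2000/svg";
    const svg = document.getElementById("chart");
    const max = Math.max(...data.map(d => d.value));
    const barWidth = 440 / data.length;

    data.forEach((d, i) => {
      // Scale each bar's height against the largest value.
      const h = (d.value / max) * 170;
      const bar = document.createElementNS(SVG_NS, "rect");
      bar.setAttribute("x", i * barWidth + 10);
      bar.setAttribute("y", 190 - h);
      bar.setAttribute("width", barWidth - 20);
      bar.setAttribute("height", h);
      bar.setAttribute("fill", "steelblue");
      svg.appendChild(bar);

      // Label each bar beneath the chart area.
      const label = document.createElementNS(SVG_NS, "text");
      label.setAttribute("x", i * barWidth + barWidth / 2);
      label.setAttribute("y", 210);
      label.setAttribute("text-anchor", "middle");
      label.textContent = d.label;
      svg.appendChild(label);
    });
  </script>
</body>
</html>
```

Whether code like this is good enough out of the box is exactly the kind of judgment that will still require skill, which is the colleague’s point about what we should be teaching.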
Given the spectrum of ways in which these tools can be used and the speed at which they are developing, it’s incredibly difficult to set policy around their use in all courses.
The front page of the OU Academic Integrity website now includes the following language:
> All academic work submitted by a student should be the product of the student’s own understanding and effort. Unless specifically permitted by the professor and clearly indicated by the student through proper attribution, it is cheating to submit any academic work that originates from another source.
>
> From sharing notes to the emergence of the internet, generative AI tools, such as ChatGPT, join a long list of technological advancements that have significantly impacted education. When used properly, these resources can aid in a student’s understanding of a particular topic and can be celebrated for their contributions to advancing knowledge. When used improperly, any resource can undercut the purpose and value of academic work.
The truth is, though, that it will be very difficult to prosecute the use of generative AI unless a student admits to it. There are tools that promise to detect generative AI text, but reliable detection is impossible; you can only detect that text is written in a way similar to how a bot might write. The tools have been shown to produce false positives, and users have already identified workarounds that obfuscate text from detectors by reverse engineering them.
My challenge to faculty would first be to not fear it. Rather, find ways to integrate discussion about generative AI into each of our areas. In classes where it is suitable, require students to use generative AI, have them evaluate its work, and talk about ethical ways in which it can be used. It is a tool in the same way a calculator is a tool. And our graduates will use it, as will we, as a productivity tool: social media content generation, data analysis, editing and proofreading, language translation, visualizations, trend analysis. The art of “prompt engineering” is quickly entering our collective lexicon.
Finally, prepare for how you would like to handle a conversation with a student you suspect of using generative AI. Because if your class involves writing, which is the case for most of our classes, you will encounter it this semester if you haven’t already. My advice: if you suspect it is being used, talk to the student about how they used it, and learn from it before you pass judgment. This is a learning moment for all of us.
The AP has already telegraphed that how we use this tool is likely going to change, and change rapidly. Therefore, we also need to recognize that what we may or may not allow as a use case in classrooms will evolve over the coming months.
Featured image: “Realistic photo of someone with a 90s computer monitor for a head.” Created with DALL·E, an AI system by OpenAI.