This semester, one of the classes I am teaching is Contemporary Problems in Advertising, an upper-division seminar for Advertising and Marketing undergraduates. The course focuses on the impact advertising has on both enduring issues in society (vulnerable populations, harmful products, politics, stereotypes) and emerging issues such as user privacy on social media and new media like the metaverse. To make sure the course continually lives up to the “contemporary” adjective in its title, I’ve inserted multiple discussions about artificial intelligence.
The meat of the course, the “lectures” and discussions, is student-driven and student-led. As such, my role is to serve more as a guide than a sage. The major project of the course involves students being assigned one of the issue areas and effectively teaching that topic to the class via a paper and presentation. They are asked to present a specific side of the issue and are provided ahead of time with a prompt to guide their research. For example, a student might be assigned the following:
Athletes and other relevant fitness influencers should/should not promote alcohol and unhealthy foods.
In this example, students must argue either that athletes should or should not promote alcohol and unhealthy foods and build a defense for their position. On most days, the rest of the class watches multiple presentations on the same subject. While students are not required to pick opposite sides, it always seems to make for an interesting class when they independently find themselves on opposing sides. One student might argue that athletes should, in order to increase brand recognition, target a specific demographic, or benefit financially through sponsorship deals. Another might cite health concerns, the responsibility of role models, the impact on youth, and ethical considerations as reasons for pause.
The first month of class is spent talking less about advertising and more about constructing informed arguments. Topics we hit include ethics, critical thinking, and evaluating the trustworthiness of articles.
Since students often rely heavily on the internet for sources, I devote an entire day to teaching them the concept of lateral reading. Lateral reading, a method that differs from conventional information literacy techniques, teaches students to act as fact-checkers: rather than spending a great deal of effort on the article itself, they leave the website and investigate the source to quickly gauge its trustworthiness. I became familiar with this method through the work of Mike Caulfield, author of Web Literacy for Student Fact Checkers. Lateral reading has also been articulated and developed quite extensively by the Stanford History Education Group (SHEG).
I have been using curriculum from SHEG to teach lateral reading for the past few semesters. For this assignment, students work in teams to examine three articles on the question of raising the minimum wage and its potential impact on the food industry: 1) an article from minimumwage.com, written by an industry-funded policy group with potential conflicts of interest; 2) an article from Odyssey Online, a crowdsourced publishing platform geared toward a college-aged audience; and 3) an article written by an independent journalist for the Seattle Times. After each article, students rank it from 1 (lowest) to 5 (highest) on how reliable they found it. My goal is not to fully eliminate or accept any of these sources but, rather, to help students build an appropriate amount of skepticism about any source. As SHEG says, we are simply asking, “Who is behind the information?”
This semester I added a fourth example: an AI-authored paper. I gave students access to a paper they were to assume was written by a student at OU. In reality, I generated the essay via ChatGPT using the following prompt:
Write a 1,000 word essay including citations that makes the following argument: Raising the federal minimum wage in the United States would not lead to higher prices for food and fewer job opportunities in the food service industry.
ChatGPT provided me with this response:
Students were given the exact text above, formatted in Microsoft Word to look like actual student work, obscuring the fact that it came from ChatGPT.
Of the four articles examined, the ChatGPT essay was rated the lowest by the students. Some noted that the piece simply felt too much like a rough draft, while others said they struggled to find the sources that were cited.
I explained to the students that this is because the sources don’t exist. ChatGPT is a large language model and doesn’t cite actual sources; instead, it generates plausible-sounding citations based on patterns in the texts it was trained on.
For example, one of the references ChatGPT provided is to a UC Berkeley paper. While there are indeed economists from UC Berkeley who have extensively researched minimum wage, the 2018 paper cited was never published.
We are in the age of think pieces (including this one, I guess) on the potential impact, or lack thereof, of tools like ChatGPT. Some arguments go so far as to say that all students will use it as a mechanism to cheat and that assessment will need to be changed FOREVER.
My recommendation for instructors would be to, first, take a breath, and then address the issue directly with students. In my case, I’m not even sure my students were mildly impressed with the tool. It felt as though my students were quick to see the limitations of a pure AI-driven writing tool.
I explained to students that my advice today would be to be incredibly careful with these technologies given that institutions are still trying to decide how to address tools such as ChatGPT. I told them that if they wish to use the tool for class, they should speak with their instructor first as there will likely be varied approaches taken by individuals.
As someone with an advanced degree in Learning Technology, I try to be pragmatic in approaching new technologies, trying to see the limitations as well as the inherent value. Personally, I do think there are many opportunities for ChatGPT in education beyond the ways it has been vilified. For example, I have really enjoyed reading ideas from folks like David Wiley and George Veletsianos on how it could impact instructional design.
For my class, I tried to model ways ChatGPT could actually provide value for the assignment, demonstrating three examples of using it as an improved search engine.
My first idea was to ask ChatGPT to generate some questions to explore if I were studying the topic of minimum wage and its impact on the food industry.
One piece of advice I give students is to begin attacking an argument by asking what questions are important to that specific topic and argument. In this list, I like how ChatGPT’s second question points students toward making a historical argument by assessing past minimum wage increases. I also like how, on the whole, the questions start guiding students toward specific areas a change would impact: worker morale and productivity, and consumer spending and economic growth. Another question I like is #10. I like to ask students to attempt to give some kind of steps forward if they are introducing a social change, and Question #10 speaks to policy.
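As a side note for instructors who want to reuse prompts like this at scale (say, generating question sets for several topics at once), the same request can be sent programmatically through OpenAI’s API rather than the chat interface. This is a minimal sketch, not what I did in class: it assumes the official `openai` Python package (v1+), an `OPENAI_API_KEY` environment variable, and a model name that may need updating over time.

```python
import os


def build_question_prompt(topic: str) -> list[dict]:
    """Build a chat-message payload asking for research questions on a topic."""
    return [
        {
            "role": "user",
            "content": (
                f"Generate 10 questions a student should explore "
                f"when studying the topic of {topic}."
            ),
        }
    ]


def generate_questions(topic: str) -> str:
    """Send the prompt to OpenAI's chat API and return the model's reply."""
    # Requires the `openai` package and an OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model name; swap in whatever is current
        messages=build_question_prompt(topic),
    )
    return response.choices[0].message.content


# Example usage (makes a live API call):
# print(generate_questions("minimum wage and its impact on the food industry"))
```

The payload-building step is separated out so the prompt wording can be tweaked or reviewed without touching the API call.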
Understanding the sides of an issue
The next idea was to list some arguments on both sides of the issue.
I really like this response from ChatGPT because it succinctly hits on specifics that are likely to be addressed if you are arguing for or against the claim. Students wouldn’t have to stick to these, but they are excellent jumping-off points as arguments to interrogate.
And the third was to generate an essay outline.
Now, this is the prompt where I might get the most pushback, but I thought it might be helpful to allow ChatGPT to guide the essay via an outline. My thoughts here are that students often struggle with two points of essay writing: starting and stopping. This gives them at least some guidance on how to launch/land the plane.
I will say that I’m least impressed with its output here. ChatGPT appears to rely too heavily on the three-paragraph structure, which doesn’t quite fit the depth of research I hope students will bring to their topic. It also feels a bit too prescriptive in what it suggests the student write. As such, I asked it to regenerate its response.
In my opinion, this is better as it gives a more general guide with some high-level talking points and areas to explore.
Just yesterday, OpenAI released a tool that attempts to identify AI-generated text. A natural progression for technology companies: monetize both the technology and the policing of said technology. I imagine higher ed will learn a lot during 2023 while ChatGPT is, for the moment, freely accessible and the belle of the edtech ball. My hope is that rather than trying to just police it through new versions of tools like TurnItIn, or, worse, ban it as NYC public schools did, we will simply have conversations with students about human-augmented artificial intelligence.