By fall semester, Forbes, Reddit, the NYT, and Berkeley Engineering were among the legions describing 2023 as the soft launch of “the AI revolution”. CAIFS (Center for Academic Innovation and Faculty Support) also anticipated the permanence of AI and kicked off the fall semester with faculty guidance and discussion on the emerging multitude of AI tools. Particular attention was given to how to use AI to streamline, standardize, and improve aspects of our own practices. (See the TIP blog from September 7 for more information about ways faculty can use AI.)
While most of the guidance, from the White House to Digital Promise, strongly advocates for institutions to formalize AI policies, many institutions are still exploring the functionalities and pitfalls of AI before settling on anything definitive. As part of this exploratory period, some professors have already started showing students the strengths and limitations of AI. But what does that look like in practice? And to what extent should faculty introduce AI tools’ benefits and drawbacks? Over the summer, USC Upstate’s Colby King began experimenting with ways to bring AI into his fall classes. Dr. King has previously published teaching resources supporting information literacy (see publication here), and was a 2020 recipient of the USC Upstate Library Faculty/Staff Award for the Promotion and Integration of Information Literacy.
Check out Dr. King’s assignment sequence, which invites students to learn about AI’s uses and limitations in sociological contexts and to reflect on their own experiences using AI.
• First, he has them read a short excerpt of Burke’s (1967) book The Philosophy of Literary Form, in which Burke uses the metaphor of “The Parlor” to describe the ongoing nature of academic writing and scholarly discussion. Importantly, it focuses on how academic ideas build on and refine each other. Dr. King had used this reading in writing-intensive classes previously and adapted it here, having students write a brief reading reflection on it. (Burke, Kenneth. The Philosophy of Literary Form. Baton Rouge: Louisiana State UP, 1967. 110-111.)
• Then, he has them listen to a podcast and read a short essay, both of which are quite critical of generative AI (at least one of them calls it “theft”).
• Then, he has them do an assignment where they use an AI system of their choosing to generate an essay on the sociological imagination, and then they grade it and reflect on what it did (or did not) do well.
• Then he has the students participate in a discussion thread to share and reflect on these activities.
In Dr. King’s assignment, which uses the episode “Why AI is a Threat to Artists” from the podcast “Tech Won’t Save Us” along with the short essay “ChatGPT is not ‘artificial intelligence.’ It’s theft.” by Jim McDermott, students are tasked with responding to a series of open-ended questions to engage with the major themes from the assigned content:
• Can you summarize some arguments FOR seeing generative AI as theft? How about some arguments AGAINST seeing generative AI as theft?
• Drawing on previous course material, including discussions on research methodology and how sociologists “know” what they know – why is referencing and providing credit so important for academia to function well for all participants?
• Given that referencing and credit are so important, how does thinking of the creations of LLMs (large language models) as “theft” challenge thinking about these tools and help to reframe understanding of their use in academia?
• What questions do you have about all this for your peers or your instructor?
Using open-ended questions like these, which make use of neutral language, invites students to expand their thinking about the content, to articulate their own understanding of complex social and technical phenomena, and to build their own bodies of knowledge.
Cornell’s Center for Teaching Innovation suggests that strong open-ended questions like these can be arrived at when we put our questions through filters like:
• Does the question draw out and work with pre-existing understandings that students bring with them?
• Does this question raise the visibility of key concepts the students are learning?
• Will this question stimulate peer discussion?
• Is it clear what the question is about?
After students have encountered some of the strengths and weaknesses of AI, combined with any of their own experiences using AI, Dr. King’s writing assignment has students use an AI tool of their choice to generate a 500-word essay with references; assess the essay; and then reflect on their experience completing the assignment. The essay’s assigned topic is the sociological imagination, and the writing should make use of a formal style.
The assignment then goes on to have students evaluate the essay by answering questions like:
• Were the references provided in the essay referring to real sociologists and publications? You should check and confirm.
• Was the essay unique? Did it have a particular point of view, or was it generic?
• Did the essay provide you or any other reader with new insights or new clarity on the subject?
And the third and final part of the assignment is to answer reflection questions like:
• How do you feel about the essay generated by the LLM in general?
• How do you feel about the accuracy of the essay?
• What do you think of the ethics of using tools like this for writing? Are there good or bad ways to use these sorts of tools?
• How do you think professors/universities should deal with LLM writing tools in academic integrity policies?
Among the strengths of Dr. King’s assignment is that it creates a strong sense of student autonomy. Many students feel panic when tasked with writing an assigned essay on a theoretical topic, and they often struggle to begin, especially if they feel like they are not strong writers. This series of engagements, however, gives students entry points into a complex course concept and then uses guiding questions to have them evaluate the product (the 500-word AI-generated essay), reflect on what it means in the 21st century to have access to these types of tools, and lend their learning to the creation of AI academic policies. In other words, the assignment has students engage in the types of complex thinking and critical analysis a paper requires without making them write the paper. Brilliant. Beyond creating a generative learning experience for students, the assignment also empowers students by encouraging them to form their own evaluations of these tools and inviting them to contribute their perspectives to discussions about ethics and academic integrity around the use of these tools in academia. This elevates students’ sense of belonging and helps them understand that they have valuable insights and that they can and do make contributions to the academic context they’re a part of.
Though we’d be very interested to hear what kinds of policies the students came up with in Dr. King’s class, some examples of institutional policies can be viewed on the Eberly Center’s website. Are you using AI in your course this semester or next? If so, CAIFS would love to hear about it.