Despite the relative newness of generative AI tools, surveys conducted by the Digital Education Council, Chegg, and TechTrends found that more than 80% of students have already built AI into their learning lives. According to the survey results, students are using tools like ChatGPT, Grammarly, and Copilot primarily to search for information, check grammar and improve their writing, summarize and paraphrase documents, create first drafts, explain complex concepts, and suggest research projects. While these tools offer students several benefits, students remain skeptical of AI’s accuracy. Chegg’s survey found that 53% of students were concerned about content reliability and the ethical implications of using the tools. They were also worried about data privacy and whether using AI tools diminishes their ability to think critically. As we all continue to navigate AI’s uses in higher ed, having a plan for AI use and talking candidly with students about your expectations and theirs may reduce some of the uncertainty everyone has about when and where to use AI.
Detection
While AI detection tools have proliferated nearly as quickly as AI tools themselves, there is little evidence that they are reliable. Several studies have shown that detection tools flag human-generated work as AI-generated – and vice versa – too often for the tools to be considered foolproof. In fact, OpenAI’s own AI detection tool was taken offline because of its low accuracy. Without a reliable detection tool, what can faculty do to identify AI-generated work? Most of the recommendations are similar to what we’ve always done:
- Look for accurate citations.
- Ensure the facts, dates, and people are real.
- Check that the tone and vocabulary are appropriate for the course.
- Check for personal or class-specific details.
- Rely on your expertise and course knowledge to determine authenticity.
Dialogue
Experts recommend that faculty set clear expectations about how students can – and cannot – use AI in their courses and talk to students about why they’ve chosen those policies. Open conversations support transparency and academic honesty, which are essential to reinforce as new technologies emerge. To do this, consider:
- Holding class discussions that invite students to ask questions and share their perspectives about AI tools. Additionally, invite students to lead conversations on ethical uses of AI in the discipline and shape course policy when possible.
- Explaining the rationale behind your AI policies so students understand that the goal is to facilitate intentional and guided learning experiences that cultivate deep and enduring understanding, rather than merely to ensure compliance.
- Discussing how to cite AI tools, if students are allowed to use them (log into SpartanHub and view “Instructor’s Guide to Generative Artificial Intelligence (AI) in the Classroom” to see course- and assignment-relevant policies and citation guides).
Innovation
AI tools are, for now, here to stay, and we may benefit from shifting our conversations – and mindsets – from a “tool restriction” approach to a “guided exploration” approach. The more we explore the promise and limitations of AI together, the better we will prepare our students to make ethical and intentional disciplinary decisions. Consider these suggestions:
- Frame AI tools as thinking partners, not shortcuts.
- Analyze AI as part of the discipline itself – what is AI’s role in creative writing, journalism, psychology, computer science, education, nursing, business?
- Design AI-inclusive assignments.
  - Compare AI-generated drafts with student revisions.
  - Invite students to critique and enhance AI responses.
  - Fact-check AI summaries or explanations.
  - Analyze the output of different AI tools from a disciplinary perspective.
  - Evaluate the quality of AI-generated study guides, flashcards, or outlines.
- Design AI-restrictive assignments.
  - Invite students to relate course content to very recent events that AI’s training data may not yet cover.
  - Have students research local businesses, organizations, or community groups aligned with course goals, or examine a campus issue on which AI cannot offer specific insights.
  - Annotate sources and include a connection to course content.
  - Focus on process, not product.
- Reevaluate assessment practices to emphasize growth and understanding rather than perfection.
  - If students believe their first submission has to be flawless, turning to AI becomes an easy alternative.
  - When assessment rewards productive struggle, engagement, and measurable progress, students may feel greater intrinsic motivation to persist and take ownership of their learning.
- Talk to your colleagues and learn about their experiences using AI.
As AI continues to evolve, the conversation in higher education must move beyond detection and toward dialogue and innovation. We can engage students in open and transparent discussions about AI’s role, invite experimentation within ethical boundaries, and design assessments that reward growth rather than perfection. If we view AI as a catalyst for curiosity and critical thinking, we can help students gain proficiency with new tools, develop a deeper understanding of their own learning processes, and prepare for an uncharted, AI-enhanced future.