As the semester closes out and we move into summer, I like to set aside some time to wade into the student learning evaluations for my classes. Despite my efforts to make student learning meaningful and enduring each semester, I still find myself filled with a certain familiar sense of angst as I click through the reports and dig in. I wonder if students got what they needed from my courses, if I could have done more or taught differently, or if they had big moments of discovery and affirmation.
In the last decade, I’ve discovered that students are often incredibly astute at letting us know if the coursework was challenging, if they enjoyed certain assignments, or if they felt engaged during particular content explorations. They have a keen sense of what excites and bores them, and they can often share excellent suggestions for making an assignment or experience more accessible for their individual needs. Taken together, this is very useful information, because I genuinely want students to enjoy my classes; I want them to sign up for my courses again and tell their friends, too. There was a time when I didn’t see the learning evaluation survey data as success-oriented feedback, but once I discovered the trick for categorizing and analyzing the information, I had a stronger sense of my strengths and areas for growth, and that ultimately made me a better instructor. So here’s one approach to getting the most out of your evaluation data.
Read the qualitative data first. In the students’ own words, it gives us the good, the bad, and the ugly right up front. Once you’ve skimmed it, consider categorizing the feedback the same way you might categorize Amazon review data when you’re deciding whether to buy a product. Amazon reviews usually fall into two broad areas (I also learned this analogy with…my students while writing with Kelly Gallagher’s guidance):
A) useful information directly related to the item (i.e., if the item is a pair of shoes, the reviews may indicate: “true to size”; “good ankle support”; “too narrow; need the next size up”; “perfect comfort for my 6-hour hike at the Grand Canyon! See pics.”)
B) information unrelated to the item (i.e., for the same pair of shoes, the reviews may indicate: “it took 2 days longer to get here”; “the box was busted”; “my dog ate the box”; “I love my delivery guy. See pics.”)
Similarly, there are usually two broad areas of qualitative data collected in student evaluations of teaching:
A) useful information directly related to teaching (i.e., “the assignment in week 5 would have helped me more if I had completed it in week 3. I suggest moving it.”; “I used the materials in week 7 during my teaching observation and aced it! thanks!”; “I needed more time for the final assessment, and we didn’t even cover half of what was on there. Can you move it to week 12 instead of week 14?”; “I would have liked another week of content Y”; “loved the textbook!”; “Some of your lectures were too long. You gotta do something else.”; “Thanks for the paper assignment and the rounds of feedback; it got accepted in X journal!”; “You required too much reading. Seriously.”)
B) information unrelated to teaching (i.e., “I’m a coach and I didn’t have enough time to do the assignments in my other classes”; “I couldn’t reschedule my vacation, so I missed 2 weeks and had a hard time catching up”; “the parking on campus sucks”; “my home internet was too slow.”)
The unrelated information may be useful in other contexts, like advising, campus accessibility surveys, or technology support for students learning off campus. Some of these issues may be fixed once students are connected to the resources and tools available on campus, but this type of feedback doesn’t tell me much about my teaching. Looking at the useful-information category, however, gives me strong points to reflect on and to write about in my annual reviews as well: “I learned from students that…”; “I adjusted my policy on… because…”; “Next time I teach the course, I want to include…”; “Students really liked X, so I definitely want to keep it and try more…”; “Students generally thought I did not spend enough time on Y, so I want to build that out some more next time…”
Using student responses to craft specific improvement or change narratives shows that we’re open to learning, that we can analyze student data with the same attention we give to analyzing our research data, and that we see data as a tool for progress. If we are excelling in our teaching, we can still benefit from professional development (students change, just like we do). If we are struggling, despite all our passion and enthusiasm, we can benefit from professional development. Professional development, like the programming offered through CAIFS, the insights we gain from sitting in on a colleague’s class, or the knowledge sharing we experience at state, regional, and national conferences, is meant to guide faculty toward rewarding, enriching endeavors that improve our pedagogy and enhance the learning experiences of our students. And those efforts reward, and even excite, everyone.
I say all of this to say that my students made me a better instructor when they took the time to complete the qualitative portion of the evaluations and I took the time to act on their feedback. The quantitative data was useful, too, but I learned the most about student learning and their sense of belonging through the qualitative responses they shared. Sometimes the feedback was just “for your information”; other times, it required only small shifts that were easily accomplished. There were also a few times when I had to rework an entire course because it was outdated, uninspiring, or disorganized. Choosing to respond is essentially what it means to be a lifelong learner, to value continuous improvement, and to view our students as our thought partners as we navigate higher ed together.