Making sense of course evaluations

Facebook has clued me in to some problems with the tried-and-true method of evaluating college teachers: course evaluations.  I’ve definitely encountered or perceived some of these problems myself.  A student who is unhappy, most likely because they are doing poorly, gives consistently poor ratings.  You happen to hand out course evaluations on a day when several students are absent.  You happen to have a small class, so the handful of responses makes for a small sample size.  The teacher may know the students’ handwriting, so anonymity is dubious (especially in introductory language classes).  The questions are often standardized and may not apply to Latin or Classics courses as well as they do to Biology or English courses.

And then how do we actually interpret some of the responses? Can you easily compare your scores to the scores of other teachers? Is it good that your students didn’t feel challenged? Perhaps they weren’t challenged because you taught them the right skills to succeed and they learned well.  Or perhaps you didn’t teach them those skills, and either the class was simply too easy or the students were so overwhelmed that they didn’t learn anything.  Or some other option.  And in which classes do we really want to challenge our students: all of them? intro level? upper level?

As Facebook let me know, there’s been some nice research about these course evals.  Here are some of the findings:

  • Male professors score higher than female professors.
  • Students do not always know what is best for their learning.  If students are earning a high grade in their current class, they will rate their teacher highly but may do poorly in subsequent classes.  The article argues that students underestimate the value of effort and curiosity to learning and overestimate the value of innate ability.  Similarly, students are not good at judging which tasks will help them learn more: they prefer the easier, less effort-demanding tasks to the harder, more demanding tasks that are more effective teaching tools.  So in terms of evaluations, a student who believes they are very able but is doing poorly may blame the teacher rather than investing more effort.  Any question about how effective the teacher or the homework has been could be useless because students are not well positioned to judge these things.
  • Another article echoed the point about students rating their teachers highly based on their high grades and then possibly doing poorly in subsequent classes.
  • A statistician’s academic article lists several things that affect (or correlate with) course evaluation scores: students’ grade expectations, students’ enjoyment of the class, and the instructor’s physical attractiveness, gender, ethnicity, and age.  Dr. Philip Stark argues that the evaluations are good for measuring a class’s pace, the teacher’s clarity, and the teacher’s effort.  He rejects the idea that course evaluations measure teachers’ effectiveness.

Ways to use or improve the current system to help us now

Regardless of these issues, we are probably going to have to live with course evaluations for a bit longer.  So, beyond advocating against course evaluations as they are currently used, how do we use this knowledge to make the current system work better for us as teachers?  Here are some of my ideas and some that I have seen elsewhere:

  • Throughout the course, make students aware of how their biases about gender, ethnicity, age, etc. could affect their opinions and expectations of people in various roles, including you as a teacher (or our opinions and expectations of them as students).

When distributing course evaluations,

  • Ask students to fill out both the standard university (and/or department) course evaluation and another department- and/or course-specific evaluation that addresses things you are more curious about, or things that are more appropriate for your type of class.
  • Ask them how much effort they put into your class.  Either ask them to compare it to their other classes or ask for a concrete number of hours per week they invested.  Since effort correlates with learning, this question may help you interpret the responses to the other questions.
  • Ask students for specifics.  Ask them what they enjoyed and why.  This may help you learn whether an activity encourages the engagement, motivation, and curiosity that will help students learn, or whether it was simply fun.
  • Ask students about things that you’re curious about, are considering changing, or are having a hard time judging.  I’ve asked about how long it takes them to do their homework and about my YouTube videos.  (See my earlier post about mid-term course evals for more thoughts on the value of these specific questions.)
  • If you are asking your own questions, can you make them more objective and factual rather than subjective?
  • Emphasize why the evaluations matter and how you will use them so that students put more thought into them.  This semester, I even mentioned the day before that we would do course evaluations so that students’ minds were primed to reflect on the whole course.
  • For more anonymity, do the evaluations online or have students type their responses before turning them in.  Online evaluations may get a low response rate, so some teachers offer extra credit to students who fill out the evaluation.  Do you believe this is ethical, a good idea, or a proper use of extra credit?

When reporting the data to hiring or tenure committees,

  • Include the total number of students in the class and the total number of course evaluations returned.  Hopefully, this will allow readers to see when the sample size was small.
  • Include information from another teacher observing your class.  This observer would be a better judge of the innovation in your teaching design and of whether you are presenting up-to-date material.  You can also discuss good pedagogical methods and ways to improve with this observer.
  • Include a teaching video so that the committee can judge for themselves.

When interpreting the data or planning your next class,

  • View your data en masse.  Average the numerical rankings (but watch for questions where responses were polarized, since an average hides that split; see the sketch after this list) and group all the written responses to each question together so that you can think about them at the same time.  The comments may actually be more helpful and reassuring than the numbers.
  • Consider how self-aware or skilled your students are.  One article said, “Classes full of highly skilled students do give highly skilled teachers high marks. Perhaps the smartest kids do see the benefit of being pushed.”  If you believe your students fit this bill or are more self-aware, the evaluations will be more helpful.
  • Consider how appropriate the evaluation format was to your class.  Should you provide your own, additional evaluation form in the future?
  • Based on the articles above, which questions on the course evaluation are more useful?  So yes to “Was this instructor clear?” and no to “How effective was the teacher?”
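To make “view your data en masse” a little more concrete, here is a minimal sketch of what I mean by averaging the rankings while watching for polarized questions.  The question wording, the scores, and the spread cutoff below are made-up examples, not data or methods from the articles above; the standard deviation is just one simple way to notice when a question split your students.

```python
# A hypothetical set of per-question ratings on a 1-5 scale; replace with your own numbers.
from statistics import mean, stdev

ratings = {
    "The instructor was clear": [5, 5, 4, 5, 4, 5],
    "The pace of the course was appropriate": [5, 1, 5, 2, 1, 5],  # a polarized question
    "The homework helped me learn": [4, 4, 3, 4, 5, 4],
}

for question, scores in ratings.items():
    avg = mean(scores)                                  # the usual average
    spread = stdev(scores) if len(scores) > 1 else 0.0  # how spread out the responses are
    note = "  <- polarized? read the individual responses" if spread > 1.5 else ""
    print(f"{question}: n={len(scores)}, average={avg:.2f}, spread={spread:.2f}{note}")
```

A question with a large spread is worth reading response by response rather than trusting its average.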

How else do you think we can make better use of course evaluations?
