This article is from the WLUFA Advocate, April 2016, 4.9.
Kari Brozowski, Community Health, Public Health and Health Administration
The validity and reliability of student evaluations of university teaching are perennial questions—especially as the evaluations are used as a component in deciding whether to promote or re-employ university professors. Because student evaluations are often developed by committees with no expertise in survey methods, it is difficult to trust their results. And, as a recent study at Lander University in South Carolina finds, students are frequently inattentive when filling out the evaluations (see Further Reading below). Another study, by Anne Boring et al., shows that student evaluations do not measure teaching effectiveness. Among other things, their study finds that student evaluations are biased against female instructors by an amount that is large and statistically significant. While student opinions of faculty teaching are helpful, institutionalizing them in formal career assessments is highly problematic.
Although most faculty members are not formally trained in bona fide teaching programs, we are well trained as researchers and as experts in our fields. As such, we should know better than to accept that student evaluations of faculty teaching can determine the future of our careers. But that is exactly what occurs when student evaluations are included in assessing tenure and promotion decisions or when they affect decisions to re-appoint a contract faculty member. Students—with no training in job evaluation or teaching—are granted the responsibility of possibly making or breaking a faculty member’s career.
This is particularly damaging for contract faculty, who depend on these evaluations for their next course hiring. Sarika Bose, a contract faculty member at the University of British Columbia, provided excellent insight into this issue at the Harry Crowe Foundation Academic Freedom Conference in Toronto on February 26-27. Bose argued that rating professors' performance on the basis of student evaluations assigns a market value to a contract faculty member's teaching skills.
This “commodification” of student evaluations creates problems for contract faculty, who are constantly walking a tightrope: there is pressure to design their courses to be edgy and current, but not so difficult or controversial that students might rate them negatively. Pressure to meet departmental expectations about evaluation benchmarks (which are not necessarily stated formally) can likewise lead contract faculty to choose safe topics and steer clear of statements that might offend or discomfort.
Untenured permanent faculty face similar pressures, but they do not have to apply for a teaching position every term. The threat of negative evaluations, meanwhile, places contract faculty members in a constant state of surveillance: departmental expectations about evaluation results effectively police the design of their syllabi and teaching methods. So how should we evaluate the teaching capabilities of university professors? The Centre for Teaching Innovation and Excellence here at Laurier will visit classes and provide feedback to faculty who request it, and some graduate programs incorporate courses on teacher training. Perhaps this should become standard practice, with faculty encouraged and trained to follow the methods endorsed by teacher training programs. Such supportive forms of evaluation are clearly preferable, and should in fact replace the current system, in which hiring and promotion decisions take student evaluations into consideration. This would end the constant surveillance of teaching over the course of a faculty member's career. Expert and award-winning teachers, perhaps from the Faculty of Education, could also run periodic evaluations of faculty teaching as part of a process of improving teaching at the university level. Only then would university professors be evaluated for career decisions in the same way as members of other professions.
Further Reading
• C. Havergal, “Course evaluation forms ‘not read properly by students,’” Times Higher Education, March 2016.
• A. Boring, K. Ottoboni and P.B. Stark, “Student Evaluations of Teaching (Mostly) Do Not Measure Teaching Effectiveness,” ScienceOpen Research, 2016 (DOI: