Earlier this week, Slate’s Rebecca Schuman confessed to inflating her students’ grades because it’s not worth the effort to grade fairly. While tenured faculty might have the security to evaluate students as they see fit, and even give out Cs, Ds, and Fs, part-time faculty do not have that luxury. Grades are consequently inflated. Schuman writes:
I know there are professors out there who delight in being a student’s first earned C, but those professors have more intestinal fortitude than I do.
Or at any rate they are probably not adjuncts, whose popularity is the only thing that can keep them employed. Although exceptions exist, the trend in U.S. higher ed at the moment is precarious faculty, hired semester to semester or at best year to year, and rehired based almost solely on student evaluations—which, alas, are themselves often based on how “well” the student is doing in class.
I can relate because, as a part-time instructor, it’s likely that a department rehiring me largely depends on how favorably students evaluate me, which largely depends on how favorably I evaluate them. (Seriously, I can’t remember the last time I was reviewed by a peer.) Surely, there are other considerations, right? I have a long track record, I can teach a lot of different subjects, I am flexible in terms of scheduling (I teach a lot at 8:30 AM and 6:30 PM), and I work cheap. But I wonder exactly how much student evaluations factor into getting rehired each semester. Can I really ask my department chair?
Another factor in inflating grades is the switch away from completing the evaluations in class to doing them asynchronously through the web. There were significant disadvantages to the old synchronous, paper-based system. First, absent students, who might be missing class for a legitimate reason, wouldn’t have the opportunity to evaluate me or the course. Second, it took time out of class, so students tended to write very little in order to get out earlier, especially if the instructor waited until the last minute to do the evaluations. Another issue was that it required a lot of mind-numbing data entry to process these evaluations, and administrative staff have enough of that to do. I’m fine with getting rid of the old system.
With the new system, a lot of these problems are solved, but asynchronous evaluations introduce new issues. First, some students will simply not do the evaluations because they are preoccupied with other matters, such as papers, exams, and other aspects of undergraduate life. Second, some evaluation systems don’t purge students who have withdrawn. That means that students who have not been in class get to evaluate the course. How can students who haven’t even completed the course give an authentic evaluation of it? They can’t. But most disturbing to me, students can wait until they get back all their graded work and submit a critical evaluation if they get an unfavorable grade (and vice versa). This semester, I failed three students. They all had a lot of missing assignments, and two of them didn’t bother to turn in any work. I felt they earned their failing grades. While this might be fair to the other students who busted their butts all term and earned their good grades, it’ll almost certainly cost me in terms of my overall student evaluations, and I get penalized for doing the right thing. With these electronic evaluation systems, we’re taking the worst aspect of a site like Rate My Professors and making it the official evaluation system for the college or university. At least the students will be getting what they want: excellent grades for all their work.
Hiring and rehiring adjuncts based on student evaluations is like giving a restaurant a health-department rating based on Yelp reviews. Say you eat at a restaurant: your food arrives on time, it’s hot (or cold) as it should be, you get exactly what you ordered, the waiter is attentive and responsive, and the check is accurate. In other words, everyone did their job, but who gives a five-star review for that?