
May 02, 2005

Comments

Kim

Actually measuring what you suggest would probably make a fantastic math ed PhD if we could somehow get around the privacy issues.

femaleCSGradStudent

At my "Premier Catholic Teaching University" which I attended as an undergrad in the engineering department, the professors were evaluated annually by their peers, usually by the department chair. Moreover, the department was reviewed by the ABET (Accreditation Board for Engineering and Technology, Inc) in order to keep with certain standards of topics for teaching. ABET reviewed student's work and course syllabus. Maybe it's just a silly hoop they jumped through, but I will say that the department's worst teachers exceeded the ability of some of the best I've seen here in the midwest.

K

I agree with your criticism of the current evaluation system.

Not that I have a better idea, but I think one problem with your suggestion is that "performance in diffyQs" and "salary figures/placement in grad school" are things which, while affected a lot by the quality of teaching, are also affected a great deal by other factors. Thus to get useful information from these statistics, one would have to factor out the effects of those other factors. That sounds difficult in the short run. You could average over several years and get statistically significant correlations, but it would take a while.

E.g., the dot-com bubble and the bust that followed had a much bigger impact on salaries than any teaching. Performance in diffyQs would depend a lot on the person teaching diffyQs. There can be significant differences in student quality from year to year, and there certainly are significant differences in student motivation from year to year.

Thus while these effectiveness measures would be great for estimating how good a teacher someone was over a career, the extraneous noise is probably reasonably large over a period of a few years.
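To make that concrete, here's a rough back-of-the-envelope simulation (the effect sizes are completely invented, and it assumes everything is measured on a common exam): even a genuinely better teacher can look worse in any given year, and you need a lot of years before the signal reliably beats the noise.

```python
import random
import statistics

# Invented numbers: suppose good teaching adds 3 points to the average
# diffyQ grade, but cohort quality/motivation swings a year's average
# by +/- 8 points (one standard deviation).
TEACHING_EFFECT = 3.0
YEARLY_NOISE_SD = 8.0

def observed_gap(years):
    """Average grade gap between a better teacher's students and an
    average teacher's students, measured over `years` cohorts."""
    gaps = [TEACHING_EFFECT
            + random.gauss(0, YEARLY_NOISE_SD)   # better teacher's cohort noise
            - random.gauss(0, YEARLY_NOISE_SD)   # other teacher's cohort noise
            for _ in range(years)]
    return statistics.mean(gaps)

random.seed(1)
for years in (1, 3, 10, 30):
    trials = [observed_gap(years) for _ in range(2000)]
    wrong_sign = sum(g < 0 for g in trials) / len(trials)
    print(f"{years:2d} years: the better teacher looks *worse* "
          f"{wrong_sign:.0%} of the time")
```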

Moebius Stripper

Your modest proposal is definitely a step in the right direction. If nothing else, it's something to measure on top of the standard stuff that student evaluations measure so imperfectly - and it can't hurt to provide an additional metric.

I see one wee glitch, which would certainly be possible to correct for: one way I could ensure that, say, my precalculus students kicked the asses of other instructors' precalc students once they got into calculus would be to fail three quarters of my class. Then only the top 25%, or less, would even get INTO calc, and of course their grades would be higher, on average, than those of the 50% of Teacher X's precalc class that went on to calculus.
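Here's a toy simulation of that loophole (all the numbers are invented; both sections are drawn from the same pool and taught identically, so any gap is pure selection):

```python
import random
import statistics

random.seed(42)
ability = lambda: random.gauss(65, 15)   # hypothetical "true ability" score

# Two precalc sections of 100 from the same population, taught equally well.
my_section = sorted(ability() for _ in range(100))
x_section = sorted(ability() for _ in range(100))

# I pass only my top 25%; Teacher X passes the top 50%.
my_survivors = my_section[-25:]
x_survivors = x_section[-50:]

# In calc, a grade is just ability plus exam noise -- no teaching effect at all.
calc_grade = lambda a: a + random.gauss(0, 5)
print("my survivors' calc average:",
      round(statistics.mean(calc_grade(a) for a in my_survivors), 1))
print("Teacher X's survivors' calc average:",
      round(statistics.mean(calc_grade(a) for a in x_survivors), 1))
```

My survivors come out several points ahead, even though nobody taught anybody anything better.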

I'd also like to see a study that correlates students' mastery of prerequisite material with the evals they give their instructors. I know, for instance, that a lot of my precalc students HATED me because I didn't take the time to reteach them grade-six-level math. My calc students, on the other hand, seemed to like me a lot - largely, I presume, because their abilities were more in sync with what I expected of them.

Mitch

Students can be directly manipulated into giving better evaluations.

Here's another frivolous manipulation method: give students so-so grades (arbitrarily) at the beginning of the semester, then steadily raise the grades towards the end. Hand out the evaluations before the final exam. Then give the final exam and, at last, grade on mastery.

Another modest proposal, one that is much more likely never to be used: evaluation by peers, that is, your fellow teachers. That would totally suck.

SusanJ

Somewhere around 1975 there was a lot of talk similar to this, and many people were passing around a study [would that I had the reference!] of a single large calculus class taught by a single prof who gave all the lectures, wrote the exam, and was in charge of the grading. There were also many small sections taught by a number of different T.A.'s. Someone carried out exactly the (controlled) experiment you suggested and correlated the course grade with the students' evaluations of their arbitrarily assigned T.A.'s.

As best I remember, the correlation was inverse: the students who down-rated their T.A.'s got the best grades. The conclusion was that students don't like T.A.'s who make them feel uncomfortable by making it clear when the student doesn't understand something, but those are the very T.A.'s students learn the most from.
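Just to spell out what that computation looks like (with invented data, since I don't have the real study), the gist is a negative correlation between a student's rating of their T.A. and that student's grade on the common exam:

```python
import random
import statistics   # statistics.correlation needs Python 3.10+

# Invented data: suppose more demanding T.A.'s get lower ratings,
# but their students do better on the common final.
random.seed(7)
ratings, grades = [], []
for _ in range(200):
    demandingness = random.gauss(0, 1)                              # hypothetical T.A. trait
    ratings.append(4 - 0.5 * demandingness + random.gauss(0, 0.5))  # student's eval of the T.A.
    grades.append(70 + 5 * demandingness + random.gauss(0, 10))     # grade on the common exam

# Comes out around -0.3 with these made-up numbers.
print("rating/grade correlation:", round(statistics.correlation(ratings, grades), 2))
```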

But your point about how one does in the next course is crucial, if very difficult to measure. At least in science and math, the major significance of most of what you learn is that it is the prerequisite for the next course, or, more accurately, the prerequisite for a deeper understanding of the subject, rather than an end in itself. (Sometimes I'm not even sure this is a convergent process....)

The comments to this entry are closed.