Does student visibility in a tech classroom affect grades? Rough data analysis

This semester, Media Literacy was in the high-tech Digital Toolbox lab. The lab suits the purposes of the Digital Toolbox class pretty well but isn’t great for a discussion class like Media Literacy. Students can’t see more than about four of their classmates well, and I can’t see them well either.

Map depicting the classroom, including the Bad Seat Zone.


The Setup: The Toolbox lab has a projector and a massive, lovely screen in the front, which is great, except the room also has windows that you have to shade in order to see that screen, which means it’s dark, which means that if you’re prancing around in front of the screen like I am (literal prancing, ask my students), you can’t see a damn thing on the little dears’ faces. I had a much, much harder time gauging student comprehension in Media Literacy this year. The island setup of the students’ computers made discussion very difficult (man, but I miss Hampshire’s big round tables; now there was a campus that had pedagogy in mind when the architects went to work).

And the lovely huge monitors on the class computers make it so that, from the podium at front-stage-left, the professor is completely unable to see the two to four students (depending on how they sit) who sit stage-right. I wandered around the classroom somewhat, but it was hard to teach from over there all the time.

The Question: I had been thinking I could prove that the students who sit in those seats are statistically more likely to end up with a grade one letter lower than the rest of their classmates, both in Media Literacy and in Toolbox. Not to imply causation; it’s just as likely that kids who are trying to escape the teacher’s notice sit there on purpose. So I ran some quick and dirty calculations — class means and standard deviations therefrom — on those seats.

Dataset: I’ve taught 138 students in that classroom total, 90 in 4 Digital Toolbox sections and 48 in 2 Media Literacy sections. The seats I’m thinking of are at the tables closest to the exits, with their backs to the glass fishbowl wall, where passersby might see what’s on their monitors but, as I said, the professors can’t, because the backs of the monitors face us. (The only real “surveillance” that could be said to be happening on the fishbowl side would be from tour groups, and who cares about those, plus the department secretaries, who would periodically interrupt instruction to harangue us to get food and drinks off the consoles.)

My Toolbox students get assigned to particular tables based on self-reported prior experience, likelihood that they’ll cause trouble, and the difficulty they seem to be having. However, they’re not assigned to seats at those tables, so they can jockey for the seats on the side where I can’t see what’s on their screens, if they want. In Toolbox, I seat students who may need a little more assistance at one of the tables in question, and students who definitely need less assistance at another of the tables. So they should balance out.

My memory of who sat where is a little hazy, particularly for fall semester. Also, Toolbox students move around some for different projects. Media Literacy students slavishly returned to the same seats for no reason I could discern. Nevertheless, I think I’m correctly recalling who sat where for the video projects at least, which is the longest project. Final caveat: five students had both Toolbox and Media Literacy in that classroom with me, 4 across both semesters and 1 in the same semester. They sat in very different places in the two classes.

By rights I know I should be calculating and comparing this seat-by-seat, but 1) I’m doing this out of nerdy personal interest, 2) I don’t have the best data, and 3) I’m not being paid by the institution anymore, so that’s just not happening.

Findings: On average, the 26 students I can recall sitting in the obstructed spots performed about 7 points more poorly than their classmates across all 6 classes. This is enough to take a grade down two half-steps (say, from A- to B). However, this difference was within one standard deviation (11.8 points) of the mean.

Within each class individually, overall scores from those seats were lower than the class mean. In two classes — one Media Literacy section this semester, and one Digital Toolbox section this semester — the average in those seats was at least one standard deviation below the mean. In that Media Literacy section, the class average was 83.4 and the bad-seats average was 68.76 (standard deviation 12.31). In that Toolbox section, the class average was 81.28 and the bad-seats average was 65.91 (standard deviation 15.37). There was also a Toolbox class last semester where the bad seats approached one standard deviation from the mean: class average 85.30, bad-seats average 73.69, standard deviation 12.78. Eleven to fifteen points below the class average is a big deal; that’s a whole grade or more.
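For anyone who wants to replicate this at home, the quick-and-dirty calculation amounts to a few lines of Python. The grades below are made up for illustration — not my actual student data — but the comparison is the one described above: class mean, population standard deviation, and whether the bad-seat average falls more than one SD below the mean.

```python
# Sketch of the mean/SD comparison, with hypothetical grades (not real student data).
from statistics import mean, pstdev

class_grades = [92, 88, 85, 83, 80, 79, 77, 74, 70, 66]  # one hypothetical section
bad_seat_grades = [74, 70, 66]                           # hypothetical obstructed seats

class_mean = mean(class_grades)
class_sd = pstdev(class_grades)  # population SD, since we have the whole section
bad_mean = mean(bad_seat_grades)
gap = class_mean - bad_mean      # how far below the class mean the bad seats sit

print(f"class mean: {class_mean:.2f}, SD: {class_sd:.2f}")
print(f"bad-seat mean: {bad_mean:.2f}, gap: {gap:.2f}")
print("more than one SD below the mean" if gap > class_sd else "within one SD")
```

With real data you’d swap in each section’s grades and the grades of whoever you recall sitting in the obstructed seats.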

Conclusions: It appears there may be a real problem with students’ inability to see, or be seen by, the professor in those seats. This is just my data; there are at least three other sections of Toolbox in that room, and a couple of other instructors seem to think they’ve seen similar problems. As so many research papers say, more study is needed. As for what’s to be done, professors may want to alert students to the problem, move struggling students out of those seats, leave those seats empty when the class isn’t full, and so on.
