My Coworker Cares Too Much About Assessment

Context: I’ve been teaching EFL at the JHS/HS level in Japan for over 10 years in public and private schools. Some ALT work, some full instructor. Degrees in Language Arts and Communication, plus postgrad teacher certification. My current coworker at our HS comes from a university instructor background.

I feel like this coworker of mine—fellow native English speaker and teacher—cares far too much about the smallest nuances of rubric design and assessment for things like oral presentation and interviews, and it’s getting exhausting.
In the scope of JHS/HS second-language communication classes, all we really have them study, and then assess, is students' use of key grammar, expressions, and some conversation skills. And it’s all relatively simple. [Describe a fun experience, use X grammar to make a question about…, etc.]

For me, we don’t need to reinvent the grading wheel or deep dive into the “micros” of a student’s answers.

Did they correctly use the particular vocab/grammar/skill they were asked to?

Yes/Attempted/No

How was their overall oral mastery of the delivery?

Advanced, Standard [for their grade level], Sub-Standard, Weak.

I feel like that’s more than enough. Especially as experienced teachers, we don’t need to pick apart and define “Mastery” or create a bunch of subcategories to accommodate cases where one student has great pronunciation but simpler ideas, versus another with weak pronunciation and slow response time whose answer demonstrates more creativity, etc. etc.

There are dozens of variables in any student’s speech patterns and abilities, and trying to zero in on and define exactly how each and every little thing should be analyzed and categorized, in the context of a 4-5 question speaking exam prompting 1-3 line responses to things like “What do you usually eat for breakfast?”, is excessive given the level and scope.

Am I in the wrong for feeling like this person is wildly overthinking this? We all have an intuition and understanding of what is good versus a bit lacking at the level we teach. What I’m trying to convey is that we should be able to make a simple holistic judgment on a student's overall spoken delivery. But this teacher sees that as “complicated and overwhelming” because their focus is too zoomed in on “I need to be listening for their accuracy, their pronunciation, how well developed their ideas are and the word choice, while also making sure they use the target.”
But I can’t seem to convey that a holistic overall judgment doesn’t require such complex, fine-tuned, nuanced analysis, and that we can just look at the bigger picture: a grade-appropriate answer is 3 points. Any number of errors that add up to a student’s expression falling below grade standard is a 1-point drop, and any number of errors significantly impacting clarity/understandability is a 2-point drop. Then, an answer that includes fluency and skills above what’s expected at their grade level gets the maximum 4 points, advanced.

A very simple 4-point distribution.
On top of that, an additional max of 2 points for attempting, or successfully implementing, the prompted grammar or skill in their response.
Totaling a 6-point scale.
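For what it’s worth, the whole rubric I’m proposing fits in a few lines. This is just a sketch with names I made up (`score_response`, the category labels); it assumes the target-use side splits as used = 2 / attempted = 1 / missing = 0, which is one reading of “attempting to or successfully implementing.”

```python
def score_response(delivery: str, target_use: str) -> int:
    """Score one interview answer on the proposed 6-point scale.

    delivery: one holistic judgment of spoken delivery --
        "advanced" -> 4  (fluency/skills above grade expectations)
        "standard" -> 3  (grade-appropriate answer)
        "below"    -> 2  (errors add up to below grade standard: 1-point drop)
        "unclear"  -> 1  (errors significantly impact clarity: 2-point drop)
    target_use: use of the prompted grammar/skill --
        "used" -> +2, "attempted" -> +1, "missing" -> +0
    """
    delivery_points = {"advanced": 4, "standard": 3, "below": 2, "unclear": 1}
    target_points = {"used": 2, "attempted": 1, "missing": 0}
    return delivery_points[delivery] + target_points[target_use]

# A grade-appropriate answer that attempts the target grammar:
print(score_response("standard", "attempted"))  # 4
```

That’s the entire decision tree a grader has to hold in their head during the exam: one holistic call, one target-use call, done.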

I feel like I’m crazy for thinking this is simple and common sense, and that we don’t need a bunch of different specific scales or different point distributions for question types or answer lengths, defined by specific terminology to make concrete cutoff points, all within the scope of listening to a half-awake EFL 15-year-old responding to a set of basic interview questions.
I get that we’re all proud liberal arts majors who want to apply and flaunt our expertise and understanding of pedagogy and what have you, yet at a certain point I just want to say
“It’s really not that deep.”

But I of course can’t.

by PiPiPoohPooh
