She said that I should throw out any question missed by a large number of students, so I explained my process. When I ran the tests through the scantron (which she also objected to), it told me how many students missed each question (one of the reasons I continued to use that type of test). If the number of misses on a question exceeded a certain threshold, I went back to the question and the key and asked myself several questions.
- Was the key marked correctly? We do make mistakes, and if I marked the key incorrectly, I would immediately give everyone credit for that question.
- Did I actually teach that this year? Experienced teachers do pull up their old tests and edit them rather than creating new ones each time, and sometimes, changes to the calendar or interruptions to the routine mean I might have skipped something in class and forgotten to remove it from the test. I would obviously throw that question out for everyone.
- Were the question and its answer choices fairly worded? It doesn't happen very often, but every once in a while, I would be making the key for a test and think, "Was I half asleep when I wrote this question? It doesn't make sense." When that happened, everyone got credit for that one too.
If the answer to all of these questions was yes, then the question remained, no matter how many students got it wrong. This mom stopped me at the word remained and said, "Well, I imagine the students would have a different perspective than you do on that." Of course they would. They were in the 8th grade, and I had been teaching for 15 years; we had a different perspective on EVERYTHING. It's their job to complain and push back on anything they don't like, and it is my job to understand that what they want and what they need are two different things.
I said to her, "I know they do, but I'm not going to trade away 15 years of professional judgment built by experience to middle schoolers." That mom didn't speak to me for 3 months. (Oh, by the way, at some point during all of this, the dad popped up and said, "She didn't miss this question anyway, so we should probably move on." AARGH!)
In the age of populism, this problem has only increased. In the same way everyone was an armchair epidemiologist in 2020, everyone who reads an education blog is ready to challenge curriculum. They will sit across from someone with a PhD in curriculum design and say, "But this website says this book is better." We all (and I am including myself) decide we are qualified to counter arguments if we have done an hour of internet research. A man I encountered at the gym recently told me that he "knows more than most doctors" because he read "five very long books" on nutrition and cancer. He was saying this to a woman who has been seeing doctors at Johns Hopkins, Duke, and MD Anderson - three of the best cancer treatment institutions on the planet - yet he thought he was qualified to overrule their judgment.
And now, as it always does these days, AI enters the discussion. Teachers everywhere are being asked to sacrifice their judgment to a machine.
- Is the machine an expert on their subject? No. It's been fed a lot of websites.
- Does the machine know anything about their students? No.
- Has the machine given an exam before? Of course not.
- Is the machine trained using only high quality sources? No. It is trained on every source - good, bad, and ugly. Right and wrong. Every source on the scale of credible to nutjob is represented in equal measure.
A friend of mine did an experiment with one of the AI platforms last week. She put in her midterm exam and asked it how long it would take students to complete. She doesn't need to ask it this. She has given nearly the same exam (tweaked for the reasons discussed earlier) for several years, and she knows that the first student will turn it in somewhere around the 65-minute mark and the last student will finish just before the 90-minute allotment is up. The AI told her it would take 90 to 120 minutes for students to complete it. The next day, she fed the exact same test into the same program and asked it the same question, and it said it would take an hour.
Is this a hallucination (the cutesy name we give for when AI lies by making up crap that doesn't exist)? No, it just doesn't know. And that would be fine if it just said so, but it won't.
I'm not saying you should never seek out the wisdom of another mind, but it should be a mind that is at least as wise as yours.
Students don't qualify; they simply don't know what they don't know. A student once told me that the biology teacher next door to me was "asking questions that didn't need to be asked." I said, "I'm sorry, but you are a high school freshman; you aren't qualified to make that judgment. You don't know what needs to be asked." AI doesn't qualify either. It is the digital equivalent of your worst friend - the one who thinks they know everything, never admits when they don't, and just guesses. Think about that friend; do you go to them for advice? Of course you don't; you know you have better judgment than that friend.
Teachers, trust yourself. Seek advice from those whose judgment you trust. Incorporate their input into your thinking. But don't trade away your professional judgment to anyone or anything with less wisdom than you.