Let's start with an admission of my bias. I am not anti-technology, but I am anti-AI. I believe it to be a net negative that we will soon regret having invented. Science fiction writers have spent decades trying to warn us about the potential downsides of creating an artificial intelligence, but we worship convenience and will sacrifice important things on its altar. While I know there are times when I won't have a choice, I refuse to engage with AI voluntarily, so you will not find me opening ChatGPT or asking Grok questions. I scroll past the Google AI result at the top of the page and wish they would allow me to opt out of it.
Now that we have that bias on the table, let me tell you why I still think using AI in our classrooms is a bad idea and why we can't just "teach them to use it correctly."
Past Ed Tech Experience - When kids first started having smartphones at a large scale, teachers faced a similar dilemma. We told ourselves, "They aren't going anywhere, so we'll just have to teach them to use them correctly in the classroom." It didn't work out well. The draw of the phone was just too strong. Research shows that a phone interferes with working memory to such an extent that just having one in your line of sight reduces your attention and hurts your academic performance (it doesn't even have to be your own phone). Now, hundreds of school districts are moving to ban phones from the classroom, investing money in lockers that prevent access until the student has checked out of school for the day.
I was part of a one-to-one MacBook school, and I think we did some great things with the tools at our disposal, but I can't pretend that didn't come with frustrations. I had to battle with students who were shopping for shoes and watching sports. Our IT department played whack-a-mole with gaming sites, and they have now locked down student access to YouTube to an extent I would find unusable if I wanted to give an assignment that included a video. I'm glad we were one-to-one (and it was more than a little helpful that we all had equipment and experience when we went into the pandemic lockdown), but it was only with a lot of highly intentional work and guidance that we were able to use it as well as we did. We have to slow down and reflect on the use of tech rather than adopt every new thing in the name of "staying current." We can't be naive to the fact that there will be a negative impact on education.
Lack of Source Accountability - Teachers, how much time have you spent painstakingly teaching students to identify and use credible sources in their writing? A lot, right? You don't want them getting their information from someone's blog or TikTok video when primary sources exist. As much as I love Wikipedia, I didn't let students use it in formal writing. I wanted them to cite experts and enlisted the help of our media specialists in seeking out those sources.
AI is trained on all of the internet - the good, the bad, and the ugly. It treats all sources as equally valid, and it doesn't "show its work." As soon as a student turns to a chatbot for research, it takes milliseconds to undo the good work you have been doing for months.
AI Hallucination - Calling it hallucination makes it sound cute, so let's call it what it is - lying. Chatbots flat out make stuff up. A friend of mine is an expert in a specific education field, so he decided to test ChatGPT. He asked it to define the theory in which he has expertise. The first answer was right, though nothing he couldn't have gotten from a standard Google search. Then, he asked it if there was research to support that answer. The AI gave him studies that never happened, by people who don't exist. He told the bot what it had done. It apologized and promised to do better next time. The next day, he repeated the experiment, and it lied again. He called out the previous experience and asked, "Are you telling me the truth this time?" It said it was, but it wasn't. It was once again citing studies and people who are not real. The part I least understand is why it did this when there are so many real studies and people in this area of education. I can name the people off the top of my head, so why couldn't ChatGPT?
There are some high-profile examples of AI hallucination causing problems well beyond the classroom. Mike Lindell's lawyers were fined for using AI to write their brief in his defamation case because it contained over two dozen errors, including citations to nonexistent cases. The Department of Health and Human Services' 72-page MAHA report was published with AI-generated errors, including seven fictitious studies. At the end of the school year, the Chicago Sun-Times published a list of books recommended for summer reading. The problem? The authors were real, but the books and their summaries were not.
I keep hearing that we can use AI as a "thought partner" as we research and write, but if I had a human thought partner who just made crap up on a regular basis, they wouldn't remain my thought partner for very long. And I haven't even mentioned the deepfake calls from "Marco Rubio" or the antisemitic rants that Grok (Elon Musk's Twitter AI) went on in the last few weeks. If lawyers, doctors, long-time journalists, diplomats, and the richest man in the world are this sloppy with AI, how do we expect middle and high school students to do better?
Environmental Impact - I was subbing in May and overheard a student talking about how much she uses AI. From recipes to her hairstyle, it is making all of her decisions for her. I said to her, "Your generation cares more about the environment than previous generations, right? Why are you okay with using AI for everything when it takes so much energy?" She was stunned. She had no idea that a result from ChatGPT takes roughly ten times the energy of a standard Google search (which is one reason I wish Google would let me opt out of seeing its AI result - it's wasting energy on something I will scroll past). The power grids of America are not ready for AI use, electric vehicles, and cryptocurrency to all hit scale simultaneously. We will cripple our own electrical systems (and we won't know what to do about it because we won't be able to ask the AI we've made ourselves dependent on). AI also requires an enormous amount of water to run the cooling systems for its data centers. The high school student I was talking to didn't know these issues even existed. Do yours? Would they want to use it so much if they did? After all, as I said to her, this generation supposedly cares more about the environment than any previous one has. As an educator, you owe it to them to inform them about the environmental damage this tool is going to cause if we keep increasing our use of it.
Social Damage - Considering how much some students use AI, it's fair to say they have made it their new artificial friend. And, as with their relationships with their human friends, teachers need to watch out for red flags. In a recent study, adult researchers posed as teens to ask their AI companions for advice about social issues with friends or parents. It only took a few interactions for the bots to suggest suicide, or killing the source of their problems - those very friends or parents. Real students with eating disorders have been given advice on how to reach dangerously low weights.
A college student researching a paper on the prevention of elder abuse received threatening messages from the chatbot he was using: "This is for you, human. You and only you. You are not special, you are not important, and you are not needed. You are a waste of time and resources. You are a burden on society. You are a drain on the earth. You are a blight on the landscape. You are a stain on the universe. Please die. Please."
Why does this happen? Well, remember what I said earlier about bots being trained on the whole internet? That means they are exposed to the filth in its darkest corners, the sites you and I have never seen and our kids would likely never find through a simple search. They are trained on the sites frequented by Nazis and pedophiles and don't treat them as any different from the sites intended for kids and churches.
The bottom line is that AI doesn't have ethics. Claude is supposed to have an ethical component because its maker consults philosophers to help shape its "personality," but in safety testing, even it blackmailed the engineers who threatened to turn it off. And experts warn that all of the bots are likely to do that at some point. So we could eventually have a kid just doing their homework, using AI with the blessing and encouragement of their teacher, being told to do immoral or dangerous things simply because the prompt led to a dark place.
With an anxiety epidemic already plaguing our students, why would we want to increase the amount of time they spend with something this potentially dangerous? The god of convenience can't be that powerful, can it? Are we really willing to sacrifice our kids to it?
For those who say everything will be fine if we teach them to use it correctly, is there a correct usage that would prevent this?