In the rapidly evolving landscape of artificial intelligence (AI), an ethically grounded approach is emerging as a guiding principle, shaping the transformative impact of intelligent technologies. Ethical AI prioritizes fairness, transparency and accountability, reflecting a commitment to aligning technological advancements with fundamental human values. The ethical imperative in AI underscores the importance of responsible design, unbiased algorithms and collaborative frameworks that seek to harness the immense potential of AI for the greater good while mitigating potential risks and pitfalls.
Before reading the remainder of our article, please reread the previous paragraph. Now that you’ve read it over twice, we have some questions to ask. Did the previous paragraph sound intelligent? Did it contain correct grammar and proper sentence structure? Or did it sound like something you recently asked an AI chatbot for? Despite sounding as if it were written by us, the previous paragraph was entirely written by AI. Specifically, ChatGPT.
This shouldn’t come as a shock. After all, AI chatbots are available to students in just three clicks (yes, we tested this ourselves). Despite KHS’ attempts to block AI from classrooms, students continue to break past these barriers and use AI as assistance when doing schoolwork. And when we say assistance, we hope you know we mean copying and pasting a voiceless passage into Google Docs and passing it off as your own work. You know, simple and lowkey “assistance.”
Nicole Leachman, English teacher, said she uses AI-powered tools such as EasyBib and Grammarly to encourage students to grow their skills in areas like citing, grammar and sentence structure. However, Leachman also said these tools should only be used once a student is already familiar with those skills.
“I [encourage] students in my class to use sites like EasyBib once they’ve already learned citing skills,” Leachman said. “I think that for now, AI is definitely a threat, but if we can teach kids to use websites like these, it’ll [teach them] how to use AI positively.”
When asked whether AI could be helpful or harmful in a classroom setting, 68% (37/54) of TKC staffers voted that it was harmful. In addition, the TKC staff agreed that AI is not only damaging but also deeply unethical. Beyond the obvious problem that using AI on schoolwork amounts to cheating, TKC believes AI is unethical because of its ability to teach biases to students.
While the majority of TKC staff did believe AI could be hurtful in the classroom, we also noted it can serve as an incredibly helpful tool that students and teachers could use positively with the right training. As of now, the most common way students use AI is by simply typing “write a 300+ word essay on WW2” into ChatGPT. Yeah, that gets the job done. But it’s also remarkably lazy. Fine, grab a couple of slices of information from Snapchat AI, but requesting entire essays and assignments pretty much defeats the point of school. In our opinion, a mediocre essay likely riddled with incorrect facts just isn’t worth it.
AI builds its answers from material pulled from sources all across the internet, yet it is unable to detect false information or bias in the sources it draws from. So in reality, your AI-written essay blabbering about a “recent study” could be built on facts pulled from your Aunt Becky’s wildly inaccurate blog. That means when a student uses AI for help with an assignment, there’s no guarantee the facts will be correct. And we don’t really feel bad. If you decide to use AI to cheat and the information is wrong, that’s on you.
However, if students learn how to use it properly, AI can become a constructive tool. In many cases, simply asking AI to hand you an answer is inherently wrong (and lazy). Instead, use AI to help cite sources or run grammar checks. This way you’re still using your precious AI without the risk of cheating. You can even ask generative AI sites to help you find sources or studies, which you can then fact-check yourself. So please, instead of swiping over to your personal Snapchat AI for help on your AP Lang project, use a tool like Sourcely to find accurate sources.
As we’ve said, students misusing AI is a growing problem. However, we believe the more important issue at hand is that AI can use this incorrect or biased information to teach harmful beliefs to students, or to negatively impact education in general. According to the Harvard Business Review, AI has already hurt the way applications were sorted at a British medical school. The school was using an AI program to sort through applications when it was found that the system was biased against women and applicants with non-European names because it had been programmed to sort applications the same way they had been sorted in the past. This improper use of AI is just one example of how it can embed biases in our school systems.
TKC believes the classroom is a place where students should be allowed to develop their own opinions, rather than be taught discriminatory biases by AI. While we support the advancement of AI as a positive tool, right now its negative impact on students’ learning is too great a risk for it to become a full-time resource in the classroom.