Since OpenAI launched ChatGPT, an AI chatbot, in 2022, educators around the world have been in a panic. ChatGPT can produce essays and complete assignments in a matter of seconds, and it is free to use. The problem is that students can use it to complete assignments or take tests with little to no effort. This deprives students of the benefits of doing the work themselves, and when they try to pass off AI-generated work as their own, it is a breach of ethics and considered plagiarism (learn more about different types of plagiarism in The 5 Types of Plagiarism and How to Avoid Them). So, what should educators do about it? Let AI run rampant? Ban it altogether? No! We need to see this as an opportunity to have discussions about plagiarism and media literacy.
ChatGPT and Plagiarism
If plagiarism is what educators are worried about, a few tools and tricks are available for detecting AI-generated material. Online AI content detectors can help during grading; however, they are not 100% accurate and can both generate false positives and miss AI-generated text. Another tip is for instructors to look at the quality of the writing. If it is too polished (in grammar, syntax, etc.) or written in a voice completely different from the student's usual one, then maybe it was created by a robot.
Although it may be a gut reaction to try to keep AI out of the classroom or to ignore it, a better path is to acknowledge it and have a conversation with students. Talking with students about plagiarism and its impact on them and their ability to complete the course's learning objectives can go a long way toward keeping students from submitting AI-generated work. One option is to add a statement about AI-generated text to the syllabus and discuss it at the beginning of the course. Torrey Trust, an associate professor of learning technology at the University of Massachusetts Amherst, provided the following statement as an example of one to include in a syllabus:
“You are responsible for the content of any work submitted for this course. Use of artificial intelligence (AI) to generate a first draft of text is permitted, but you must review and revise any AI-generated text before submission. AI text generators can be useful tools but they are often prone to factual errors, incorrect or fabricated citations, and misinterpretations of abstract concepts. Utilize them with caution.”
Another option is to redesign assignments so that AI language models won't work on them. Helen Crompton, an associate professor of instructional technology at Old Dominion University, believes that "if ChatGPT makes it easy to cheat on an assignment, teachers should throw out the assignment rather than ban the chatbot" (Heaven). AI language models like ChatGPT are good at producing factual-sounding information but not as good at activities that require a personal touch, higher-order thinking, or experiential learning. For example, educators can take a topic or theme and ask students to write a personal reflection essay about it. Educators can also move beyond simply asking students to provide information (curation) and instead have students analyze the quality of sources and how bias or other contexts affect their validity.
ChatGPT and Media Literacy
ChatGPT and other AI tools may seem all-knowing and full of answers, but they can be wrong. In fact, the ChatGPT homepage specifically says that it may "occasionally generate incorrect information." So if this tool is here to stay (and let's face it, it is), then it is important to help students understand it. ChatGPT can be a great place to get ideas and brainstorm, but just as we wouldn't take everything we see on Wikipedia as absolute truth, it is important to fact-check and gather sources before using that information in an assignment.
Some teachers are using this flaw in ChatGPT (its tendency to provide incorrect information) as an opportunity to teach critical thinking skills (Roose). These teachers have asked students to try to trip up the chatbot or to evaluate its responses the way a teacher would evaluate a student's. Other exercises include examining AI-generated work for cultural bias and learning how to ask the right questions to get certain results. These exercises have the dual benefit of showing AI's limitations and teaching students valuable critical thinking skills.
Ultimately, if we learn more about ChatGPT and other AI tools, we don’t need to be afraid of them. Technology is always moving forward, and there isn’t a lot we can do to stop it. Kim Lepre, a seventh-grade English teacher in California, said, “It’s kind of like handing a kid a calculator. . . . Hand them a TI85–that’s one thing, but show them how to use it? That’s even more powerful” (Blose).
So if ChatGPT is here to stay, why not learn more about it and use it to make educating easier? MyEducator understands the needs and frustrations of educators, and we want to help you find your footing amid this unprecedented technology, so look out for more articles about ChatGPT in the coming months.
Blose, A. (n.d.). As ChatGPT enters the classroom, teachers weigh pros and cons. NEA. https://www.nea.org/advocating-for-change/new-from-nea/chatgpt-enters-classroom-teachers-weigh-pros-and-cons
Heaven, W. D. (2023, April 7). ChatGPT is going to change education, not destroy it. MIT Technology Review. https://www.technologyreview.com/2023/04/06/1071059/chatgpt-change-not-destroy-education-openai/
Roose, K. (2023, January 12). Don’t Ban ChatGPT in Schools. Teach With It. The New York Times. https://www.nytimes.com/2023/01/12/technology/chatgpt-schools-teachers.html
Trust, T. (n.d.). ChatGPT and education. Center for Innovative Teaching and Learning, Northern Illinois University. https://www.niu.edu/citl/resources/guides/chatgpt-and-education.shtml