Artificial Intelligence

Why Ed Tech is Not Endorsing a Ban on ChatGPT in Schools

Education Experts Argue ChatGPT Can Help Students Prepare for the Workforce — but Thoughtful Policies Should Happen Sooner Rather Than Later

The public introduction of OpenAI’s ChatGPT has, in recent months, sent a wave of alarm through the education sector unlike any stirred by another technology introduced this century.

Depending on who’s speaking, ChatGPT will either further erode learning outcomes, particularly in English language arts (ELA), or it will boost ELA instruction and overall learning outcomes by embedding critical-thinking and modern workforce skills in everyday writing assignments as students learn to use the new technology with caution and precision.

Some large school districts across the country immediately banned student use of ChatGPT and even blocked the OpenAI ChatGPT website on school networks and devices. Many education leaders have expressed concern, even those who urge educators to explore using the tool in their classrooms.

Even OpenAI published guidance and warnings for educators shortly after ChatGPT’s public launch.

Meanwhile, the ed tech sector’s response has been similarly divided. A few smaller software providers jumped into the fray by debuting “AI detector” tools within weeks of the ChatGPT launch last November. None of them — not even the detector built by the creators of ChatGPT — are very reliable, as demonstrated by this review and comparison report by TheConversation.com.

THE Journal asked leaders at several education technology providers for their thoughts on ChatGPT and AI-generated text, what plans (if any) they have to address the new technology within their own software solutions, and whether they have guidance for K–12 policymakers, administrators, and educators struggling to update their organizations’ rules for using AI in education settings.

Following are excerpts from those interviews, along with answers to our questions drawn from OpenAI’s official guidance for educators and from a University of Houston Law Center professor who blogs about the ethical and legal implications of new technology in education.

THE Journal: What’s your take on the perils and perks of this new technology, and how do you balance them?

DEBORAH RAYOW, Imagine Learning vice president of Product Management, Courseware: I think academia and ed tech are both going through something similar to the five stages of grief when it comes to this issue. We’ve passed denial and now we’re mostly on anger. I’m not sure all the stages actually apply, but I do think it’s going to be a process before we’ve accepted that this technology is here to stay and will only grow in capabilities. We’ll need to make clear to students when it’s okay to use ChatGPT and other generative AI tools and when it’s not, with a strong emphasis on academic honesty. And we’ll have to have ways to enforce the rules we set. But once students are clear about when NOT to use generative AI, it does open up some interesting possibilities for teaching and learning.

MELISSA LOBLE, Instructure chief customer experience officer: AI writing tools are not new to education, but none have sparked the conversation that ChatGPT has in the last couple months. It’s clear the initial reaction to ChatGPT from many educators has been apprehensive, and while we understand the concern being felt in classrooms and schools across the country, we believe the best way to navigate the reality of ChatGPT, and AI tools like it, is to learn to work with them instead of against them, because technology like this isn’t going anywhere.  

PETER SALIB, University of Houston Law Center assistant professor: For everybody whose main work is writing things on a computer, this is a tool that is going to change how you work, especially as it gets better. ChatGPT produces mediocre content in response to complex questions. There might be some incentive to plagiarize, but probably not if a student wants an A. On the other hand, I’m not sure it’s right to think about using these kinds of language models in the classroom just through the lens of plagiarism. They’re extremely useful tools, and they’re going to be extremely useful tools for real people doing real work. I think we do students a disservice if we say these tools are not part of education and forbid their use as students work their way through law school or their undergraduate education.

OPENAI: We recognize that many school districts and higher education institutions do not currently account for generative AI in their policies on academic dishonesty. We also understand that many students have used these tools for assignments without disclosing their use of AI. Each institution will address these gaps in a way and on a timeline that makes sense for its educators and students. We do, however, caution against taking punitive measures against students for using these technologies if proper expectations about what is and is not allowed were not set ahead of time. Classifiers such as the OpenAI AI text classifier can be helpful in detecting AI-generated content, but they are far from foolproof. Classifiers and detectors should be treated as only one factor among many in an investigation into a piece of content’s source and in any holistic assessment of academic dishonesty or plagiarism. Setting clear expectations up front is crucial, so students understand what is and is not allowed on a given assignment and know the potential consequences of using model-generated content in their work.

