With Microsoft reportedly pouring $10 billion into OpenAI, the maker of ChatGPT, it seems keen to corner what is likely to be a massive market for generative AI text. It is not alone. You can almost hear the colossal checkbooks opening worldwide to access ChatGPT, its cousins and descendants.
In education, the market and business opportunities are different, more than likely not in creating with AI, but in spotting it. That’s because, in what was the most predictable of all consequences, students have already used ChatGPT to cheat. In classrooms everywhere, students try to pass off computer-generated work as their own. And while a few observant professors have noticed AI-created writing, sooner or later – and probably sooner – instructors will need help.
If we’re honest, they already do. The global education community cannot rely on the attentive reading of a few teachers to ensure the integrity of academic work and the value of teaching credentials. In any case, it cannot rely on that alone.
So as AI grows, helping teachers see into the fog of technology will be of increasing value. And since being able to detect AI content is a strong deterrent to using it inappropriately, helping schools avoid massive academic fraud could become its own billion-dollar business. The demand for tools to recognize ChatGPT will not be as great as the demand for ChatGPT itself. But there is no reason to doubt that it will be quite large anyway.
Based on what we’re being told and what we’ve already seen, the question isn’t if AI detection systems will exist, but what they will look like and when they will be widely used. And, of course, whether education providers can risk going without them. In most cases, they probably can’t.
We know the detection regimes are coming because some of them are already here, which means the race to quality development and full deployment of these AI detection products and services is already well underway.
One, created by a student at Princeton University, inexplicably received a lot of media attention, even after several companies said they had already done the same. Since then, Futurism and others have reported that the tool has problems. Maybe more than a few. A good try, but perhaps not the effort that comes to define this space. Addressing AI cheating the right way will likely require more serious investment.
If it’s not that, the first-to-market solution could come from Australia, where a team recently said their software could recognize GPT-generated text.
Or it could come from Europe, from a startup like CrossPlag. The company says its technology not only reliably detects text created by ChatGPT, but is also accurate at recognizing text reworked by common AI-assisted paraphrasing tools in attempts to fool existing plagiarism detection systems.
CrossPlag says its system is also good at picking up on what’s called “spinning” — a newer but fairly common method of tricking teachers and anti-cheat systems by running text through various translation tools, distorting it from one language to another and then back to English. Here, they say, being in Europe and dealing with multiple language requirements is a big help.
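Mechanically, spinning is simple. A minimal sketch, with a toy word-substitution table standing in for the real translation services spinners use (the table entries and helper names here are hypothetical, purely for illustration):

```python
# Toy illustration of "spinning": routing text through a translation
# and back. Real spinners call actual translation services; this
# word-substitution table just mimics the drift a round trip introduces.

EN_TO_XX = {"students": "pupils", "cheat": "deceive", "work": "labor"}
XX_TO_EN = {"pupils": "learners", "deceive": "mislead", "labor": "effort"}

def translate(text: str, table: dict) -> str:
    # Substitute each word it knows; leave the rest untouched.
    return " ".join(table.get(word, word) for word in text.split())

def spin(text: str) -> str:
    # English -> other language -> English, accumulating small distortions.
    return translate(translate(text, EN_TO_XX), XX_TO_EN)

print(spin("students cheat on written work"))
# "learners mislead on written effort"
```

Each round trip preserves the rough meaning while shifting the vocabulary, which is exactly what defeats fingerprint-style plagiarism matching — and what a detector like CrossPlag’s has to see through.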
Then there’s Turnitin, the industry leader in helping teachers and schools track down suspicious written work. They also say they have a ChatGPT solution that they are testing right now, and they’ve already released a preview. They are relying on their experience and expertise to develop a winning detection system.
“While these AI generative writing models for major languages like ChatGPT are general purpose, the AI systems to detect their statistical signatures must be purpose built,” said Eric Wang, vice president of AI at Turnitin. “We leveraged our deep understanding of how students write and what teachers will find useful to build a detector that provides insight into how these AI generative writing systems are used in student writing. What we are testing now and rolling out soon is built on 20 years of working with teachers, giving them insight into the actual work of students.”
And it doesn’t hurt that Turnitin already has its systems and software in thousands of schools around the world, in a platform and user experience that instructors are already familiar with.
Then there’s always the possibility of OpenAI being the ones identifying their own work. The company has hinted at adding a watermark to the AI text, making it easy to spot – a “mitigation” they call it.
But of course, if you know anything about technology or academic misconduct, the minute OpenAI adds a watermark, someone will develop an app that strips it out, shifting the responsibility for distinguishing AI from human writing back to teachers and back to one or more of these companies.
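OpenAI has not said how such a watermark would work, but one approach from published watermarking research — a “green list” scheme — gives a feel for the idea: the generator is nudged toward a pseudorandomly chosen half of the vocabulary at each step, and a detector simply counts how often the text lands in that half. A minimal, purely illustrative sketch (nothing here reflects OpenAI’s actual mitigation):

```python
# Illustrative "green list" watermark detection: a watermarking generator
# biases its sampling toward "green" tokens, leaving a statistical
# signature; the detector counts green tokens and computes a z-score.

import hashlib
import math

def is_green(prev_token: str, token: str) -> bool:
    # Deterministically split the vocabulary roughly 50/50,
    # keyed on the preceding token.
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_z_score(tokens: list) -> float:
    # Under the null hypothesis (human text), about half of all tokens
    # fall in the green list; a large positive z-score suggests
    # watermarked output.
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    return (hits - 0.5 * n) / math.sqrt(0.25 * n)
```

Human text should hover near a z-score of zero, while heavily watermarked output drifts well above it. Which is also why the removal apps would work: a paraphrasing pass reshuffles enough tokens to wash the signal out.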
Wherever it comes from, the company or companies that develop a tool that can actually reliably find and flag suspicious text based on how it writes and the words it chooses will likely enjoy significant market success. Not to mention helping to preserve the integrity of academic work and the value of human creativity.
That’s not a bad value proposition – literally or figuratively.
However it turns out, there is exceptional value and profit waiting to be won with generative AI like ChatGPT – not just in the AI, but in helping all of us be able to see who is actually talking to us. Or that it’s a who at all.