OpenAI has launched a tool that attempts to distinguish between human-written and AI-generated text, including text produced by the company’s own ChatGPT and GPT-3 models.
The tool, dubbed the AI Text Classifier, is grossly inaccurate, but OpenAI argues that it could be useful in preventing the abuse of AI text generators.
AI critics have called on the creators of AI text generators to take steps to mitigate their potentially harmful effects.
Some of the largest school districts in the U.S. have banned ChatGPT on their networks and devices, fearing its impact on student learning and the accuracy of the content the tool produces.
Sites like Stack Overflow have banned users from sharing content generated by ChatGPT, arguing that the AI makes it too easy for users to flood discussion threads with dubious answers.
OpenAI’s AI Text Classifier, like ChatGPT, is an AI language model trained on many, many examples of publicly available text from the web.
Unlike ChatGPT, however, the classifier is optimized to predict how likely it is that a given piece of text was generated by AI.
Importantly, the classifier won’t work on just any text: it needs a minimum of 1,000 characters, or roughly 150 to 250 words. And it doesn’t detect plagiarism.
Depending on its confidence level, it’ll label text as “very unlikely” AI-generated (less than a 10% chance), “unlikely” AI-generated (between a 10% and 45% chance), “unclear if it is” AI-generated (a 45% to 90% chance), “possibly” AI-generated (a 90% to 98% chance) or “likely” AI-generated (an over 98% chance).
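To make those thresholds concrete, here is a minimal sketch of how a confidence score could be mapped onto the classifier’s five labels, with a gate for the 1,000-character minimum. The function name and the `score` input are hypothetical; OpenAI hasn’t published the classifier’s internals, only the label bands and length requirement described above.

```python
# Hypothetical sketch: mapping a model-estimated probability that a text
# is AI-generated onto the five labels OpenAI describes. The `score`
# input (0.0-1.0) and function name are assumptions for illustration;
# only the label bands and the 1,000-character minimum come from OpenAI.

MIN_CHARS = 1_000  # the classifier refuses shorter inputs

# (upper bound of the probability band, label), in ascending order
LABEL_BANDS = [
    (0.10, "very unlikely AI-generated"),
    (0.45, "unlikely AI-generated"),
    (0.90, "unclear if it is AI-generated"),
    (0.98, "possibly AI-generated"),
    (1.00, "likely AI-generated"),
]

def label_text(text: str, score: float) -> str:
    """Return a verdict for `text` given an AI-probability `score`."""
    if len(text) < MIN_CHARS:
        raise ValueError(f"Need at least {MIN_CHARS} characters, got {len(text)}")
    for upper, label in LABEL_BANDS:
        if score <= upper:
            return label
    return LABEL_BANDS[-1][1]

# Example: a score of 0.95 falls in the 90%-98% band.
print(label_text("x" * 1_200, 0.95))  # -> "possibly AI-generated"
```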
According to OpenAI, the classifier incorrectly labels human-written text as AI-written 9% of the time.
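That 9% figure is a false positive rate: the share of genuinely human-written samples that the tool flags as AI-written. As a quick illustration (the counts below are invented; only the 9% rate comes from OpenAI), it can be computed like this:

```python
# Illustrative only: a false positive rate is the fraction of
# human-written texts that get labeled as AI-written. These counts
# are made up to match the 9% figure OpenAI reports.

human_written_flagged_as_ai = 9    # false positives (hypothetical count)
human_written_total = 100          # all human-written samples tested

false_positive_rate = human_written_flagged_as_ai / human_written_total
print(f"False positive rate: {false_positive_rate:.0%}")  # -> 9%
```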