The company expressed concern that its detection system could “stigmatize” the use of AI among non-English speakers.
OpenAI appears to be holding back a new “highly accurate” tool capable of detecting content generated by ChatGPT over concerns that it could be tampered with or cause non-English users to avoid generating text with artificial intelligence models.
In a blog post back in May, the company said it was working on various methods to detect content generated specifically by its products. On Aug. 4, the Wall Street Journal published an exclusive report indicating that plans to release the tools had stalled amid internal debate over the ramifications of their release.
In the wake of the WSJ’s report, OpenAI updated its May blog post with new information about the detection tools. The long and short of it is that there’s still no timetable for release, despite the company’s acknowledgment that at least one tool for determining text provenance is “highly accurate and even effective against localized tampering.”