Our discussions with The New York Times had appeared to be progressing constructively through our last communication on December 19. The negotiations focused on a high-value partnership around real-time display with attribution in ChatGPT, in which The New York Times would gain a new way to connect with their existing and new readers, and our users would gain access to their reporting. We had explained to The New York Times that, like any single source, their content didn't meaningfully contribute to the training of our existing models and also wouldn't be sufficiently impactful for future training. Their lawsuit on December 27, which we learned about by reading The New York Times, came as a surprise and disappointment to us.
Along the way, they had mentioned seeing some regurgitation of their content, but repeatedly refused to share any examples despite our commitment to investigate and fix any issues. We have demonstrated how seriously we treat this as a priority, such as when we took down a ChatGPT feature shortly after learning it could reproduce real-time content in unintended ways.
Interestingly, the regurgitations The New York Times induced appear to come from years-old articles that have proliferated across multiple third-party websites. It seems they intentionally manipulated prompts, often including lengthy excerpts of articles, in order to get our model to regurgitate. Even when using such prompts, our models don't typically behave the way The New York Times implies, which suggests they either instructed the model to regurgitate or cherry-picked their examples from many attempts.
Despite their claims, this misuse is not typical or permitted user activity, and it is not a substitute for The New York Times. Regardless, we are continually making our systems more resistant to adversarial attacks designed to regurgitate training data, and we have already made significant progress in our recent models.