Amid an ongoing struggle within the news industry, major newsrooms are preemptively securing their content against ChatGPT, the A.I. chatbot developed by OpenAI. These defensive actions reflect the threat the technology poses to an already challenged sector.
Recently, leading newsrooms have added directives to their websites' robots.txt files to prevent OpenAI's web crawler, GPTBot, from scanning their platforms for content.
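For readers curious what this blocking actually looks like: the standard mechanism is a robots.txt entry naming OpenAI's documented crawler token, GPTBot. The sketch below is illustrative, not any specific outlet's actual configuration.

```text
# robots.txt — served at the site root, e.g. https://example-news.com/robots.txt
# (example-news.com is a placeholder, not a real publisher)

# Block OpenAI's crawler from the entire site
User-agent: GPTBot
Disallow: /

# All other crawlers (search engines, etc.) remain unaffected
User-agent: *
Allow: /
```

OpenAI has said GPTBot honors robots.txt rules, so a compliant crawler will skip disallowed paths; the directive is a request, though, not a technical barrier against scrapers that ignore the protocol.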
The Guardian’s Ariel Bogle reported last week that CNN, The New York Times, and Reuters had blocked GPTBot. But a Reliable Sources review has found several additional news and media giants have also quietly taken this step, including Disney, Bloomberg, The Washington Post, The Atlantic, Axios, Insider, ABC News, ESPN, and Gothamist, among others. Publishers such as Condé Nast, Hearst, and Vox Media, which all house several prominent publications, have also taken the defensive measure.
The deep archives and intellectual property of these news organizations are immensely valuable, arguably essential, for training A.I. models like ChatGPT to provide users with accurate information. On Monday, a news executive, who requested anonymity because he was not authorized to speak on behalf of his company, put it this way: “While most of the internet consists of garbage, traditional media publishers are driven by facts and offer high-quality content.”
Despite the posturing behind the scenes, none of the outlets that have taken the preventive measure of blocking GPTBot offered an on-the-record response when I reached out for comment on Monday. But the move to insert code disallowing OpenAI from drawing on their large libraries of content to train its ever-learning ChatGPT bot reflects the degree to which news organizations are spooked by the company’s technology and are quietly working to address it.
Danielle Coffey, president and chief executive of the News Media Alliance, told me on Monday that news organizations are indeed alarmed by the rapidly advancing technology.
Coffey said that the News Media Alliance, which represents nearly 2,000 publishers in the US, believes newsrooms “are on solid legal ground when it comes to copyright protections.” Nevertheless, they’re apprehensive about how companies like OpenAI might further upend the already embattled news sector.
“I see a heightened sense of urgency when it comes to addressing the use, and misuse, of our content,” Coffey said. “One publisher told me it is an existential threat. Another publisher told me there isn’t a business model with certain uses of A.I. … there is a sense of urgency to address this.”
What exactly these media giants do next, however, remains to be seen. News organizations might feel they’re on solid legal ground, as Coffey told me, but no serious action has yet been taken against OpenAI.
Barry Diller has arguably gone the furthest, taking a notably aggressive stance and signaling a future lawsuit. The NYT is also reportedly weighing whether to sue OpenAI. Meanwhile, the Associated Press went a different route, hammering out its own licensing deal with the A.I. developer, though it notably did not share key terms of the agreement.
If the issue is not resolved, enormous damage could be inflicted on the publishing industry, imperiling the information environment in the US and around the world even more than it is now.
The integration of A.I. bots into search engines, apps, and widely used smart devices could feasibly lead to the closure of numerous newsrooms. Ironically, those bots would be powered by the very information they harvested from those newsrooms.
If these outlets cease to exist, they will leave a void of the authoritative sources needed to train A.I. models. That absence could leave confused bots relying on inaccurate information and propagating misinformation.
“If there is nothing left of quality to feed on,” Coffey said, “then we are all going to end up with a very bleak future.”
Although the stakes are high, most news organizations are, for now, choosing not to publicly address the issue. Instead, they are discreetly locking down their content until a more definitive battle plan can be formulated. The news executive I spoke with on Monday emphasized that, at a minimum, blocking GPTBot makes a clear and unmistakable statement.
“It sends a signal,” the executive said. “Talk to us.”