On Wednesday, The Associated Press released its first official standards regarding its journalists’ use of artificial intelligence—guidelines that may serve as a template for many other news organizations struggling to adapt to a rapidly changing industry. The directives arrive barely a month after the leading global newswire service inked a deal with OpenAI allowing ChatGPT to draw on the AP’s vast archives for training purposes.
“We do not see AI as a replacement of journalists in any way,” Amanda Barrett, VP for Standards and Inclusion, said in a blog post on August 16. Barrett added, however, that the service felt it necessary to issue “guidance for using generative artificial intelligence, including how and when it should be used.”
[Related: School district uses ChatGPT to help remove library books.]
In short, while AP journalists are currently prohibited from using generative AI to create “publishable content,” they are also highly encouraged to familiarize themselves with the tools. All AI output is to be treated as “unvetted source material,” and writers should vet outside sourcing carefully, given the rampant proliferation of AI-generated misinformation. Meanwhile, the AP has committed to not using AI tools to alter any of its photos, video, or audio.
Earlier this year, the Poynter Institute, a journalism think tank, called AI’s rise a “transformational moment.” It stressed the need for news organizations not only to create sufficient standards, but to share those regulations with their audiences for the sake of transparency. In its coverage published on Thursday, the AP explained that it has experimented with “simpler forms” of AI over the past decade, primarily to produce short stories on corporate earnings reports and real-time sports scores, but that the new technological leaps require careful reassessment and clarification.
[Related: ChatGPT’s accuracy has gotten worse, study shows.]
The AP’s new AI standards come after months of controversy surrounding the technology’s usage within the industry. Earlier this year, Futurism revealed that CNET had been using AI to generate some of its articles without disclosing the practice to readers, prompting widespread backlash. AI-generated articles, often laden with errors, have since appeared on Gizmodo and elsewhere. PopSci does not currently employ generative AI writing.
“Generative AI makes it even easier for people to intentionally spread mis- and disinformation through altered words, photos, video or audio…,” Barrett wrote in Wednesday’s AP blog post. “If journalists have any doubt at all about the authenticity of the material, they should not use it.”
According to Barrett, a forthcoming AP committee dedicated to tracking AI developments is expected to update the official guidance as often as every three months.