
The New York Times is officially embracing artificial intelligence (AI) in its newsroom and product development, marking a major step toward AI-assisted journalism.
In an internal announcement, the newspaper introduced AI training for its journalists and debuted Echo, an in-house AI tool designed to summarize articles, briefings, and interactive reports.
The move is particularly striking given that The Times became a central figure in the AI copyright debate after suing OpenAI in December 2023, accusing the company of massive copyright infringement. The lawsuit alleged that OpenAI used The Times’ proprietary articles without permission to train its models, including ChatGPT. Now, despite its legal opposition to AI companies, The Times is integrating AI into its own operations, a shift that could signal broader adoption of AI in newsrooms worldwide.
According to internal documents obtained by Semafor, The Times has approved a suite of AI tools for its editorial and product teams, aiming to improve workflow efficiency and content optimization. The newly sanctioned AI programs include:
- GitHub Copilot – A programming assistant for coding-related tasks
- Google’s Vertex AI – A development tool for AI-based products
- NotebookLM – A document analysis and research tool
- Amazon AI products – Various tools to assist newsroom operations
- OpenAI’s API (non-ChatGPT version) – Accessible only with legal approval
- NYT’s in-house Echo – A beta tool for summarization and content organization
However, the company has imposed strict restrictions on AI usage, ensuring that it remains a supportive tool rather than a content creator. The Times’ guidelines prohibit using AI to draft articles, input confidential or copyrighted materials, bypass paywalls, or generate and publish AI-created images or videos.
Instead, journalists are encouraged to use AI for:
- Generating SEO-friendly headlines and social media content
- Summarizing articles for newsletters in a conversational tone
- Brainstorming interview questions for sources and experts
- Analyzing company documents and organizing research
- Developing news quizzes, FAQs, and interactive audience features
The AI-driven assistance is meant to enhance efficiency rather than replace human reporting. However, some in the newsroom remain skeptical, fearing that AI-generated content could reduce creativity, result in factual errors, and undermine journalistic integrity.
From AI Skepticism to Mainstream Adoption?
The New York Times’ decision to adopt AI marks a significant turning point for an industry that has largely been wary of artificial intelligence in news production. Earlier experiments with AI-driven journalism—such as CNET’s attempt to use AI for financial articles in 2023—exposed significant deficiencies, including factual errors, misleading summaries, and plagiarism concerns.
Studies have shown that AI models struggle with accuracy, especially in real-time reporting. Unlike human journalists, AI lacks the ability to verify sources, understand political and social nuances, or capture the depth of investigative journalism. For this reason, major media organizations had previously kept AI at arm’s length, fearing that overreliance on the technology could degrade the quality of news content.
However, The Times’ structured approach—where AI is used for supplementary tasks rather than primary reporting—may pave the way for greater AI adoption in newsrooms. By restricting AI’s role to headline suggestions, content summarization, and research assistance, the newspaper is attempting to harness AI’s efficiencies while minimizing its risks.
While The Times is now incorporating AI into its operations, it remains one of the most vocal opponents of AI’s unchecked use in media. The company’s lawsuit against OpenAI accuses the tech firm of illegally scraping and repurposing its content to train AI models.
Microsoft, OpenAI’s largest investor, has dismissed The Times’ legal claims, arguing that they threaten technological progress. The case has drawn widespread attention, as its outcome could set a precedent for how AI companies handle copyrighted journalistic content in the future.
Internally, The Times’ embrace of AI has also sparked debate. Some newsroom employees have expressed concerns that AI integration could lead to complacency in reporting, uninspired headlines, and the spread of inaccuracies. Others remain wary due to the industry’s tense relationship with AI firms.
For example, last year, during a strike by Times tech employees, the CEO of AI startup Perplexity made a controversial statement suggesting that AI could replace the striking workers. The remark intensified fears about AI displacing jobs and further strained relations between media professionals and AI companies.
However, this move signals a shift in the relationship between The Times and AI companies, one that many believe will only continue to deepen.