
The use of artificial intelligence (AI) is spreading rapidly in newsrooms around the world. It’s not just about automation; it’s about supporting journalistic work, processing large volumes of data more efficiently, and improving research and production workflows. Major media organizations like The New York Times, Financial Times, dpa, and Reuters have each developed their own strategies to implement AI responsibly and effectively.
1. The New York Times: AI as an Extension of Human Creativity
Since 2024, The New York Times has increasingly relied on AI, following major advancements in generative models that now deliver high-quality results. Importantly, AI is not seen as a replacement for journalistic work. Instead, the newsroom views AI as a tool to support and enhance the work of its journalists.
To guide the use of generative AI, the Times has established a set of principles based on three key tenets:
- AI may only be used in service of the newsroom’s mission.
- Every application must involve human guidance and editorial review.
- Its use must be transparent and aligned with ethical standards.
These principles were developed under the leadership of Zach Seward, Editorial Director for A.I. Initiatives. The range of AI applications at the Times is diverse: AI is used to analyze and categorize images, suggest related articles, craft headlines, and summarize texts.
The Times clearly emphasizes that AI does not replace human creativity but merely supports editorial processes. Readers are kept informed about how content is produced. Where AI plays a significant role, potential risks such as bias or factual inaccuracies are mitigated through human oversight.
2. Financial Times: Building AI Literacy as Part of Company Culture
The Financial Times has adopted a comprehensive approach to integrating AI into the organization. Since early 2024, employees across departments have been actively trained to use AI tools like ChatGPT Enterprise and Google Gemini in their day-to-day workflows. The aim is to increase efficiency, foster creativity, and encourage a critical understanding of new technologies.
To support this initiative, several internal programs have been launched:
- Training sessions tailored to different departments
- A weekly newsletter with practical AI tips for everyday work
- An “AI Fluency Quiz” allowing employees to assess their knowledge
- An AI Skills Framework for individual skill development
- A designated “AI Fluency Lead” role to help embed AI into internal workflows
At the Financial Times, ethical considerations remain central. The training initiatives are designed to raise awareness of both the opportunities and limitations of AI. The goal is long-term, responsible integration into editorial and operational processes.
3. Deutsche Presse-Agentur (dpa): AI for Efficient Research and Fact-Checking
The German Press Agency (dpa) applies AI in multiple core areas, following a long-term, responsible development strategy. For years, dpa has used AI to automate event entries in databases, assist with image search, and support transcription workflows.
In 2025, dpa introduced a new AI-powered research assistant within its internal News Hub. Based on Retrieval-Augmented Generation (RAG), the tool draws exclusively from dpa content and delivers concise, source-backed summaries instead of long lists of links. Developed in collaboration with a U.S. technology partner, the tool helps journalists research faster and more reliably, while saving time and resources by analyzing archive and news material.
In addition to technical development, dpa is heavily involved in media training. It partnered in the government-funded “Wegweiser KI” (“AI Guidepost”) program, which provided workshops, mentoring, and training to nearly 500 media professionals to prepare them for using AI in journalism.
Furthermore, dpa has published five AI guidelines to define its ethical framework. These stress that human oversight, journalistic responsibility, transparency, and ethical standards must always take precedence.
4. Reuters: Scalable Automation with Editorial Oversight
Reuters has been using AI for many years, particularly in business news, where speed, data volume, and editorial reliability are key. Today, over 1,000 business-related updates are published each month through automated systems that complement the manual work of editors.
Technologies currently in use include:
- Automated alerts on company earnings, executive changes, and market analysis
- Website-watching tools that trigger immediate automated responses to new content
- Machine translation to support multilingual content across the newsroom
Looking ahead, Reuters plans to expand its use of generative AI, including:
- Personalized search and notification services
- Automated support for source verification
- Workflow optimization in editorial, analytics, and customer service
- Enhanced data-driven advertising and personalized content solutions
Despite the use of automation, human responsibility remains at the core of Reuters’ editorial model. All AI-generated content is still reviewed by editorial teams, ensuring that cutting-edge technology is combined with traditional journalistic integrity.
5. More Examples
At this year’s ppiDays, hosted by ppi Media on June 30 and July 1 in Hamburg, the international relevance of AI in editorial workflows was clearly evident. Keynote speakers included Anup Gupta, Managing Editor of the Hindustan Times, and Maximilian Bruhn, Head of GenAI at FAZ, both of whom shared real-world insights from their respective organizations.
Anup Gupta’s talk, “Harnessing AI, Preserving Trust – Writing Tomorrow’s Story Today”, emphasized how the Hindustan Times uses AI as a tool to support journalism without undermining reader trust. He stressed that creativity always originates from human minds and that AI is strictly a supportive resource. He also presented use cases where AI is used to generate visual content and graphics.
Maximilian Bruhn provided an inside look at how FAZ uses generative (agent-based) AI as an intelligent editorial assistant, handling research, correcting errors, and supporting journalists through so-called “research agents.” At FAZ, too, the focus is on human quality control and transparency in every step.
Conclusion
Media organizations are already leveraging AI to streamline workflows, improve research, and enhance content accessibility. While each newsroom applies the technology in its own way, they all follow the same basic principles: AI is only deployed where it adds real value and always remains embedded in human editorial oversight. Generative models, natural language search, and automated data analysis are shaping a new data-driven media reality. But creativity, responsibility, and ethical judgment firmly remain in human hands.