News Automation – The rewards, risks and realities of ‘machine journalism’
Price
- For non-members: 250 EUR
- For WAN-IFRA members: Free
Summary
This report focuses on a specific part of news automation: the automated generation of news texts from structured data. This is not crystal-ball gazing: news automation is already making itself felt in the daily life of newsrooms. The examples presented in this report show how automation can aid journalism, as well as its implications and the ethics involved.
Media outlets face ever-growing commercial pressure to extract higher margins from dwindling resources, and that pressure is a key driver of news automation. Right now, one of the main goals of automated content is to save journalistic effort, especially on repetitive tasks, while increasing output volume. Automated production is foremost a tool for aiding journalists and creating additional content.
A defining characteristic of what is labelled "automated news" is its focus on stories that journalists cannot write, or do not necessarily have the time to write. The good news is that, so far, news automation has not replaced humans and looks set to work alongside them in the newsroom.
For all the hype about "robot journalism", we are more or less in the same spot as three years ago. AI has a hype problem: we need to put aside our Hollywood-inspired ideas about super-advanced AI and instead see automation as a logical extension of the Industrial Revolution. The future of automation lies in the decomposition, or deconstruction, of the fundamental principles of journalism. That means breaking journalistic work down into its actual information artefacts and micro-processes to analyse what can be automated and what are inherently human tasks.
Five examples of implementation worldwide
In this report, we present five examples of how news automation has been implemented in newsrooms around the world:
- MittMedia and United Robots (Sweden)
- RADAR (UK)
- The Washington Post (US)
- Valtteri (Finland)
- Xinhua and Caixin (China)
Publishers considering news automation systems face a number of judgement calls. The biggest is whether to buy the system from a service provider or to build and maintain it in-house. Beyond that, they need to weigh their approach to implementation, as well as ethical considerations and transparency.
Automated journalism transforms structured data into news articles, and the quality of the output depends heavily on the quality of the data fed into it. Data quality is often described in terms of the five V's: volume, velocity, variety, value and veracity. Volume, variety and velocity are largely relevant from a business perspective, satisfying content-hungry customers and driving revenue streams. Veracity, on the other hand, matters more from an ethical and journalistic viewpoint.
The process of translating digitally encoded data into human language is called Natural Language Generation (NLG). There's been a lot of research into NLG, but it remains little exploited in the context of algorithmic journalism. One reason for this is the complexity of the natural language used in journalistic settings: journalists are extremely skilled at avoiding repetition, and easy-to-implement NLG approaches only really work where the range of possible news stories is relatively limited.
Because the templates used in news automation are designed by humans, there is a risk that the automated output reflects what humans consider important. The looser the template, the greater the chance of discrepancies, and the higher the risk that readers won't buy into it. Beyond such fact-belief discrepancies, NLG systems can also produce fact-claim discrepancies, which could also be called "incorrect statements" or, more simply, "lies".
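To make the template risk concrete, here is a minimal sketch of template-based NLG of the kind the report discusses. All field names, team names and word choices are illustrative assumptions, not taken from any system described in the report:

```python
# Minimal sketch of template-based NLG: fill a fixed sentence template
# from one row of structured sports data. Field names and phrasing are
# hypothetical, for illustration only.

def render_match_report(match: dict) -> str:
    """Turn structured match data into a single news sentence."""
    margin = match["home_goals"] - match["away_goals"]
    score = f"{match['home_goals']}-{match['away_goals']}"
    if margin > 0:
        # An editorial judgement baked into the template by its human
        # designer: what margin counts as being "crushed".
        verb = "beat" if margin < 3 else "crushed"
        return f"{match['home']} {verb} {match['away']} {score}."
    if margin < 0:
        verb = "lost to" if margin > -3 else "were crushed by"
        return f"{match['home']} {verb} {match['away']} {score}."
    return f"{match['home']} and {match['away']} drew {score}."

print(render_match_report(
    {"home": "AIK", "away": "Hammarby", "home_goals": 4, "away_goals": 1}))
```

Word choices such as "crushed" show how the template designer's judgements about what is important, and how strongly to phrase it, end up in the "automated" text.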
In all use cases, good user perceptions are crucial. This report looks at the impact of saying when stories are created automatically, as well as what happens when users are asked to compare content created by machines with that created by journalists.
- Date:
- 2019-03-08
- Language:
- English
- Type:
- WAN-IFRA Report
- Number:
- 1
- Author:
- Editors: Lindén, Carl-Gustav; Tuulonen, Hanna. Co-authors: Bäck, Asta; Diakopoulos, Nicholas; Granroth-Wilding, Mark; Haapanen, Lauri; Leppänen, Leo; Melin, Magnus; Moring, Tom; Munezero, Myriam; Sirén-Heikel, Stefanie; Södergård, Caj; Toivonen, Hannu
Contact information
Dean Roper
Director of Insights, Editor-in-Chief
WAN-IFRA
Frankfurt am Main, Germany
Phone: +49-69-240063-261
E-Mail: dean.roper@wan-ifra.org