Questions remain over who claims authorship of robot journalism and where liability lies in cases of libel: there is as yet no definitive legal opinion on whether the news organisation or the software company that produces the robot journalism system is liable.
The technology behind robot journalism is Natural Language Generation (NLG). NLG projects are built from templates and conditions. Templates are essentially sentences with gaps that are filled using data and lexicalisation algorithms; conditions are the circumstances that must be met for a template to be used. A software engine arranges the templates into a predetermined order, known as a “storyplot”, to form the narrative. The software that forms the basis of Retresco’s offerings is the rtr textengine.
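The template-and-condition mechanism can be sketched in a few lines of Python. This is an illustrative toy, not Retresco’s rtr textengine; the football data and templates are invented for the example.

```python
# A template is a sentence with gaps; a condition decides whether it is used.
# The ordered list of (condition, template) pairs stands in for a "storyplot".

def render(templates, data):
    """Return the first template whose condition matches the data, filled in."""
    for condition, template in templates:
        if condition(data):
            return template.format(**data)
    return None

# Hypothetical match-report templates, checked in order.
templates = [
    (lambda d: d["home_goals"] > d["away_goals"],
     "{home} beat {away} {home_goals}-{away_goals}."),
    (lambda d: d["home_goals"] == d["away_goals"],
     "{home} and {away} drew {home_goals}-{away_goals}."),
    (lambda d: True,
     "{away} won {away_goals}-{home_goals} away at {home}."),
]

data = {"home": "Bayern", "away": "Dortmund", "home_goals": 3, "away_goals": 1}
print(render(templates, data))  # → Bayern beat Dortmund 3-1.
```

A real engine would add lexicalisation (inflecting words, varying synonyms) on top of this skeleton, but the fill-the-gaps-when-the-condition-holds core is the same.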
NLG can also use big data to spot newsworthy developments, such as a sharp and unexpected increase in one value, which would trigger the production of a special story highlighting the shift. While the most basic use simply fills gaps with data, more in-depth projects analyse the data, make sense of it, and draw conclusions. Financial reporting is one example: such a project involves looking at raw data and interpreting not only what the various financial events are but also ranking them in order of importance.
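One simple way such a trigger could work is an outlier test: flag the latest value if it deviates sharply from the recent history. The z-score threshold and the sales figures below are illustrative assumptions, not a description of any particular product.

```python
# Sketch of spotting a "newsworthy" shift in a data series.
from statistics import mean, stdev

def is_newsworthy(history, latest, z_threshold=3.0):
    """Flag the latest value if it lies more than z_threshold
    standard deviations from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > z_threshold

daily_sales = [100, 102, 98, 101, 99, 103, 100]
print(is_newsworthy(daily_sales, 250))  # sharp unexpected increase → True
print(is_newsworthy(daily_sales, 101))  # ordinary value → False
```

When the flag fires, the engine would select a special “shift” storyplot instead of the routine one.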
Robot journalism is already present in major newsrooms and organisations, particularly in the United States. These include The Los Angeles Times, Forbes, The New York Times, the Associated Press, and ProPublica.
In 2011, QuakeBot was launched by The Los Angeles Times. The system is connected to the US Geological Survey’s Earthquake Notification Service. When it receives a notification about an earthquake, it automatically generates a news story with the salient information of time, location, and magnitude. The story is then placed in the paper’s content management system, where it awaits approval from a human editor. The system first gained widespread recognition in 2014, when it was the first to report that an earthquake of magnitude 4.4 had hit Southern California.
Algorithms can generate news faster than humans and produce content that is potentially less error-prone.
Translation is fairly easy to implement, as stories can be generated in multiple languages simultaneously from the same dataset.
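Multilingual output follows naturally from the template approach: the same data fills a per-language template set. The templates below are invented for the example.

```python
# Sketch of simultaneous multilingual generation from one dataset.
TEMPLATES = {
    "en": "{team} won {goals} to {conceded}.",
    "de": "{team} gewann {goals} zu {conceded}.",
    "es": "{team} ganó {goals} a {conceded}.",
}

def generate_all(data):
    """Fill every language's template with the same data."""
    return {lang: tpl.format(**data) for lang, tpl in TEMPLATES.items()}

for lang, story in generate_all({"team": "Ajax", "goals": 2, "conceded": 0}).items():
    print(lang, story)
```

No translation step is involved; each language gets its own hand-written templates, which is why quality does not degrade the way machine translation of a finished article might.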
NLG content is limited to its data pool. Such systems cannot conduct interviews, draw lines of causality, or provide much (or any) external context.