Journalists are using Gen AI to shape stories about you – here's what it means
A new report, 'AI and the Future of News', published by the Reuters Institute and the University of Oxford, shows that over half of UK journalists are now using AI professionally. Here's what that means for comms.
New research shows that journalists are using Gen AI to discover information about your company and executives. We've long suspected it, but it's good to see the data: over half (56%) of UK journalists now use AI professionally at least once a week, with a further 27% using it less frequently. Only a small minority (16%) have never used AI for journalistic tasks.
The report (found in full here) is based on surveys conducted between August and November 2024 across a representative sample of 1,004 journalists. It is fair to assume these numbers have only increased 12+ months later, and they may be indicative of global usage as well.
I’ve read through the report and pulled out the main bits that struck me as a communications professional.
A quiet revolution
AI’s integration into journalism is not just about flashy headlines or deepfakes. The most common uses are surprisingly practical: transcription, translation, grammar checking, and copy-editing. Nearly half of UK journalists use AI for transcription or captioning at least monthly, a third for translation, and 30% for grammar checking. But the technology is also creeping into more substantive areas such as story research, idea generation, headline writing, and even fact-checking.
This means that Gen AI is now helping to shape the reputations of the entities we represent. It's a profound shift: online communication has typically meant people searching for information and making a human assessment. Increasingly, that assessment is being made by Gen AI.
It’s a double-edged sword
Newsrooms have never faced such pressure: the relentless news cycle, consolidation of media titles, redundancies, changing news behaviours, and so on. Therefore, the promise of AI in journalism is clear: faster turnaround, broader coverage, and the ability to handle vast amounts of data.
But speed comes with risk. The Reuters Institute finds that journalists who use AI more frequently are not more satisfied with the time they spend on complex, creative tasks. Instead, they report spending more time on low-level tasks, such as cleaning data and checking AI output. In other words, AI hasn’t yet delivered the creative liberation some hoped for; instead, it’s introduced new layers of technical diligence.
This means that errors, whether factual inaccuracies or misinterpretations, can propagate quickly. The margin for error is slimmer, and the window for response is shorter.
This appears generational
The research reveals that younger journalists and those with higher levels of management responsibility are more frequent users of AI. Business journalists, in particular, are leading the charge: 43% use AI professionally at least weekly, compared to just 21% of lifestyle journalists.
This generational slant matters. Senior journalists, who are often the ones making editorial decisions, are more likely to see AI as an opportunity, while rank-and-file reporters remain sceptical. This means that the decision-makers shaping coverage of your organisation are increasingly comfortable with AI-driven processes. Understanding their workflows and concerns is now a strategic imperative.
Trust and transparency
Despite widespread adoption, UK journalists remain deeply pessimistic about AI’s impact on their profession. A striking 62% see AI as a “large” or “very large” threat to journalism, while only 15% see it as a significant opportunity. The top concerns? The potential negative impact on public trust, the value of accuracy, and the originality of journalistic content.
Journalists’ scepticism means they may scrutinise AI-generated content more closely, raising the bar for transparency and accountability. Whether a journalist or a communications professional, there is common ground to be found in advocating for responsible AI use, clear labelling, and robust fact-checking.
AI policy and procedures
Most UK journalists report that their main news publication has established some rules or guidelines around AI, covering human oversight, data privacy, and transparency. However, only 27% say their publication has guidelines around bias and fairness. Training is also patchy: only around a third of journalists say their organisation provides AI training.
This means that the standards governing AI use in newsrooms are still evolving, and the gaps in guidance and training leave room for inconsistency in how AI-assisted coverage is produced.
This is an AI future but with increased scrutiny
Journalists overwhelmingly expect their main publication's use of AI to increase in the future. The stance of most newsrooms is supportive, with internet-native titles particularly bullish. This means that AI-driven journalism is not a fad but a behavioural shift that will transform the profession.
With greater adoption comes greater scrutiny. The Reuters Institute highlights that journalists with higher levels of AI knowledge are more concerned about ethical consequences, while daily users are less so. This divergence suggests that, as AI becomes more embedded, debates about its ethical use will intensify.
It’s time to navigate the new normal
The adoption of AI by UK journalists is reshaping the media landscape in ways that are both subtle and profound. A company's presence in Google Search used to be the priority; now you also need to care about how your organisation and executives appear across a plethora of AI platforms.
The future of reputation management is being written, quite literally, by algorithms.