The rise of generative text AI (Gen AI) tools, such as ChatGPT, has sparked a wave of change - and concern - across the PR and communications profession.
Thanks to new research published by communications consultancy Magenta Associates and the University of Sussex, we have a better understanding of not just whether people are using Gen AI, but how they feel about it. The study gathered insights from more than 1,100 content writers and managers, including 22 in-depth interviews.
The findings and analysis are presented in a new white paper - CheatGPT? Generative text AI use in the UK’s PR and communications profession. It reveals a complex picture of AI’s influence, from efficiency gains to ethical concerns, and points to a clear need for industry-specific guidelines.
Key findings: high adoption, but a lack of transparency
The results confirmed what many of us suspected - Gen AI is rapidly embedding itself in PR workflows. With 80% of content writers now using these tools either frequently or occasionally, AI is a present reality, not a distant one. These tools are being used for a range of tasks, from drafting social media posts to generating article outlines, and 68% of professionals reported that AI has improved their productivity. “It’s never been easier to write blogs,” said one survey participant, encapsulating the sentiment of many.
But it’s not all smooth sailing. Despite widespread adoption, our research highlights ethical and transparency issues, with only 20% of respondents feeling comfortable discussing AI use with their clients. Many worry that clients or managers might perceive AI use as “cheating” or view it as compromising their authentic voice. The secrecy surrounding AI use points to the need for open conversation and guidelines that support responsible and transparent usage.
Key findings at a glance
80% of UK communications professionals use Gen AI tools frequently or occasionally, but only 20% have told their line manager, and even fewer (15%) have received any training.
66% of participants agreed training on the use of Gen AI could be useful.
68% of respondents report that Gen AI improves productivity, primarily in initial drafting and ideation stages.
71% of writers said their organisation had no guidelines on Gen AI use, or that they were not aware of any. Among the 29% whose organisations did have guidelines, the advice was often vague, such as "use it selectively".
While 68% consider Gen AI use ethical, concerns persist over transparency, especially as only 20% discuss use openly with clients.
95% of managers are to some degree concerned about the legalities of using Gen AI tools like ChatGPT.
Nearly half of respondents (45%) are concerned to some degree about the IP implications of Gen AI.
Why it matters: power imbalances and ethical implications
One of the most thought-provoking findings is the imbalance of power in AI development and use. While PR and comms professionals increasingly rely on AI tools, most of these tools are controlled by major tech corporations. This dynamic can leave smaller agencies and freelancers - who lack the resources to influence AI development - at a disadvantage. The research indicates that much of the content used to train these algorithms originates from the communications sector itself, potentially without explicit permission.
This imbalance doesn't just affect the tools' ethical standards; it also impacts the quality of outputs and the level of transparency professionals can offer their clients. Dr Tanya Kant, senior lecturer at the University of Sussex and research lead, proposes a response: "Critical algorithmic literacy could empower PR professionals, giving them a framework to engage thoughtfully with AI while also challenging Big Tech's dominance."
Moving forward: practical guidelines for responsible AI use
While Gen AI can be an efficient assistant, the findings underscore that it is not a replacement for human creativity or expertise. Writers report that AI output often feels shallow, repetitive, or “very salesy,” lacking the authentic tone clients expect. As one content writer put it, “I prefer to use my own style and language.” This gap highlights the need for professionals to carefully edit AI-generated content to maintain the quality that defines effective PR.
So, what’s next? The report recommends that PR professionals champion their authorial expertise and advocate for transparency in AI use. Industry-wide guidelines for responsible AI use are also essential, particularly as only 15% of respondents have received any form of training. Guidelines should offer a practical framework for using AI ethically, supporting teams to embrace its efficiencies without compromising the human-centred values of PR.
Gen AI is here to stay. By taking a thoughtful approach, the PR and communications industry can harness its potential responsibly, ensuring that AI remains a powerful tool, but never a substitute for authenticity and creativity.
If you're interested in hearing more about the role of technology in the future of public relations, you might find this PRmoment Podcast interesting.