By Krista Davidson
Krista Davidson describes the risks of using generative AI and how communicators can ensure trust and transparency in AI systems.
With the rise of general purpose AI (GPAI), communicators now have a powerful toolkit at their fingertips. GPAI refers to models capable of performing a wide range of tasks with little or no human oversight. Platforms such as GPT, Claude and LLaMA are so sophisticated they can write computer programs, carry on a conversation, mimic your voice and generate realistic videos. They are the equivalent of a super intelligent, keen, creative graduate student with tons of time. It’s the communicator’s dream, right?
While these technologies produce novel capabilities on a seemingly daily basis, many AI experts and policymakers around the globe have raised the alarm about the dangers of GPAI. In the International AI Safety Report, released in January 2025 ahead of the AI Action Summit, experts agree that the harms of GPAI need to be better understood in order to mitigate the risks.
As they navigate this transformative new set of tools, there are some risks communicators need to be aware of.
AI Malfunctions
With technology comes malfunction, and sometimes malfunctions pose unintended but still harmful risks to the public. One such risk is reliability: when someone consults ChatGPT for legal or medical advice, for example, it may not always provide accurate, evidence-based responses.
GPAI systems also rely on large data sets, which inevitably inherit biases passed on by humans. Left unchecked, these biases can perpetuate real-world harms, such as when facial recognition models fail to recognize people with darker skin tones. Human oversight plays a critical role in identifying and mitigating bias in generative AI systems.
AI systems have also been known to “hallucinate,” generating false outputs, as when Google’s Bard incorrectly claimed that the James Webb Space Telescope had taken the very first image of a planet outside our solar system.
Malfunctions can also occur when humans lose control of AI systems and are unable to manage their outputs.
Data Leaks
Nearly 46 per cent of Canadian workers reported using generative AI in their jobs in 2024, a figure that more than doubled in the span of a year. More concerning still, a quarter of those users admit they have entered proprietary company data into a public AI platform. The potential for data leaks has led some companies, including Apple and Samsung, to ban staff from using ChatGPT after potentially sensitive code was uploaded to the public platform.
Deepfakes
Deepfakes are synthetic, artificially generated videos, photos and audio that appear convincingly real. They are on the rise and have been used in scams. In one incident, a finance employee at a firm was scammed out of $25 million by fraudsters who used deepfake technology to convince the employee that the directive came from the company’s chief financial officer.
Deepfakes have also been widely used to spread misinformation (inaccurate information) and disinformation (false information spread with deliberate, malicious intent) for the purposes of influencing public opinion. In both the Canadian and American elections, the use of deepfakes was on the rise, with more than a quarter of Canadians exposed to fake political content. While there are subtle cues that can help in spotting deepfakes, they are unfortunately becoming more difficult to detect.
Trust and Authenticity in Communications
Despite these potential risks, general purpose AI offers many benefits to companies and organizations, and communicators at all levels of their careers stand to gain. In addition to transforming your social media channels, AI can take on a host of tasks, elevating the communicator’s toolkit for generating engaging and compelling content, editing, designing and analyzing.
While the strategies, practices and policies that ensure the safe and responsible use of AI systems are not yet widely or well understood, communicators have an important role to play. At the heart of communicators’ work is building trust and authenticity, regardless of whether they use AI tools. With AI literacy and trust in AI ranking low among Canadians, according to Stanford’s AI Index, communicators have a vital opportunity to build and maintain trust and transparency in the following ways:
- Disclosing when and what content is AI-generated
- Checking content for accuracy and bias
- Letting users know when they are interacting with AI systems
- Educating themselves and others on the ethical and inclusive use of AI systems in communications
As GPAI becomes more prominent in the workplace, communicators will need to be more deeply involved in helping to develop frameworks that guide and govern the use of generative AI. Some companies, such as Salesforce, have already begun this work to ensure that the use of GPAI systems is safe and inclusive from ideation and development to deployment. An important question to ask: just because GPAI can perform a task, should it?
An important consideration raised by the International AI Safety Report is that humans get to choose how AI is used in the world, and communicators in particular have an important role in promoting trust and transparency in how organizations and companies communicate and interact with the world.
*ChatGPT was used minimally to assist with editing and refining portions of this article.
About the Author
Krista Davidson is a certified, award-winning senior communications leader with experience in the higher education and not-for-profit sectors. Her expertise includes storytelling and content creation, digital communications, issues / crisis communications, artificial intelligence for communicators, thought leadership and equity, diversity and inclusivity in communications. She has worked for a wide range of organizations and institutions, including CIFAR, York University, University of Toronto, CBC and others.
Davidson holds a master’s degree in international journalism from the University of Westminster in the UK and a Bachelor of Arts honours degree from Memorial University. She holds CMP certification and has served as an IABC Gold Quill evaluator. She resides in Whitby, Ontario with her husband, two children and mini poodle, Pepper.
Return to the June 2025 Issue of Communicator