Navigating Ethical Implications for AI-Driven PR Practice
By Michele E. Ewing, APR, Fellow PRSA
September 2023
Each September, PRSA recognizes Ethics Month to bring increased attention to the core foundation of the communications profession. Please visit prsa.org/ethics for additional programming and ethics resources and PRSA’s social media platforms for updates throughout the month.
Should PR professionals disclose the use of AI generative text for content development? Should audiences know they’re interacting with chatbots?
The growing adoption of artificial intelligence (AI) in public relations practice raises ethical concerns that must be thoughtfully navigated to ensure AI aligns responsibly with communication strategies. While AI can streamline public relations work, it must not overshadow the value of human connections, which remain essential to mitigating ethical challenges.
According to a 2023 survey of 400 U.S. PR leaders published by the USC Annenberg Center for Public Relations and WE Communications, communicators are using AI technologies for content creation (visual and written), background research, data analysis, language translation, and audience insights and targeting.
The survey findings indicated that PR leaders have the opportunity to play a pivotal role in promoting the strategic and ethical use of AI through education, experimentation and skill development.
Andrea Gils, marketing director at JacobsEye Marketing Agency and a member of the PRSA Board of Directors, said there are a lot of different ways that communicators can apply AI in their work.
“We must move from fears of being displaced to feeling empowered to be more efficient and effective with our time and our work,” said Gils, who has been working with her team and clients for the past year to harness AI’s potential to optimize the agency’s deliverables.
Understanding the ethical challenges of AI
The integration of AI into PR practice presents numerous advantages; however, it also raises ethical dilemmas, including factual errors and misinformation, fabricated information and disinformation, bias, transparency, privacy issues, information security and negative social consequences, among others. These concerns can erode trust between organizations and their audiences.
“The idea of a practitioner just using AI to generate content without having an active role in screening and editing the content is really dangerous because it (AI) can create something that may be untrue,” said Cayce Myers, APR, professor and director of graduate studies, public relations and advertising division, School of Communication at Virginia Tech. “AI can produce massive disinformation, particularly in deep fakes and other false content we’re already seeing, that will amplify in 2024 with the [presidential] election.”
Linda Staley, APR, Fellow PRSA, corporate communication manager at Carilion Clinic, also shared concerns about how the use of AI in content creation can lead to disseminating inaccurate information, undermining trust and damaging brands and reputations.
She noted that information in ChatGPT (a natural language processing tool allowing human-like conversations) may be dated, and if the accurate answer isn’t available, it will make up or “hallucinate” an answer, resulting in misinformation.
Another ethical concern focuses on inherent biases that may be present within AI algorithms. AI systems learn from historical data, and if that data is biased, then it can lead to perpetuating stereotypes or discriminatory communication and behaviors.
“The AI tools are only as good as the available data, and it’s going to replicate any biases or stereotypes generated by humans,” said Staley, a member of PRSA’s Board of Ethical and Professional Standards (BEPS).
Ensuring fairness and diversity in AI-driven communication strategies requires careful monitoring and analysis by people representing a range of perspectives. “Be aware there’s an inherent bias in AI and recognize AI is a value-free system,” said Myers, a PRSA Board and BEPS member. “It’s reflective of our own biases.”
Transparency and disclosure are also important ethical considerations in AI-driven PR practice. Being transparent about AI use builds trust, reinforces ethical practices and protects reputations. PR professionals should advocate for transparency when AI is adopted for content development, chatbots and other uses.
A study of 2,000 U.K. consumers, conducted for PRWeek by YouGov, found that consumers are concerned about AI and want brands to disclose when AI is being used. Being transparent with organization leaders, clients, employees and other audiences also demonstrates a commitment to innovation.
“It’s showing that you’re not only up to speed with trends but that you know how to apply those tools and leverage technology in a smart way to advance your client’s goals,” Gils said. With this in mind, it’s recommended that PR practitioners disclose when they use AI generative technology. Eventually, as AI adoption grows and becomes a norm, people will assume that AI is used for content development, data analysis and other tasks.
Further, communicators must monitor and verify the sources of information produced by AI technology, safeguarding against any unauthorized access to personal data or proprietary information, as well as copyright infringement or plagiarism.
Addressing ethical issues focused on AI integration
Communicators can proactively evaluate and mitigate ethical dilemmas and negative social consequences associated with AI integration. Here are some initial steps to guide the ethical use of AI technology in PR work:
• Apply existing guidelines and codes. The ethical issues are well captured in the PRSA Code of Ethics because they are not necessarily new ones; they are simply applied in a different context with a different technology, Gils said.
Myers agreed, stating, “Even though we may have a new phenomenon and new problems, the standards of the ethical code remain the same. It’s the benchmark for us to practice good and ethical public relations.”
• Involve diverse stakeholders to discuss, monitor and review. Obtaining diverse perspectives among internal and external audiences about AI use in communications helps recognize ethical implications and social consequences for the organization, industry and society.
Staley said her organization created an AI steering committee composed of organization leaders, communicators, legal, compliance and security officers, data scientists and others who collaborate to examine ethical implications. Engaging in partnerships with industry peers, advocacy groups and policymakers also supports responsible AI implementation.
• Invest in education and training. Promoting an understanding of AI’s capabilities, risks and ethical considerations encourages informed decision-making. Ensuring that all employees comprehend the responsible application of AI and are familiar with AI policies is vital to minimize potential ethical and legal risks. “When you have a fast-evolving new technology that has all these ethical, legal, social and cultural implications, training has never been more important than it is now,” Myers said.
• Address bias and discrimination. Continuously audit and evaluate data used in AI models to ensure fairness and inclusivity. “Train your data systems to be representative of the population,” Gils said. “The way you do that is by diversifying your sources and inputs and asking someone in the community for their perspective.”
• Promote transparency. Clearly communicate to audiences about how AI is used and explain the decision-making process for AI tools and systems. “It’s all about honesty and transparency,” Gils advised. “Be very specific with how you’re using [AI] to collect, analyze and communicate information.”
Technology has impacted public relations for decades, and successful PR educators and practitioners will adopt a mindset focused on understanding the benefits and risks of AI technology while prioritizing human involvement in AI decision-making.
“We need to educate ourselves and encourage others to learn about these AI tools and gain competency on how to use these tools responsibly,” Staley said.