The Importance of Integrating AI Ethics Into the College Curriculum
By Michele E. Ewing, APR, Fellow PRSA
September 2024
Join us as we again celebrate Ethics Month this September. PRSA provides educational opportunities to explore ethics within public relations year-round, including hosting a variety of special programs this month and at ICON. Visit PRSA's Ethics Month page for details.
As the impact of artificial intelligence (AI) on public relations evolves, educators must integrate AI ethics into their curriculum to prepare future professionals for the challenges of this increasingly tech-driven industry.
From brainstorming and content creation to chatbots, AI-powered sentiment analysis and predictive analytics, AI offers a range of tools that can enhance the efficiency and effectiveness of PR strategies and tactics.
However, many ethical concerns come with these AI advancements, such as algorithmic bias, data privacy, misinformation, transparency and accountability. Students need to understand how to ethically augment human critical thinking, communication skills, creativity and emotional intelligence with AI capabilities.
Conveying the importance of using AI responsibly
Cayce Myers, Ph.D., LL.M., J.D., APR, professor and director of Graduate Studies, Public Relations and Advertising Division, School of Communication at Virginia Tech, embraces AI in education to prepare students for the changing workplace.
“We are sending our students into an industry that is using AI more every day,” Myers said. “Why would I try to prepare them for a workplace that is not the workplace they will enter?”
Sarah Maben, Ph.D., APR, assistant professor, Bob Schieffer College of Communication, Texas Christian University, also encourages educators to incorporate the responsible use of AI in courses.
“If we tell students: ‘You can’t do this in the classroom,’ we are hindering our students who will be expected to know some of these tools and at least be conversant in the workplace.”
This sentiment aligns with the Commission on Public Relations Education’s recommendations in its report on advancing undergraduate PR education, which emphasizes the need for entry-level PR professionals to understand how to make “ethical and governance judgments about the AI tools and systems they use and the outputs they generate.”
The 2024 Work Trend Index Report from Microsoft documents the rapid integration of AI in the workplace and underscores the need for AI education to ensure responsible adoption. AI education in public relations curricula prepares students for both current technologies and the ethical challenges they will face in the field.
Strategically integrating AI ethics in PR education
Incorporating ethics throughout the curriculum helps students routinely practice ethical decision-making and enhance critical thinking skills, enabling them to assess and address potential issues proactively. They must understand how AI can influence public perception and trust — the foundation of sound PR practice.
• Apply existing ethical frameworks and guidelines. To start, encourage students to apply existing guidelines and codes as they use AI tools in different courses. For example, the PRSA Board of Ethics and Professional Standards (BEPS) developed a framework, “Promise & Pitfalls: The Ethical Use of AI for Public Relations Practitioners.” It aligns with PRSA’s Code of Ethics and offers guidelines for preventing and addressing ethical issues related to AI use.
“I use the PRSA Code of Ethics all the time and have them refer to that as a tool and as a guide,” said Holly Hall, J.D., APR, professor of strategic communication, Arkansas State University.
She also encourages students to apply the BEPS guidelines to different AI-related ethics scenarios. “I use this resource in class to help students use AI tools appropriately and understand where the guardrails need to be,” Hall said.
• Communicate a clear AI policy. Educators should communicate and model responsible AI use, as AI policies may differ across courses and workplaces. One challenge in integrating AI into PR education is students’ apprehension about using AI for fear of being accused of academic dishonesty.
“My students were initially unsure if they were allowed to use AI, and worried about accusations of plagiarism,” Maben said. “Once I clarified the guidelines, they began enthusiastically exploring AI’s capabilities.”
She recommends professors include an AI policy in the course syllabus and routinely discuss and practice responsible AI use in the classroom. Maben prefers a conversational approach when dealing with potential misuse.
“If a student’s work doesn’t seem like their voice, I initiate a discussion to understand their process and ask for drafting documents,” Maben said. “I believe no detection software can keep up with AI advancements and prefer to rely on direct communication.”
Hall also emphasizes clear communication about how and when students should use AI tools for coursework, and she encourages educators to avoid being overly punitive when AI is misused in courses.
While Hall uses AI detection tools when she suspects a student has used AI inappropriately, she recognizes these tools can deliver false positives. She evaluates suspected AI-generated work by running it through several detection tools, testing the assignment’s prompts herself and talking with the student.
“For first-time offenses, I typically speak with the students about my concern that using AI for this assignment wasn’t an acceptable use and allow them to rewrite this (assignment),” Hall said. She recommends educators find ways to create assignments and classroom experiences that mitigate inappropriate AI use.
For example, educators can incorporate a personal aspect into an assignment, such as connecting life experiences to course concepts.
Myers cautioned educators to avoid obsessing over plagiarism and AI use and to teach with the assumption that AI is being used. “The best thing to do is to start assuming that AI is being used in your classroom on assignments, whether authorized or not,” Myers said. “Then, look for ways to incorporate AI responsibly in your classes.”
• Model the responsible use of AI tools. Maben explains to students how she uses AI tools in her role as a professor, such as to help brainstorm class activities and assignments. She shares examples of the prompts she used and then outlines the same expectations for students.
“I require students to document their AI use by showing the prompts, the structure and the output,” Maben explained. “This helps them understand the importance of a clear paper trail and ethical considerations in using AI.”
Practical experience with AI tools allows students to experiment with and learn about their capabilities, limitations and ethical implications. Myers also emphasized the value of leading discussions about AI’s benefits and disadvantages. He shows examples of how AI didn’t deliver accurate information and how editing is needed to elevate AI-driven work.
Educators should collaborate with PR practitioners to help students learn about current AI uses in PR practice. Guest speakers can demonstrate AI tools, discuss AI-related campaigns and case studies, and share insights to inform the curriculum.
“I’m always looking for ways to keep a pulse on what’s going on in the industry, and I ask our alumni to help address any gaps in our curriculum,” Hall said.
Exploring ideas to teach about AI ethical challenges
Ideally, educators can create AI-related class activities and assignments that encourage critical thinking and provide practical experience in navigating ethical challenges. Analyzing case studies or real-world scenarios helps students understand the complexities of AI ethics and develop problem-solving skills.
Educators can use case studies to illustrate both successful and problematic uses of AI in PR.
“If they’re having to actively participate and think through the scenario, they can think about how they would deal with this dilemma if they were in that position,” Hall explained.
Here are some ideas for helping students explore key ethical challenges in AI-driven public relations:
• Data privacy and security: Educators can provide students with a dataset, ask them to use an AI tool for audience segmentation, and then lead a discussion about the legal and ethical considerations. To explore ethical issues in chatbot use, students could design a chatbot and outline its purpose, data collection practices and ethical guidelines.
• Algorithmic bias and fairness: Students should learn to identify and mitigate biases in AI systems to ensure fairness and equity in public relations practice. Maben uses an assignment called “Bias Busters,” which asks students to analyze various scenarios where AI might introduce bias. Students then propose ideas to eliminate the potential biases. “The bias is inherent in these systems,” Maben said. “We have to make certain we aren't perpetuating it.”
• Transparency and accountability: Educators should discuss with students the importance of transparency and accountability in the development and use of AI technologies.
“So I might present something that is a fake image or a voice clone, and then tell them that it’s not real, and then ask what they think about the ethics of that,” Myers said.
Another idea is to ask students to experiment with AI-generated press releases or social media posts. They can discuss the pros and cons of using AI for content creation and how public relations professionals can maintain authenticity, disclose AI use and ensure quality work.
Maben, who researches moral muteness among public relations practitioners, emphasized the importance of students learning when and how to disclose AI use and understanding the need for human oversight in AI-generated work. She noted interns and entry-level PR professionals often work on technology-driven activities and can promote responsible AI use in the workplace. Hall agreed, adding that employers frequently seek students and graduates with social media expertise, and the demand for AI proficiency is expected to grow.
Myers advocates for a proactive approach.
“I can’t teach students how to write press releases on typewriters because they’re going to use the internet and Word. This is the same kind of revolution,” he said. “As uncomfortable as it may make people, if AI is used responsibly, ultimately [PR students and professionals] create better work.”