Onboarding and Retention, Team Performance, Employee Engagement
Heather's AI Heartburn: How to Keep Her Team Through the AI Change
Heather is worried. She is an executive at a major research institution who loves her team and the people she works with. But AI is threatening their jobs. Heather says, “AI can do everything my team does, and sometimes better, all for less money. How do I keep my team relevant and employed in the face of the AI juggernaut?”
Sound familiar? AI has the potential to help employees in their work, but it could also replace them. The question becomes how organizations can adapt to the changes that will come with AI. In other words, how do you take advantage of what AI has to offer while keeping your team? One answer is to approach the problem from a change management perspective.
Change management is about helping people in organizations learn new skills and behaviors so they can adapt. We teach people new skills, get them ready to go live, and support them with coaching as they develop mastery. In Heather’s situation, her team knows how to write good content. But if they want to stay relevant and keep their jobs, they must learn three new skills: how to write good prompts, fix what AI generates, and use the AI content ethically.
For Heather’s team, this is the nexus between AI and change management. It is the same as any other change, but with AI, the stakes are higher. The solution? Managers must accurately predict the new skills and behaviors that employees must learn, then provide the training and follow-up support.
Let’s take a closer look. The example below expands on the skills that Heather’s team will need to learn. Oh, and there’s a surprise at the end, so keep reading.
Skill #1 – Writing Effective Prompts: The Art of Asking the Right Questions
AI tools like ChatGPT rely on prompts to generate meaningful and accurate responses. The quality of the prompt directly impacts the quality of the output, making this a foundational skill for AI adoption.
A. Be Specific and Clear
Ambiguous prompts often lead to vague or irrelevant results. To improve outcomes, include enough detail to guide the AI: clarify the topic or issue you’re exploring and be as specific as possible. For example, a poor prompt might look like: “Tell me about AI.” A better prompt says, “Explain the key benefits of using AI in healthcare, focusing on diagnostic accuracy and patient care.”
B. Set Context and Constraints
Supplying contextual or situational information helps the AI model tailor its response to be more appropriate and helpful. One way of doing this is to give the AI a framework to operate within by specifying the purpose, audience, format, or organizational context. For example, a prompt such as “Draft a 500-word blog post on the future of AI in education, targeting educators” will give you situation-specific results.
C. Iterate and Refine
AI responses may not meet expectations on the first attempt. Getting a helpful response typically takes a few iterations, so be willing to work with the tool and change the way you’re asking for something. Experiment with rephrasing or adding more details. For example, the original prompt may have been, “List some challenges of AI.” A refined prompt could be, “List five challenges organizations face when implementing AI, including cost and data privacy.”
Be sure to indicate the desired outcome. Explain what you hope to achieve and specify whether you need a strategic overview or tactical steps. When you need evidence-based recommendations, ask the tool to reference them. As a best practice, request supporting research, citations, or case studies so the output stays grounded in verifiable sources.
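To make these ingredients concrete, here is a minimal sketch of how the pieces above (topic, audience, format, constraints, and desired outcome) could be assembled into a single prompt string. The function name and fields are illustrative, not part of any specific tool:

```python
def build_prompt(topic, audience=None, format_hint=None,
                 constraints=None, outcome=None):
    """Assemble a specific, context-rich prompt from its parts.

    Each optional argument maps to one of the practices above:
    audience and format_hint set context, constraints bound the
    response, and outcome states what you hope to achieve.
    """
    parts = [f"Explain {topic}."]
    if audience:
        parts.append(f"Target audience: {audience}.")
    if format_hint:
        parts.append(f"Format: {format_hint}.")
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints) + ".")
    if outcome:
        parts.append(f"Desired outcome: {outcome}.")
    return " ".join(parts)

prompt = build_prompt(
    "the future of AI in education",
    audience="educators",
    format_hint="a 500-word blog post",
    constraints=["cite supporting research", "use recent examples"],
    outcome="practical guidance teachers can apply",
)
```

Filling in only the fields that matter for a given task is the point: each added field narrows the AI’s response the same way the examples above do.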
Skill #2 – Critically Evaluating AI Output: Thinking Beyond the Surface
AI systems can produce compelling but occasionally inaccurate or biased results. Critical evaluation is essential to ensure the reliability and relevance of AI-generated content.
A. Check for Accuracy
Accuracy should be a priority when assessing AI output. Cross-referencing the information with trusted sources helps confirm the validity of facts. For instance, if the AI provides statistics, these should be verified against credible databases or publications to ensure their authenticity.
B. Assess Relevance and Context
It’s important to confirm that the AI’s response aligns with the original intent of your prompt and meets the contextual requirements of your query. For example, if the AI suggests strategies for reducing workplace stress, it’s necessary to validate that these recommendations are practical and suitable for your specific industry.
C. Identify Bias and Gaps
AI can unintentionally reflect the biases inherent in its training data, so it’s essential to be vigilant about spotting stereotypes or skewed perspectives. For instance, if the AI’s output disproportionately highlights male contributors in a historical overview, take the time to research and incorporate more diverse perspectives to create a balanced narrative. Some good questions to ask are:
- What assumptions underlie the response?
- Who benefits and who is potentially harmed by this output?
- Are minority or dissenting viewpoints included?
- What additional perspectives or data could enhance this output?
D. Engage in Iterative Feedback
By providing specific feedback or adjusting the prompt, you can guide the AI toward producing more accurate and relevant responses. For example, if the output feels outdated, you might specify, “Include recent examples from the past five years.” This iterative process ensures the content evolves to meet your expectations.
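The evaluation steps in this skill can be sketched as a simple checklist routine. The questions and function below are illustrative only, not a tool described in this post:

```python
# Checklist distilled from the evaluation steps above (A through D).
EVALUATION_CHECKS = [
    "Are facts and statistics verified against trusted sources?",
    "Does the response match the intent and context of the prompt?",
    "Are minority or dissenting viewpoints represented?",
    "Are examples current enough for the use case?",
]

def review_output(answers):
    """Given one yes/no answer per check, return the checks that failed.

    Each failed check suggests a refined follow-up prompt, e.g.
    'Include recent examples from the past five years'.
    """
    return [check for check, passed in zip(EVALUATION_CHECKS, answers)
            if not passed]

# Example: the output was accurate and on-topic, but one-sided and dated.
failed = review_output([True, True, False, False])
```

Each item returned by `review_output` points to the next iteration of the prompt, which is exactly the feedback loop described above.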
Skill #3 – Addressing Ethical Implications: A Moral Compass for AI Use
AI adoption brings ethical challenges, such as ensuring fairness, protecting privacy, and mitigating unintended consequences. Understanding and addressing these implications is vital for responsible AI use.
A. Understand the Ethical Landscape
This involves familiarizing yourself with foundational principles like transparency, accountability, and fairness. For instance, reviewing guidelines from organizations such as the AI Ethics Initiative or relevant government bodies can provide valuable insights into best practices and standards.
B. Ensure Data Privacy and Security
Whenever possible, anonymized data should be used, and adherence to privacy regulations like HIPAA is non-negotiable. Regular audits of data handling practices can help maintain compliance and reinforce trust.
C. Avoid Reinforcing Bias
Actively seeking diverse perspectives and incorporating varied demographics into training datasets can help minimize systemic bias in AI models. For example, ensuring datasets represent a wide range of users can lead to fairer and more equitable outcomes. One can initially explore this by prompting:
- “What groups are represented in this dataset and what groups are not represented?”
- “What historical inequities or biases are perpetuated in this response?”
D. Promote Transparency and Explainability
Transparency and explainability are equally important, particularly in high-stakes scenarios where understanding how AI decisions are made is crucial. Employing interpretable AI models or providing clear documentation about the decision-making process can enhance trust among stakeholders and improve accountability. It also helps people understand how you managed the AI tool and the role it actually played.
E. Establish Ethical Review Mechanisms
Establishing robust ethical review mechanisms can help organizations navigate potential risks and ensure alignment with moral standards. Multidisciplinary teams, including an ethics committee, can evaluate AI use cases and identify possible harms. For instance, the two-rule method of affordance can serve as a practical framework: ask whether an action will harm others and, if so, whether that harm can be mitigated. If the harm cannot be mitigated, the project should be reconsidered or abandoned.
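The two-question test above could be encoded as a small decision helper. The function and its labels are a sketch of that logic, not an established tool:

```python
def ethics_gate(causes_harm, harm_can_be_mitigated):
    """Apply the two-rule check: does the action harm others,
    and if so, can that harm be mitigated?

    Returns 'proceed' when there is no harm or the harm can be
    mitigated, and 'reconsider or abandon' otherwise.
    """
    if not causes_harm:
        return "proceed"
    if harm_can_be_mitigated:
        return "proceed"  # only with the mitigations in place
    return "reconsider or abandon"
```

An ethics committee would of course weigh far more than two booleans, but making the gate explicit keeps the "abandon" outcome a real option rather than an afterthought.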
Now for the surprise. For this post, Scontrino-Powell (SP) identified the three necessary skills and then prompted ChatGPT to outline each one. SP then critically evaluated the content, edited it, and assessed it against ethical standards, because the expert judgment of trained individuals cannot be replaced by AI (yet). This is an example of how to use AI as an aid rather than a substitute.
In case you were curious, the ChatGPT prompt used was: “Write a short article, 1500 words or less, on the change management involved with AI adoption. Specifically, people need to learn new skills and behaviors: writing good question prompts, critically evaluating the output from ChatGPT, and ethical implications. For each new skill, outline how to do each of these.”
Looking for more information on using AI in your organization?
We recommend the following articles (which we also referenced to independently cross-reference ChatGPT output):
Aguinis, H., Beltran, J. R., & Cope, A. (2024). How to use generative AI as a human resource management assistant. Organizational Dynamics, 53(1), 101029.
Andrieux, P., Johnson, R. D., Sarabadani, J., & Van Slyke, C. (2024). Ethical considerations of generative AI-enabled human resource management. Organizational Dynamics, 53(1), 101032.