AI is a tool created by humans, for humans. It is still a new and complex technology, but it’s quickly becoming an integral part of our daily lives and a strategic imperative for organizations. As we begin to see AI embedded across the workplace, and specifically the HR function, it is important to consider the ethical implications and the concerns of employees.
The Best Aspects of AI
Artificial Intelligence (AI) applications are beginning to be deployed across the talent lifecycle and offer promising solutions for HR professionals. Leveraging AI tools enables human augmentation. Here are four key ways AI is being used to enhance HR processes and outcomes:
Automating Administrative Tasks.
Human Resources teams that use AI to automate time-consuming or redundant tasks can shift their focus from administrative work to tasks that are strategic and vital.
Reducing Human Error.
AI can provide objective results that are consistent and reliable, and its errors can be identified and addressed systematically; human judgment is not as easily quantified or corrected.
Data-Driven Decision Making.
AI can drive data collection and analysis initiatives to help predict candidate behavior, enabling recruiters and managers to make better hiring decisions.
Improving the Employee Experience.
AI has been leveraged to support the creation of real-time feedback platforms for employee engagement and training.
With AI, we can unlock people’s potential and free them to focus on more creative and strategic work. For example, one organization is using AI to build an automated career-pathing tool that employees can access at any time. This resource can pave the way for focused conversations between employees and managers about career development.
Cautions and Concerns
Despite the benefits, the thought of AI having so much influence can feel disconcerting. The term artificial intelligence itself is a broad, loosely defined umbrella covering many sub-fields and applications. Apprehension around AI is therefore understandable: it is an emerging technology evolving at a rapid pace.
The greatest concern with AI, especially in the realm of HR and selection, is the issue of bias. George Lawton, a technology journalist, provides an excellent overview of the topic. Lawton says, “The most common form of bias in AI comes from the historical data used to train the algorithms. Even when teams make efforts to ignore specific differences, AI inadvertently gets trained on biases hidden in the data.” For example, resume analytics may screen for characteristics, such as sports or extracurricular activities, that are more often undertaken by the wealthy or those in certain demographic categories. Lawton makes the point, “Just because chess players make good programmers does not mean that non-chess players couldn’t have equally valuable programming talent.”
Therefore, it is important to identify biased parameters in the training data and remove them so that the algorithm can produce less biased recommendations.
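As a minimal sketch of what removing biased parameters can look like in practice, the snippet below strips proxy features from candidate records before they ever reach a model. The field names here (`extracurriculars`, `zip_code`) are hypothetical examples suggested by Lawton's resume-screening scenario, not fields from any real system:

```python
# Illustrative sketch: strip features known to proxy for socioeconomic or
# demographic characteristics before training or scoring a model.
# Field names below are hypothetical examples, not a real HR schema.

PROXY_FEATURES = {"extracurriculars", "zip_code"}

def strip_proxy_features(candidate: dict) -> dict:
    """Return a copy of the candidate record without proxy features."""
    return {k: v for k, v in candidate.items() if k not in PROXY_FEATURES}

candidates = [
    {"name": "A", "years_experience": 5,
     "extracurriculars": "chess club", "zip_code": "10001"},
    {"name": "B", "years_experience": 7, "zip_code": "60601"},
]

cleaned = [strip_proxy_features(c) for c in candidates]
```

In a real deployment this filtering would happen upstream in the data pipeline, and the list of proxy features would come from a bias audit rather than a hard-coded set.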
Another concern is that we will become overly reliant on AI to make decisions. Because AI can remove friction from a process and increase efficiency, it may be tempting to relinquish control to the algorithm. The key is putting up guardrails. For example, companies should ensure that AI applications never make the final decision on who gets hired, fired, or promoted.
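One way to make such a guardrail concrete in software is to design the system so that an AI score is advisory by definition and a recorded human decision is required before any employment action. The class and field names below are illustrative assumptions, not an API from any real HR platform:

```python
# Illustrative guardrail: the model may recommend, but only a recorded
# human decision can finalize an employment action.
# All names here are hypothetical, not from any real HR system.

from dataclasses import dataclass
from typing import Optional

@dataclass
class HiringRecommendation:
    candidate_id: str
    ai_score: float                        # model output, advisory only
    human_decision: Optional[str] = None   # e.g. "hire" / "reject"
    reviewer: Optional[str] = None         # the accountable person

    def finalize(self, decision: str, reviewer: str) -> None:
        """Record the human decision; the AI score alone never finalizes."""
        self.human_decision = decision
        self.reviewer = reviewer

    @property
    def is_final(self) -> bool:
        return self.human_decision is not None and self.reviewer is not None

rec = HiringRecommendation(candidate_id="C-42", ai_score=0.91)
assert not rec.is_final                    # a score alone is not a decision
rec.finalize("hire", reviewer="hr_manager_01")
```

The design choice is that no code path exists from `ai_score` to a final outcome; finality is only reachable through `finalize`, which demands a named human reviewer.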
Finally, as with any new technology, there will inevitably be missteps. Disruptive innovations usually involve a learning curve. Employing AI will require human oversight to monitor results, course correct, and intervene when necessary.
How do we continue to bring human intelligence into artificial intelligence?
Decisions around employment are high stakes and have significant consequences for individuals and organizations. It is critical to consider the balance between improving the efficiency and accuracy of HR processes and the risk of alienating employees and candidates by implementing AI in an unethical way.
Accordingly, the responsibility will be on organizations to demonstrate their commitment to ethical, transparent AI and to ensure an inclusive employee experience. This involves employing a set of practices and safeguards to ensure AI doesn’t end up betraying people’s trust. Being transparent about the organization’s approach to AI is a good place to start. At a minimum, this should be articulated in an easily accessible public statement.
Still, concerns will remain, and there is a push for legislation to address them. Currently there are no federal regulations or restrictions on AI, but some states have begun to enact related legislation. In addition, the FTC, DOJ, and EEOC have all released statements that echo I/O psychologists’ main concerns. Organizations should carefully monitor guidelines and legal requirements based on current and proposed legislation. Although such legislation is well-intended, its long-term impact on AI is unclear at this point.
Despite its drawbacks, the promise of AI suggests many opportunities to augment and enhance our lives. By merging human intelligence and artificial intelligence, we can strive to develop solutions that will deliver equitable results for all.