Artificial intelligence: is there a human in the loop?

If, like 70% of Canadians as of February 2024 (according to the Leger Marketing firm), you still haven’t tried an AI-based chatbot such as ChatGPT, this article outlines the main features of these tools. Although they yield impressive results, AI models are far from perfect and therefore require human input and validation. Here’s why.

Data approximation

When it comes to generative AI, some degree of compromise is inevitable. To ensure the algorithm can generalize beyond its training data, a balance must be struck between two extremes: overfitting and underfitting.

Overfitting occurs when the AI model has been overtrained or is too complex for the data available. The model memorizes its training data (including its anomalies and noise) but doesn’t adapt well to new data.

Underfitting occurs when the AI model has been undertrained or is too simple to capture the complexity of the relationships in the data. It therefore performs poorly on both the training data and new data.

As a result, some accuracy is often traded away to make the algorithm more versatile, that is, better able to generalize to data it has never seen.
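
To make this trade-off concrete, here is a minimal sketch in Python using scikit-learn and synthetic data (the dataset, polynomial degrees, and noise level are illustrative assumptions, not anything from the article). A very simple model underfits, an overly complex one overfits, and comparing training error with test error makes the difference visible.

```python
# Minimal sketch: underfitting vs. overfitting on synthetic, noisy data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 1))
y = np.sin(2 * np.pi * X[:, 0]) + rng.normal(0, 0.2, size=200)  # noisy signal

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for degree in (1, 4, 15):  # too simple, reasonable, too complex
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(X_train))
    test_mse = mean_squared_error(y_test, model.predict(X_test))
    # Underfitting: high error on both sets. Overfitting: low training error
    # but clearly higher error on unseen test data.
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```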

Algorithm design and its inherent biases

Although algorithm engineers strive to create the best models possible, they might unintentionally introduce many biases into them. There are three types of biases, each of which has its own implications in the context of HR management: 

1. Algorithmic and design bias  
This bias reflects the views, background, and values of the people who designed the algorithm. Choices made at this stage, such as selection criteria or the weighting of variables, may favour certain groups.  

  • Resume scanners: an algorithm programmed with keywords tied to roles traditionally held by men (e.g., “leader,” “decisive”) could put female applicants at a disadvantage. 
  • Personality assessment tools: if based on the dominant culture’s norms, these tools could penalize members of minority groups, whose personality traits may be interpreted differently. 
  • Job and talent matching software: an algorithm based on candidates’ professional experience could exclude talented people with non-traditional backgrounds or who have gaps in their resumes. 

2. Biases linked to source data 
If the training data is incomplete, unbalanced, or not representative of the population’s diversity, the machine learning algorithm will reproduce these biases in the results it generates (the sketch after this list shows a simple way to check for such imbalances).  

  • Analysis of staff morale: if most of the training data comes from comments left by satisfied employees, the algorithm may underestimate the dissatisfaction of other groups, especially minorities. 
  • Performance forecasting: an algorithm trained on biased historical performance data (favouring privileged individuals, for example) could perpetuate these inequalities by recommending unfair promotions or raises. 

3. Historical bias in data  
Historical data often reflects systemic discrimination and disparities in society. Using such data without analyzing and correcting it reproduces and amplifies these injustices.  

  • Salary analysis: if historical data reveals a wage gap between men and women, an algorithm could suggest maintaining that gap or, worse, widening it. 
  • Social media-based recruitment: an algorithm that relies on data from professional social networks could exclude talented individuals who don’t have access to them, or who choose not to use them. 
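
Before training anything on historical HR data, a quick sanity check can surface the kinds of imbalance described above. Here is a minimal sketch in Python with pandas; the column names and figures are hypothetical, and this is an illustration rather than a full fairness audit.

```python
# Minimal sketch: checking representation and historical outcome gaps
# in (hypothetical) hiring data before using it to train a model.
import pandas as pd

# Hypothetical historical data; "gender" and "hired" are assumed column names.
df = pd.DataFrame({
    "gender": ["F", "M", "M", "F", "M", "M", "F", "M"],
    "hired":  [0,   1,   1,   0,   1,   0,   1,   1],
})

# How well is each group represented in the training data?
print(df["gender"].value_counts(normalize=True))

# What hiring rate does each group show historically?
print(df.groupby("gender")["hired"].mean())

# A large gap in either check suggests the data encodes historical bias
# that a model trained on it is likely to reproduce.
```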

The solution: an iterative approach with a human in the loop

When a chatbot receives a prompt, it generates an initial response that may contain inaccuracies and biases. A more detailed, context-rich prompt narrows the margin of error and allows the system to refine its results. 

The iterative approach to crafting prompts aims above all to improve the quality of the responses produced by the generative AI agent. This method explores the nuances of the subject and makes full use of the system’s capabilities. With each iteration, the human can refine the prompt based on the previous response to obtain more precise or targeted information. As a result:  

  • answers become more refined, comprehensive, and meaningful 
  • the human learns to formulate better questions 
  • the chatbot provides increasingly specific and useful information 

Thanks to this continuous improvement process, generated results become richer and better suited to the user’s specific needs, thereby maximizing the value of the AI model. 
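
In practice, this human-in-the-loop cycle can be as simple as the loop sketched below. The ask_chatbot function is a hypothetical placeholder for whichever chatbot or API is being used; what matters is the structure, in which a person reviews each answer and decides whether to refine the prompt or accept the result.

```python
# Minimal sketch of an iterative, human-in-the-loop prompting session.

def ask_chatbot(prompt: str, history: list[str]) -> str:
    """Hypothetical placeholder: send the prompt (and conversation history)
    to any chatbot API and return its answer."""
    return f"[chatbot answer to: {prompt!r}]"

history: list[str] = []
prompt = "Summarize the main drivers of employee turnover in our sector."

while True:
    answer = ask_chatbot(prompt, history)
    history.extend([prompt, answer])
    print(answer)

    # The human reviews the answer and either accepts it or refines the
    # prompt with more context (time period, business unit, desired format...).
    refinement = input("Refine the prompt (or press Enter to accept): ").strip()
    if not refinement:
        break
    prompt = refinement
```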

Didier Dubois CHRP, Distinction Fellow
  • Partner, Organizational Transformation & Strategy
