Benoit Hardy-Vallée

Achieving Fairness in Algorithmic Decision-Making in HR (Season 2, Episode 5)




Join us on this episode as we dive into the complex world of algorithmic fairness in HR with Manish Raghavan, Assistant Professor of Information Technology at the MIT Sloan School of Management. Discover the challenges and opportunities of using algorithms to make decisions about people, and why it matters to prevent algorithms from replicating discriminatory and unfair human decision-making. Get insights into the distinction between procedural fairness and outcome fairness, and understand why the deployment environment of a machine learning model is just as crucial as the technology itself. Learn about the often-hidden scoring mechanisms behind algorithmic tools, the potential dangers of their use, how reliance on common signals can produce similar assessments across organizations, and what it takes to achieve fairness in algorithmic decision-making in HR.




  • Algorithmic fairness concerns the fair use of algorithms to make decisions about people, specifically in the HR field.

  • The objective of algorithmic fairness is to prevent algorithms from replicating biased and unjust human decision-making.

  • Achieving algorithmic fairness is challenging due to a lack of agreement on what constitutes fair decision-making and technical difficulties in incorporating values into an algorithm.

  • There is a distinction between procedural fairness and outcome fairness in algorithmic decision-making.

  • Even when an algorithm does not directly consider demographic attributes, proxies for those attributes can still influence its decisions.

  • Outcomes are the primary way to assess fairness in algorithmic decision-making (a minimal outcome check is sketched after this list).

  • Examining the inner workings of a model (the "black box") to determine fairness is difficult.

  • Psychometric traits can be used as inputs to AI-based HR applications.

  • Automated HR tools, such as resume screeners, make a system's impact easier to analyze, because the decisions come from a single machine learning system rather than from many dispersed human judgments.

  • Vendors selling HR tools may claim high predictive validity, but this does not solve the larger fairness issue.

  • Larger datasets are typically better for machine learning, but can still suffer from problems like lack of coverage and systematic mislabeling.

  • The challenge with larger datasets is ensuring representativeness, meaning the dataset must reflect the world as it should be, not just as it was in the past.

  • Solving systematic discrimination is difficult because it requires determining what it means to be qualified.

  • Organizations can strive for fairness at various levels: product, policy, and systems implementation.

  • Product-level fairness involves developing products that are equally useful and accessible to all.

  • Policy-level fairness involves implementing policies that promote fairness and equity.

  • Systems implementation-level fairness involves ensuring the fair execution of policies and practices.

  • Eye contact can predict success in video interviews, but a tool that relies on this signal is inherently biased in favor of certain groups.

  • The issue of fairness is complex and requires multiple levels of intervention and determination.

  • The deployment environment of a machine learning model is just as important as the technology itself.

  • Algorithmic monoculture refers to the widespread use of similar data, models, and platforms for prediction, leading to similar mistakes across organizations (a toy simulation of this effect appears after this list).

  • For example, LinkedIn contributes to algorithmic monoculture in hiring: because it ranks and scores candidates in the same way for every firm, it limits opportunities for both firms and candidates.

  • Because organizations rely on common signals in assessments (such as work history and experience), they tend to reach similar assessments of the same candidate.

  • The growing use of algorithmic tools for hiring is increasingly concerning, and potentially dangerous.

  • The scoring mechanism behind algorithmic tools is often hidden, making it difficult to understand their consequences.

  • Credit scoring is an example of reducing an individual's worth to a single number.

  • Similarly, reducing a candidate's worth to a single score for a specific role or organization raises fairness concerns.
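
To make the outcome-based view of fairness concrete, here is a minimal sketch of an outcome check on hiring decisions. It is not from the episode: the data is invented, and the check shown is the "four-fifths rule" from US employment guidelines, which flags potential adverse impact when one group's selection rate falls below 80% of another's.

```python
from collections import Counter

def selection_rates(decisions):
    """Per-group selection rates from (group, hired) pairs."""
    totals, hires = Counter(), Counter()
    for group, hired in decisions:
        totals[group] += 1
        hires[group] += int(hired)
    return {g: hires[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates):
    """Lowest group selection rate divided by the highest.
    Below 0.8, the four-fifths rule flags potential adverse impact."""
    return min(rates.values()) / max(rates.values())

# Invented screening outcomes: (demographic group, advanced to next round?)
decisions = [("A", True), ("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False), ("B", False)]

rates = selection_rates(decisions)
print(rates)                        # {'A': 0.75, 'B': 0.25}
print(adverse_impact_ratio(rates))  # ~0.33, well below the 0.8 threshold
```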
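
And here is a toy simulation of algorithmic monoculture, again under invented assumptions (the quality distribution, noise level, and 0.5 screening cutoff are all made up). When every firm screens with one shared model, the candidates that model under-rates are rejected everywhere; independent models make different mistakes, so far fewer candidates are shut out of every firm.

```python
import random

random.seed(0)
N_CANDIDATES, N_FIRMS, NOISE = 1000, 5, 0.3

# Candidates' true quality, uniform on [0, 1].
true_quality = [random.random() for _ in range(N_CANDIDATES)]

def noisy_score(quality, rng):
    # One model's noisy estimate of a candidate's true quality.
    return quality + rng.gauss(0, NOISE)

def fraction_rejected_by_all(shared_model):
    rng = random.Random(1)
    rejected_by_all = 0
    for q in true_quality:
        if shared_model:
            s = noisy_score(q, rng)  # one shared score, reused by every firm
            outcomes = [s < 0.5] * N_FIRMS
        else:
            # Each firm scores the candidate with its own independent noise.
            outcomes = [noisy_score(q, rng) < 0.5 for _ in range(N_FIRMS)]
        rejected_by_all += all(outcomes)
    return rejected_by_all / N_CANDIDATES

print("shared model:      ", fraction_rejected_by_all(True))
print("independent models:", fraction_rejected_by_all(False))
```

Under these assumptions, roughly half of all candidates are rejected by every firm when the model is shared, versus a much smaller fraction when firms score independently.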
