The Double-Edged Sword of AI in Hiring: Navigating Efficiency and Bias

Using AI for hiring seems like a foolproof plan. After all, humans are flawed and biased. However, AI carries biases of its own. Critics of artificial intelligence have long focused on the jobs the technology might eliminate, but new research points to a different harm: AI screening tools discriminating against qualified candidates on the basis of gender, race, and age.

AI has the potential to level the playing field in recruitment, but it is trained on data produced by humans, and so it inherits human biases. Let’s break it down:

Sources of Bias in AI

In theory, an AI-powered screening tool is impartial: it holds no opinions and passes no judgment, so it should make objective decisions. In practice, as AI adoption has increased, so have reported incidents of bias. Here are a few examples from 2023:

  • In June, Bloomberg reported that an analysis of 5,000 images generated by Stable Diffusion found it amplified gender and racial stereotypes beyond those observed in the real world. High-paying jobs were consistently represented by subjects with lighter skin tones, while lower-paying jobs were associated with darker skin tones. Gender stereotypes appeared as well, with cashiers and social workers largely represented by women, while politicians and engineers were mostly men.
  • A spokesperson for StabilityAI, which runs Stable Diffusion, acknowledged that all AI models have biases based on the datasets they’re trained on. This highlights the core issue: flawed datasets lead to flawed models.
  • Workday, a systems software company offering HR and finance solutions, is facing a class-action lawsuit led by Derek Mobley, who alleges its AI screening tool discriminates against older, Black, and disabled applicants. Despite meeting job qualifications, Mobley, a 40-year-old Black man with anxiety and depression, has been rejected from approximately 100 positions since 2018. He represents an undisclosed number of people reporting similar discrimination.

These examples illustrate that AI tools trained on data with cultural, racial, and gender disparities will mirror those biases, perpetuating the very issues companies aim to eliminate.
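To make that mechanism concrete, below is a minimal Python sketch using entirely synthetic data invented for illustration (scikit-learn is assumed as a dependency). The screening model is never shown the protected attribute, yet a correlated proxy feature lets it reproduce the disparity baked into the historical decisions it was trained on:

```python
# Illustrative sketch only: synthetic data, not any real screening tool.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000

# Protected attribute (never shown to the model).
group = rng.integers(0, 2, n)
# Genuinely job-relevant signal.
skill = rng.normal(0.0, 1.0, n)
# Proxy feature (e.g. zip code or school) correlated with group membership.
proxy = group + rng.normal(0.0, 0.5, n)

# Historical decisions favoured group 1 regardless of skill.
hired = (skill + 1.5 * group + rng.normal(0.0, 0.5, n)) > 1.0

# The model sees only skill and the proxy, never the group itself.
features = np.column_stack([skill, proxy])
model = LogisticRegression().fit(features, hired)
screened_in = model.predict(features)

for g in (0, 1):
    rate = screened_in[group == g].mean()
    print(f"group {g}: selection rate {rate:.2f}")
# Group 1 is selected at a far higher rate: the bias in the historical
# labels carries through the proxy feature into the new tool's decisions.
```

The point of the sketch is that simply dropping the sensitive column from the training data does not remove the bias; the disparity in the labels finds its way back in through correlated features.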

Government Response to AI Recruitment Bias

Initially, governmental response to AI recruitment bias was limited. However, several legislative actions have since been taken:

  • In 2019, Illinois passed a bill requiring employers to notify candidates of AI analysis during video interviews.
  • Maryland followed with a prohibition on using facial recognition during pre-employment interviews without applicant consent.
  • In 2021, the Equal Employment Opportunity Commission (EEOC) launched the “Artificial Intelligence and Algorithmic Fairness Initiative” to monitor and assess AI technology in hiring practices.
  • That same year, the New York City Council passed a law requiring employers using AI technology to disclose it to applicants and undergo yearly bias audit checks, with enforcement starting in July 2023. However, critics argue the law is not specific enough and lacks provisions for age and disability discrimination.

The U.S. Justice Department and the EEOC also released a joint guide advising employers against “blind reliance on AI” to avoid civil rights violations. Recently, the EEOC conducted hearings with experts to discuss the potential benefits and harms of AI in the workplace.

The Future of AI in Hiring

Many U.S. companies have integrated AI into their hiring processes. A 2022 study by SHRM found that 79% of companies used automation and/or AI for recruitment and hiring. Organizations should anticipate increased oversight of their hiring practices. The New York City law is expected to influence other states, with California, New Jersey, Vermont, and New York reportedly working on their own AI regulations.

Resource-rich companies like Amazon can develop, train, and test their own tools for better outcomes. Companies purchasing tools from third-party vendors, however, face higher risks of bias and civil rights violations. Thorough vetting and auditing processes are crucial for any business using AI technology for employment purposes.
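As an illustration of what one such audit check can look like, here is a hedged Python sketch of an impact-ratio calculation in the spirit of the EEOC’s “four-fifths rule,” a common screen for adverse impact. The counts are hypothetical, and a real audit involves far more than this single metric:

```python
# Illustrative adverse-impact ("four-fifths rule") check with hypothetical counts.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of a group's applicants that the screening tool advanced."""
    return selected / applicants

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Each group's selection rate divided by the highest group's rate."""
    rates = {g: selection_rate(sel, total) for g, (sel, total) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical screening outcomes: group -> (advanced, total applicants).
outcomes = {
    "group_a": (48, 100),
    "group_b": (30, 100),
}

for group, ratio in impact_ratios(outcomes).items():
    status = "needs review" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} ({status})")
# group_b's ratio falls below the 0.8 threshold, so this tool's outcomes
# would warrant closer scrutiny before continued use.
```

A ratio below 0.8 does not prove discrimination on its own, but it is the kind of signal that should trigger a closer look at the tool before it keeps making decisions.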

The takeaway for businesses? Soon, it won’t be enough to say, “It wasn’t us, AI did it.” Companies must plan for the humanity on both sides of their new AI hiring software.

