With the recent dramatic rise of artificial intelligence (AI) platforms and technology, disruptors are keen to find new ways for AI to automate a host of tasks previously performed by humans. Indeed, AI has even made possible projects that humans could not achieve on their own. Powerful AI tools can be used to write (and even act in) film and television projects, to generate sometimes-harmful recipe ideas, to reproduce music with the help of mind-reading, and even to provide life coaching advice.

It should come as no surprise, then, that AI is seeing increased adoption by human resources branches of various companies as well as by firms dedicated to recruiting and hiring practices.

With the tremendous power and potential of AI come some very real threats and dangers as well. Among the most immediate risks is the likelihood that human biases and prejudices will become ingrained, consciously or not, in the AI tools that we build. A recent peer-reviewed computer science paper found that popular large language models (LLMs) like ChatGPT demonstrate political biases, for instance. Biases may emerge from the data used to train machine learning models. They may also crop up due to the biases of programmers, poor calibration in the machine learning process, or larger systemic biases. The problem is a growing and immediate concern both inside and outside the AI space, and it already contributes to real-world harms such as housing discrimination.

As companies have increasingly found ways to incorporate AI into their hiring practices, these issues have come to a head. Close to two-thirds of employees in the U.S. say they have witnessed workplace discrimination, including in the recruitment and hiring processes. Below, we take a closer look at the current landscape of AI and discriminatory hiring practices, as well as some broader concerns for the future of this space.

How Do Companies Use AI in Their Hiring?

It makes sense that companies would want to automate aspects of HR. Recruiting is time-consuming, expensive, and repetitive, while AI is designed to process vast amounts of data at tremendous speed. Some 99% of Fortune 500 companies and 83% of all employers use automated tools of some kind in their recruiting or hiring processes. Indeed, about 79% of employers that use AI at all use it specifically to support HR activities. While the practice is widespread, it’s important to keep in mind that companies may adopt automation and AI in hiring in a wide variety of ways, some far more extensive than others.

AI programs are capable of assisting with, or in some cases completely taking over, everything from recruiting to interviewing to onboarding new employees. AI programs can scan through troves of resumes or LinkedIn profiles to source potential candidates for a job, sending personalized messages to recruit top targets. These tools can act as chatbots to smooth the application process and answer questions from applicants. They can evaluate application materials and recommend which candidates advance to the next steps in the hiring process. AI programs can even schedule and assist with the interviewing and negotiating processes, and help HR write layoff notices. Unfortunately, bias may be found in any of these areas, although some of these risks remain largely theoretical for the time being.
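To make the resume-screening step concrete, here is a minimal, illustrative sketch of how such a tool might rank applicants against a job description using simple text similarity. The library choice (scikit-learn), the sample data, and the scoring method are all assumptions for illustration; commercial screeners are proprietary and considerably more complex.

```python
# Illustrative sketch only: rank toy resumes against a job description
# by TF-IDF cosine similarity. All names and data here are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = "Seeking data analyst with SQL, Python, and dashboard experience"

resumes = {
    "candidate_a": "Built dashboards in Python and wrote SQL reports for finance",
    "candidate_b": "Managed retail inventory and trained seasonal staff",
}

# Vectorize the job description and the resumes over a shared vocabulary
corpus = [job_description] + list(resumes.values())
vectors = TfidfVectorizer().fit_transform(corpus)

# Score each resume by its similarity to the job description
scores = cosine_similarity(vectors[0], vectors[1:])[0]
for name, score in sorted(zip(resumes, scores), key=lambda pair: -pair[1]):
    print(f"{name}: {score:.2f}")  # higher-scoring resumes would be recommended to advance
```

In a pipeline like this, whatever patterns the vocabulary and underlying data carry flow straight into the rankings, which is exactly where bias can enter.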

Bias in AI Recruitment

In 2018, Amazon scrapped a tool that it had developed over several years to help automate its employee search process by reviewing applicant resumes. The model, which had been trained on resumes submitted to Amazon over a 10-year period, displayed bias against non-male applicants. One likely reason was the data set itself: most applications in the data pool came from male applicants, leading the AI model to “learn” that male candidates were preferable. The model indeed rated applications lower when they included words like “women’s” or made reference to all-women’s colleges. Despite the company’s efforts to address these issues, it ultimately abandoned the project entirely. Even in recent years, Amazon’s efforts to incorporate AI into other projects, including a set of facial recognition tools designed to aid law enforcement and related agencies, have drawn backlash over allegations of inherent bias.
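The mechanism behind the Amazon case is easy to reproduce in miniature. The sketch below uses entirely synthetic data (it is not Amazon’s system) to train a simple classifier on historical hiring outcomes that skew against resumes containing the token “women’s”; the model duly assigns that token a negative weight, even though it says nothing about a candidate’s ability.

```python
# Synthetic illustration of bias learned from skewed historical data.
# This is NOT Amazon's model; the resumes and outcomes are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Past resumes with historical outcomes (1 = hired, 0 = rejected)
resumes = [
    "python developer chess club captain",
    "java engineer chess club member",
    "python developer women's chess club captain",
    "java engineer women's coding society lead",
]
hired = [1, 1, 0, 0]

vectorizer = CountVectorizer()  # its default tokenizer maps "women's" to "women"
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# The learned coefficient for the proxy token is negative: the model has
# absorbed the historical bias rather than anything about job skill.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
print(f"weight for 'women': {weights['women']:.2f}")
```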

Even AI systems designed with the risk of gender bias in mind may have a difficult time maintaining neutrality. Research has shown that women frequently downplay their skills on resumes, while men are more likely to exaggerate theirs. Similar biases can emerge relating to race, age, disability, and more. As the list of screening and pre-screening tools like Freshworks, Breezy HR, and Zoho Recruit continues to grow, so too does the potential for bias.

Other Types of AI Hiring Bias

AI bias in hiring can take many other forms as well. AI tools such as HireVue aim to use applicant computer and cellphone cameras to analyze facial movements, speaking voice, and other parameters to create what the company calls an “employability” score. Detractors say this type of practice is rife with potential for bias against a wide range of applicants, including non-native speakers, people with speech impediments or other medical conditions affecting speech and movement, and more.

Another company developing an AI tool for hiring, Sapia (previously known as PredictiveHire), has used a chatbot to ask candidates questions. Based on responses, it provides an assessment of traits such as “drive” and “resilience.” Again, detractors have said that this type of tool, which also seeks to estimate an applicant’s likelihood of “job hopping” between positions, may hold biases against some candidates.

Other types of AI tools used in hiring may verge on the pseudoscience known as phrenology, which claimed to link skull patterns to personality characteristics. These include some facial recognition services that may be inclined to mischaracterize certain applicants in biased ways. A 2018 study from the University of Maryland, for example, found that Face++ and Microsoft’s Face API, two such facial recognition tools, tended to interpret Black applicants as having more negative emotions than their white counterparts. HireVue discontinued its facial analysis practice in early 2020 following a complaint filed with the Federal Trade Commission by the Electronic Privacy Information Center.

A 2017 study found that deep neural networks were consistently more accurate than humans at detecting sexual orientation from facial images. Other AI tools, like DeepGestalt, can accurately predict certain genetic diseases from facial images. Capabilities like these could lead to bias in recruiting and hiring, whether intentional or not.

What Is Being Done

Many AI developers and companies using AI in their hiring processes are working to eliminate biases as completely as possible. Fortunately, there are also outside efforts to monitor and regulate how AI is used in hiring.

In 2021, the U.S. Equal Employment Opportunity Commission launched an initiative to monitor how AI is used in employment decisions and to enforce compliance with civil rights laws. At the end of 2021, then-attorney general of D.C. Karl Racine announced a bill aiming to ban algorithmic discrimination, while senators from Oregon, New Jersey, and New York introduced the Algorithmic Accountability Act of 2022 with similar aims. The latter bill required impact assessments to determine whether AI systems might suffer from bias and other issues, but it died in early 2023 without passing. More recently, a New York City law addressing AI discrimination in employment practices went into effect in mid-2023.

Even if regulation is slow to catch up to the dangers and risks inherent in AI hiring tools, businesses may make adjustments on their own once it becomes clear that such tools pose a threat. If using a particular AI tool could expose a company to discrimination suits or other legal trouble, for example, that company may be less likely to adopt it.

Fortunately, job applicants can take steps to overcome some of these issues as well. Companies using resume-scanning tools are likely to search for keywords matching the language of the job description, which means resumes incorporating action-focused words drawn from the posting itself may be at an advantage. Applicants can also give themselves a leg up by simplifying the resume’s formatting and submitting it in a common file type, both of which make it easier for AI tools to parse.
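As a rough, hedged illustration of that keyword matching, the toy function below checks which substantive words from a job posting also appear in a resume. Real applicant-tracking systems use far more sophisticated matching; the stopword list and sample text here are assumptions for demonstration.

```python
# Toy illustration of resume keyword matching; real applicant-tracking
# systems are more sophisticated. Stopwords and sample text are invented.
import re

STOPWORDS = {"and", "in", "the", "for", "a", "an", "of", "to", "with"}

def keyword_overlap(job_posting: str, resume: str) -> set:
    """Return the posting's substantive words that also appear in the resume."""
    def tokenize(text):
        return set(re.findall(r"[a-z']+", text.lower())) - STOPWORDS
    return tokenize(job_posting) & tokenize(resume)

posting = "Seeking project manager experienced in Agile delivery and stakeholder reporting"
resume = "Led Agile teams and owned stakeholder reporting for product launches"

print(sorted(keyword_overlap(posting, resume)))
# -> ['agile', 'reporting', 'stakeholder']
```

The more of a posting’s own terms a resume legitimately contains, the better it fares in a filter like this, which is why tailoring language to each posting can help.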
