Machines May Not be the Solution to Tech Recruiting’s Gender Bias

The tech industry is currently being scrutinized for gender discrimination and a gender employment gap. While women make up more than half of the U.S. workforce, they hold less than 20% of U.S. tech jobs. High-profile women at technology companies have come forward to tell their stories of sexual harassment at work. Other women have spoken out about the often-toxic atmosphere for women in technology workplaces.

Perhaps this is why, as Reuters reported, Amazon began testing an AI tool to help streamline the recruiting process. After all, if humans are demonstrably biased in hiring, turning the process over to ostensibly objective machines might seem like the answer. Unfortunately, Amazon discontinued the experimental tool after discovering that it showed bias against women.

The technology rated candidates on a scale of one star to five stars on a variety of factors. It was designed to take in a large number of candidates and output the top few options. However, Amazon discovered that the system taught itself to prefer male candidates: It penalized resumes that included the word “women’s,” as in “women’s tennis team member,” and downgraded graduates of two all-women’s colleges. Further, it gave preference to so-called “masculine language”: words such as “executed” or “captured.”

Amazon tried altering the program to fix these problems, but the issue runs deeper than any particular symptom. The computer model learned from patterns in resumes submitted to the company over a 10-year period, and male applicants have dominated the industry since its inception. The program absorbed the biases of the humans who had done the job before it, and those biases could surface in myriad other ways.
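To make the mechanism concrete, consider a minimal sketch of the general technique (a bag-of-words classifier, not Amazon's actual system; every resume, label, and term below is invented for illustration). Trained on a hiring history that skews male, such a model assigns negative weight to terms that correlate with rejected candidates, like "women's":

```python
# Minimal, illustrative sketch: a bag-of-words classifier trained on
# synthetic "historical hiring" data in which past hires skew male.
# This is not Amazon's system; all data here is invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Toy resumes with historical outcomes: 1 = hired, 0 = rejected.
resumes = [
    "executed product launch, captured market share",       # hired
    "executed migration, led backend team",                 # hired
    "captured requirements, executed rollout",              # hired
    "women's chess club captain, led backend team",         # rejected
    "women's tennis team member, executed product launch",  # rejected
    "collaborated on rollout, women's coding society",      # rejected
]
hired = [1, 1, 1, 0, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: "women" (the token for "women's") ends up
# with a negative coefficient, while "executed" and "captured" are
# rewarded -- bias inherited straight from the training data.
weights = dict(zip(vectorizer.get_feature_names_out(), model.coef_[0]))
for term in ("women", "executed", "captured"):
    print(f"{term:>10}: {weights.get(term, 0.0):+.3f}")
```

This also suggests why patching the program is so hard: deleting one offending term does not remove the underlying correlation, which the model can recover through other proxies in the same data.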

While Amazon has scrapped this project, the potential for more AI involvement in hiring decisions creates a new realm for disparate impact discrimination cases. Under Title VII of the Civil Rights Act of 1964, employers are prohibited from using facially neutral tests or selection procedures that have the effect of disproportionately excluding persons based on race, color, religion, sex, or national origin. While Title VII was initially used only to protect workers and applicants from disparate treatment, or intentional discrimination, it expanded to include disparate impact protection when it became clear that discrimination is not always intentional or even the result of conscious decision-making.
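In practice, the EEOC and courts often screen for disparate impact using the "four-fifths rule" of thumb: a selection procedure is suspect if one group's selection rate falls below 80% of the most-favored group's rate. The sketch below walks through that arithmetic with hypothetical numbers (the applicant and selection counts are invented, not drawn from the Amazon matter):

```python
# Illustrative arithmetic for the EEOC's "four-fifths rule" used to screen
# for disparate impact. All counts below are hypothetical.
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

# Hypothetical outcomes from an automated screening tool.
men_rate = selection_rate(selected=60, applicants=200)    # 30.0% selected
women_rate = selection_rate(selected=20, applicants=150)  # ~13.3% selected

# Impact ratio: the disadvantaged group's rate over the highest rate.
impact_ratio = women_rate / men_rate
print(f"impact ratio = {impact_ratio:.2f}")  # 0.44

# Under the four-fifths guideline, a ratio below 0.80 is evidence of
# adverse impact that the employer may then have to justify.
if impact_ratio < 0.8:
    print("flagged: selection rates suggest disparate impact")
```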

If an AI program such as the one Amazon developed were used to screen applicants (Amazon maintains that the technology was never actually deployed to evaluate candidates), the employer could be subject to disparate impact litigation. Employers would then be put in the position of defending their programs and might have to share data on how the technology was developed, which could radically alter the discovery process and the evidence produced in disparate impact cases.

Though Amazon ultimately decided not to use the technology it developed, its determination that the program learned gender bias could place future uses of AI technology in hiring under a microscope, and even potentially plant the seed for future disparate impact litigation under Title VII.*

*Rachel Foster is an associate editor on the Michigan Technology Law Review.


1 Comment

  1. Machine learning and AI are also as effective as humans in discriminating based on other features, such as names. For example, job candidates with short, non-ethnic names such as “Mike” or “Dave” dominate in the workplace. I would be interested to see more written on this topic.
