Kay Li | MTTLR

Anti-Discrimination Laws and Algorithmic Discrimination

Machine algorithms can discriminate. More accurately, machine algorithms can produce discriminatory outcomes. It seems counterintuitive that dispassionately objective machines can make biased choices, but machines are not fully autonomous decision-makers. Ultimately, they follow instructions written by humans and perform tasks on data provided by humans, and discrimination and bias can creep in at many points in that process. The training data fed to an algorithm may contain inherent biases, and the algorithm may then latch onto factors in the data that are discriminatory towards certain groups.

For example, the natural language processing algorithm “word2vec” learns word associations from a large corpus of text. After finding a strong pattern in its training text of men being associated with programming and women with homemaking, the algorithm produced the analogy: “Man is to Computer Programmer as Woman is to Homemaker.” (A brief code sketch of this analogy mechanism appears at the end of this section.) Such stereotypical determinations are among the many discriminatory outcomes algorithms can produce.

Concerned that decision-making algorithms could produce such discriminatory effects, the European Union (EU) included Article 22 in the General Data Protection Regulation, which gives people “the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.” Although what constitutes “solely automated processing” is debatable, the EU’s concern about algorithmic discrimination is evident. The United States (U.S.), rather than passing laws that specifically target algorithmic discrimination, handles such concerns largely under regular anti-discrimination laws,...
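To make the word2vec example above concrete, here is a minimal sketch of how an analogy query surfaces associations learned from training text. It assumes the open-source gensim library and a locally available pretrained embedding file; the file path and exact token names below are placeholders for illustration, not part of the original study.

```python
# Minimal sketch of a word2vec analogy query using gensim.
# Assumes a pretrained embedding file; "vectors.bin" is a placeholder path.
from gensim.models import KeyedVectors

# Load pretrained word vectors (e.g., embeddings trained on a large news corpus).
vectors = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)

# Analogy arithmetic: "man is to computer_programmer as woman is to ___?"
# The model returns the words whose vectors are closest to
# vec("computer_programmer") - vec("man") + vec("woman").
result = vectors.most_similar(
    positive=["computer_programmer", "woman"],
    negative=["man"],
    topn=3,
)
for word, similarity in result:
    print(f"{word}\t{similarity:.3f}")

# On embeddings trained on biased text, a query like this can return
# stereotyped completions such as "homemaker": the bias comes from the
# training data, not from any explicit rule written by a programmer.
```

The point of the sketch is that no one instructed the model to be sexist; the stereotyped answer falls out of simple vector arithmetic over associations absorbed from the text it was trained on.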