Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification

Proceedings of Machine Learning Research 81:1-15, 2018. Conference on Fairness, Accountability, and Transparency.

Joy Buolamwini, MIT Media Lab, 75 Amherst St., Cambridge, MA 02139
Timnit Gebru, Microsoft Research, 641 Avenue of the Americas, New York, NY 10011

Editors: Sorelle A. Friedler and Christo Wilson

Abstract

Recent studies demonstrate that machine learning algorithms can discriminate based on classes like race and gender. In this work, we present an approach to evaluate bias present in automated facial analysis algorithms and datasets with respect to phenotypic subgroups.

Using the dermatologist-approved Fitzpatrick Skin Type classification system, we characterize the gender and skin type distribution of two facial analysis benchmarks, IJB-A and Adience. We find that these datasets are overwhelmingly composed of lighter-skinned subjects (79.6% for IJB-A and 86.2% for Adience) and introduce a new facial analysis dataset which is balanced by gender and skin type. We evaluate 3 commercial gender classification systems using our dataset and show that darker-skinned females are the most misclassified group (with error rates of up to 34.7%). The maximum error rate for lighter-skinned males is 0.8%.

The substantial disparities in the accuracy of classifying darker females, lighter females, darker males, and lighter males in gender classification systems require urgent attention if commercial companies are to build genuinely fair, transparent, and accountable facial analysis algorithms.

Keywords: Computer Vision, Algorithmic Audit, Gender Classification

1. Introduction

Artificial Intelligence (AI) is rapidly infiltrating every aspect of society. From helping determine who is hired, fired, granted a loan, or how long an individual spends in prison, decisions that have traditionally been performed by humans are rapidly being made by algorithms (O'Neil, 2017; Citron and Pasquale, 2014). Even AI-based technologies that are not specifically trained to perform high-stakes tasks (such as determining how long someone spends in prison) can be used in a pipeline that performs such tasks. For example, while face recognition software by itself should not be trained to determine the fate of an individual in the criminal justice system, it is very likely that such software is used to identify suspects. Thus, an error in the output of a face recognition algorithm used as input for other tasks can have serious consequences. For example, someone could be wrongfully accused of a crime based on an erroneous but confident misidentification of the perpetrator from security video footage analysis. (Our gender and skin type balanced PPB dataset is available for download.)

Many AI systems, e.g. face recognition tools, rely on machine learning algorithms that are trained with labeled data. It has recently been shown that algorithms trained with biased data have resulted in algorithmic discrimination (Bolukbasi et al., 2016; Caliskan et al., 2017). Bolukbasi et al. even showed that the popular word embedding space, Word2Vec, encodes societal gender biases. The authors used Word2Vec to train an analogy generator that fills in missing words in analogies. The analogy "man is to computer programmer as woman is to X" was completed with "homemaker", conforming to the stereotype that programming is associated with men and homemaking with women. The biases in Word2Vec are thus likely to be propagated throughout any system that uses this embedding.
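
This kind of embedding bias probe is straightforward to reproduce with off-the-shelf tooling. The sketch below uses the gensim library and the commonly distributed pretrained GoogleNews Word2Vec vectors to run the standard vector-arithmetic analogy query; the file name and the underscore-joined phrase token are assumptions about that pretrained model, and this is the generic 3CosAdd formulation, not the exact analogy generator trained by Bolukbasi et al.

```python
# Minimal sketch: probing a pretrained Word2Vec embedding for the
# "man : computer programmer :: woman : X" analogy via vector arithmetic.
# Assumes the commonly distributed GoogleNews vectors are available locally
# and that phrases are tokenized with underscores (e.g. "computer_programmer").
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format(
    "GoogleNews-vectors-negative300.bin", binary=True
)

# X is the vocabulary word whose vector is closest to
# (computer_programmer - man + woman).
candidates = vectors.most_similar(
    positive=["woman", "computer_programmer"], negative=["man"], topn=5
)
for word, similarity in candidates:
    print(f"{word}\t{similarity:.3f}")
```

Swapping in other occupation words gives a quick, if informal, picture of which associations the embedding encodes.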

Although many works have studied how to create fairer algorithms and have benchmarked discrimination in various contexts (Kilbertus et al., 2017; Hardt et al., 2016b,a), only a handful of works have done this analysis for computer vision. However, computer vision systems with inferior performance across demographics can have serious implications. Esteva et al. showed that simple convolutional neural networks can be trained to detect melanoma from images, with accuracies as high as experts (Esteva et al., 2017). However, without a dataset that has labels for various skin characteristics such as color, thickness, and the amount of hair, one cannot measure the accuracy of such automated skin cancer detection systems for individuals with different skin types. Similar to the well-documented detrimental effects of biased clinical trials (Popejoy and Fullerton, 2016; Melloni et al., 2010), biased samples in AI for health care can result in treatments that do not work well for many segments of the population.

In other contexts, a demographic group that is underrepresented in benchmark datasets can nonetheless be subjected to frequent targeting. The use of automated face recognition by law enforcement provides such an example. At least 117 million Americans are included in law enforcement face recognition networks. A year-long research investigation across 100 police departments revealed that African-American individuals are more likely to be stopped by law enforcement and be subjected to face recognition searches than individuals of other ethnicities (Garvie et al., 2016). False positives and unwarranted searches pose a threat to civil liberties. Some face recognition systems have been shown to misidentify people of color, women, and young people at high rates (Klare et al., 2012). Monitoring the phenotypic and demographic accuracy of these systems, as well as their use, is necessary to protect citizens' rights and keep vendors and law enforcement accountable to the public.

We take a step in this direction by making two contributions. First, our work advances gender classification benchmarking by introducing a new face dataset composed of 1270 unique individuals that is more phenotypically balanced on the basis of skin type than existing benchmarks. To our knowledge, this is the first gender classification benchmark labeled by the Fitzpatrick (TB, 1988) six-point skin type scale, allowing us to benchmark the performance of gender classification algorithms by skin type. Second, this work introduces the first intersectional demographic and phenotypic evaluation of face-based gender classification accuracy. Instead of evaluating accuracy by gender or skin type alone, accuracy is also examined on 4 intersectional subgroups: darker females, darker males, lighter females, and lighter males. The 3 evaluated commercial gender classifiers have the lowest accuracy on darker females. Since computer vision technology is being utilized in high-stakes sectors such as health care and law enforcement, more work needs to be done in benchmarking vision algorithms for various demographic and phenotypic groups.
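
To make the shape of such an intersectional audit concrete, here is a minimal sketch of the subgroup error-rate computation. It follows the paper's grouping of Fitzpatrick types I-III as lighter and IV-VI as darker, but the column names and DataFrame layout are hypothetical choices for illustration; this is not the authors' released evaluation code.

```python
# Sketch of an intersectional accuracy audit in the spirit of the paper's
# evaluation: error rates are reported for darker/lighter x female/male
# subgroups rather than for gender or skin type alone.
# Column names ("fitzpatrick", "gender", "predicted_gender") are hypothetical.
import pandas as pd

def audit(df: pd.DataFrame) -> pd.DataFrame:
    """Return the gender-classification error rate per intersectional subgroup."""
    df = df.copy()
    # Fitzpatrick types I-III are grouped as lighter, IV-VI as darker,
    # following the binarization described in the paper.
    df["skin"] = df["fitzpatrick"].map(
        lambda t: "lighter" if t in {"I", "II", "III"} else "darker"
    )
    df["error"] = (df["predicted_gender"] != df["gender"]).astype(float)
    return (
        df.groupby(["skin", "gender"])["error"]
          .mean()
          .rename("error_rate")
          .reset_index()
    )

# Toy rows (fabricated purely to show the output shape, not real results):
toy = pd.DataFrame({
    "fitzpatrick": ["II", "V", "VI", "I"],
    "gender": ["female", "female", "male", "male"],
    "predicted_gender": ["female", "male", "male", "male"],
})
print(audit(toy))
```

Reporting the four subgroup error rates side by side, rather than a single aggregate accuracy, is what surfaces the disparities the paper describes.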

2. Related Work

Automated Facial Analysis. Automated facial image analysis describes a range of face perception tasks including, but not limited to, face detection (Zafeiriou et al., 2015; Mathias et al., 2014; Bai and Ghanem, 2017), face classification (Reid et al., 2013; Levi and Hassner, 2015a; Rothe et al., 2016), and face recognition (Parkhi et al., 2015; Wen et al., 2016; Ranjan et al., 2017). Face recognition software is now built into most smartphones, and companies such as Google, IBM, Microsoft, and Face++ have released commercial software that performs automated facial analysis (IBM; Microsoft; Face++; Google).

A number of works have gone further than solely performing tasks like face detection, recognition, and classification that are easy for humans to perform. For example, companies such as Affectiva (Affectiva) and researchers in academia attempt to identify emotions from images of people's faces (Dehghan et al., 2017; Srinivasan et al., 2016; Fabian Benitez-Quiroz et al., 2016). Some works have also used automated facial analysis to understand and help those with autism (Leo et al., 2015; Palestra et al., 2016). Controversial papers such as (Kosinski and Wang, 2017) claim to determine the sexuality of Caucasian males whose profile pictures are on Facebook or dating sites, and others such as (Wu and Zhang, 2016) and the Israeli-based company Faception (Faception) have developed software that purports to determine an individual's characteristics (e.g. propensity towards crime, IQ, terrorism) solely from their faces. The clients of such software include governments. An article by (Aguera y Arcas et al., 2017) details the dangers and errors propagated by some of these aforementioned works.

Face detection and classification algorithms are also used by US-based law enforcement for surveillance and crime prevention purposes. In The Perpetual Lineup, Garvie and colleagues provide an in-depth analysis of the unregulated police use of face recognition and call for rigorous standards.

Many benchmark face datasets are assembled from web images that are automatically filtered by face detectors (Kemelmacher-Shlizerman et al., 2016). Any systematic error found in face detectors will inevitably affect the composition of the benchmark. Some datasets collected in this manner have already been documented to contain significant demographic bias. For example, LFW, a dataset composed of celebrity faces which has served as a gold standard benchmark for face recognition, was estimated to be 77.5% male and 83.5% White (Han and Jain, 2014).

