Updated 2022-12-01
Are Commercial Face Detection Models as Biased as Academic Models?
Authors: Samuel Dooley, George Z. Wei, Tom Goldstein, John P. Dickerson
As facial recognition systems are deployed more widely, scholars and activists have studied their biases and harms. Audits are commonly used to accomplish this, comparing the performance of algorithmic facial recognition systems against datasets with various metadata labels about the subjects of the images. Seminal works have found discrepancies in performance by gender expression, age, perceived race, skin type, etc. These studies and audits often examine algorithms that fall into two categories: academic models or commercial models. We present a detailed comparison between academic and commercial face detection systems, specifically examining robustness to noise. We find that state-of-the-art academic face detection models exhibit demographic disparities in their noise robustness, with statistically significantly decreased performance on older individuals and those who present their gender in a masculine manner. When we compare the size of these disparities to those of commercial models, we conclude that commercial models, despite their relatively larger development budgets and industry-level fairness commitments, are always as biased as or more biased than an academic model.
This preprint and arXiv:2108.12508 were combined, and a more rigorous analysis was added, resulting in the NeurIPS Datasets & Benchmarks 2022 paper arXiv:2211.15937.