Federal study shows face recognition accuracy varies by gender and race

It's not definitive proof of bias, but there are reasons for concern.

Researchers have studied the potential for bias in facial recognition algorithms before, but now it's the US government's turn to weigh in. The National Institute of Standards and Technology has published a study indicating "demographic differentials" in the majority of the facial recognition algorithms it tested. The report, which examined both one-to-one matching (such as verifying a passport photo) and one-to-many matching (looking for criminals in a crowd), found noticeable spikes in false positives based on gender, age and racial background -- but cautioned that this isn't definitive proof of systemic bias.

In one-to-one matches, there were dramatic increases in false positives for African American, Asian and Native American faces compared to their Caucasian counterparts, with mistakes frequently happening "10 to 100 times" more often. African American women were also more likely to be the victims of false positives in one-to-many matches, and women as a whole were two to five times more likely to deal with those false hits. However, these problems didn't crop up everywhere. Asian-developed algorithms, for example, didn't show large discrepancies in results between Asian and Caucasian faces. NIST suggested that this might be due to a more diverse set of training images. In other words, the flaws may stem not so much from the algorithms themselves as from their source data.
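To make those numbers concrete, here is a minimal, purely illustrative Python sketch of what a per-group false positive rate means in one-to-one matching. It is not NIST's code or data; the group names, score distributions and threshold are invented for the example.

import numpy as np

# Hypothetical similarity scores for impostor (non-matching) pairs, split by
# demographic group. Real evaluations use labelled image pairs, not random samples.
rng = np.random.default_rng(0)
impostor_scores = {
    "group_a": rng.normal(0.30, 0.10, 10_000),
    "group_b": rng.normal(0.38, 0.10, 10_000),
}

threshold = 0.55  # pairs scoring above this are accepted as a "match"

# False positive rate per group: the share of non-matching pairs wrongly accepted.
fpr = {group: float(np.mean(scores > threshold))
       for group, scores in impostor_scores.items()}

print(fpr)
# The "10 to 100 times" figure describes ratios such as
# fpr["group_b"] / fpr["group_a"] measured at the same threshold.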

The study is one of the more comprehensive of its kind. While the data on African American women drew on 1.6 million FBI mugshots, the bulk of the study relied on 18.27 million images of 8.49 million people, all plucked from the FBI, Homeland Security and the State Department. None of it was taken from social networks or surveillance cameras, NIST said.

The institute stressed that its researchers "do not explore" the causes of these differences in the report itself. With that said, it believed the information could prove vital to developers, governments and customers who want to understand the "limitations and appropriate use" of facial recognition algorithms.


For civil rights groups, NIST's findings stood as evidence that government and police should curb their uses of facial recognition. ACLU Senior Policy Analyst Jay Stanley maintained that this was evidence facial recognition tech was "flawed and biased," and that a bad result could lead to everything from inconveniences like missing a flight to dire consequences like being placed on terrorist watch lists. Stanley called on government agencies to "immediately halt" use of recognition tech.

Those rights advocates are already getting their wish in some areas, if not as many as they might like. While non-Americans will still deal with face scans, Customs and Border Protection stressed that it wouldn't require scans for US citizens. Likewise, multiple cities have banned facial recognition, with the potential for bias often cited as a factor in the decision. This isn't the same as banning the use of the tech across whole federal- or state-level governments, though, and those deployments that persist won't necessarily address flaws in algorithms or training data. The NIST study could help -- but only if officials take it under serious consideration.