Yet there are two paragraphs in the middle that draw the eye.
The algorithm was aggressively detecting comments denigrating White people more than attacks on every other group, according to several of the documents. One April 2020 document said roughly 90 percent of “hate speech” subject to content takedowns were statements of contempt, inferiority and disgust directed at White people and men, though the time frame is unclear. And it consistently failed to remove the most derogatory, racist content. The Post previously reported on a portion of the project.

Researchers also found in 2019 that the hate speech algorithms were out of step with actual reports of harmful speech on the platform. In that year, the researchers discovered that 55 percent of the content users reported to Facebook as most harmful was directed at just four minority groups: Blacks, Muslims, the LGBTQ community and Jews, according to the documents.
When human users report hate speech, then, 55% of reports are of speech targeting blacks, Muslims, the LGBTQ community and/or Jews; but when the machine blindly identifies and removes denigrating comments, 90% of them target white people and/or men.
The piece's clear perspective is that this shows the algorithm failing: the content it identifies for removal differs from what human users think should be removed. The alternative is that our society has a clear bias towards permitting hate speech aimed at men and/or white people, and that only hate speech aimed at other groups violates our norms enough for people to report it.
This is consistent with Big Tech facial recognition algorithms identifying non-black faces far more accurately than black faces.
Since the algorithms are written by Big Tech's human employees, the questions that are not getting asked are: who are these human programmers, whence their biases, and why are their Big Tech employers doing nothing about their programmers or their programmers' (or their own) biases?
Eric Hines
That’s one way to look at it. Another possibility is that the background level of anti-male or anti-white bias is genuinely high (at least in online social media), but that culturally we aren’t primed to notice it.