Researchers are discovering that facial recognition technology, considered cutting-edge only a few short years ago, has become an error-prone field of high-tech miscalculations.
Aside from often being unable to distinguish the facial differences between a man and a woman across the various racial nuances of humankind, the technology utterly fails on transgender people, classifying self-described transgender men as "women" in almost 30% of cases and transgender women as "men" in about 23% of cases.
The study, recently conducted at the University of Colorado Boulder, tested a number of leading facial analysis services on photos of cisgender, transgender, and otherwise gendered Instagram users.
For those not familiar with the term, "cisgender" simply refers to someone whose gender identity matches the sex they were assigned at birth.
Researchers then gathered 2,450 photos from social media, searching under descriptive hashtags such as #woman, #man, #transwoman, #transman, #agenderqueer, and #nonbinary.
The hashtags themselves were "crowd-sourced" exclusively from "queer, trans, and/or non-binary individuals."
Across the facial recognition applications currently on the market, the overall success rate in identifying cisgender individuals was extremely high, at just under 98% for both men and women.
However, the services performed significantly worse when it came to transgender people. Moreover, because the current technology can only "see" male and female structural features rather than an individual's preferred gender identity, it failed 100% of the time for users who identify as agender, genderqueer, or nonbinary.
The services tested included Amazon's Rekognition, IBM's Watson, Microsoft's Azure, and Clarifai. Assistant professor of information science Jed Brubaker oversaw the research, which was published in November in Proceedings of the ACM on Human-Computer Interaction.
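To make concrete why such services can only "see" two genders, here is a minimal sketch of how this kind of classification is typically requested, using Amazon Rekognition's detect_faces call via the boto3 Python SDK. The input file name is hypothetical, and the other services mentioned expose broadly similar attribute-based APIs; the key point is that the returned Gender attribute is strictly binary.

```python
# Minimal sketch: querying a commercial facial analysis service for gender.
# Uses Amazon Rekognition via boto3; "photo.jpg" is a hypothetical input file.
import boto3

client = boto3.client("rekognition")

with open("photo.jpg", "rb") as f:
    response = client.detect_faces(
        Image={"Bytes": f.read()},
        Attributes=["ALL"],  # request all facial attributes, including Gender
    )

for face in response["FaceDetails"]:
    gender = face["Gender"]  # e.g. {"Value": "Male", "Confidence": 99.1}
    # The service only ever returns "Male" or "Female" here -- there is no way
    # for it to represent a nonbinary or otherwise gendered person.
    print(gender["Value"], gender["Confidence"])
```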
The researchers also found that societal issues play a key role in facial recognition applications, noting that a growing body of research indicates facial analysis technologies suffer from both gender and racial bias.
A recent study conducted in January 2018 by the MIT Media Lab found that Amazon's Rekognition application misidentified darker-skinned women as men one-third of the time. The software also mislabeled white women as men at higher rates than it did white men as women.
Another worrisome issue surrounding the new technology involves how a number of the services label photos: when describing facial images and features, they conflate characteristics such as long blond hair and makeup with subjective classifications like "beautiful" and "pretty." And some labels, like "halter (women's top)" and "women's shorts," seemed unnecessarily feminized.
As an example, the researchers pointed to a nonbinary Instagram user whose photo they included in their study. The user wore “heavy winged eyeliner,” and all the services categorized the user as a woman, the researchers said.
Ultimately, gender identity is an extremely human characteristic that no computer can fully identify.
Researchers are now suggesting that high-tech companies working on new facial recognition software reconsider gender classification.
In fact, Brubaker recommends "abandoning gender classification in facial analysis technology" altogether.
Instead, the researchers advise tech companies to lean into gender-neutral labels like "person," "people," and "human," which they said "provide inclusive information about the presence of human beings in a photograph."
They added that "if gender must be used" in facial recognition, companies should at least be aware of what "kinds of bias are being privileged and what kinds of bias are being made invisible."
For example, they said, it may be acceptable to use labels like "woman" or "transgender" in an effort to hire more people from these disadvantaged identity groups, but it is of course forbidden to use them to discriminate against a transgender person.
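As a rough illustration of that recommendation (a sketch only, not the researchers' own implementation), the binary output from the earlier Rekognition-style example could be post-processed into the gender-neutral "person" labels they suggest, exposing the gender guess only when a caller explicitly opts in. The function name and response format are assumptions for illustration.

```python
# Sketch: post-processing a face-detection response into gender-neutral labels,
# following the researchers' suggestion to prefer "person"/"people" over gendered terms.
# The response layout matches the Rekognition example above; other services differ.

def describe_faces(response, include_gender=False):
    """Return neutral 'person' labels; expose the binary gender guess only on request."""
    labels = []
    for face in response.get("FaceDetails", []):
        label = {"label": "person", "confidence": face["Confidence"]}
        if include_gender:
            # Opt-in only: callers must consciously decide to use this field,
            # knowing it is a binary guess, not the person's stated identity.
            label["gender_guess"] = face["Gender"]["Value"]
        labels.append(label)
    return labels
```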
Perhaps the biggest concern thus far with the questionable technology lies in law enforcement and national security, along with the privacy rights of individuals and the very real potential for lawsuits by individuals who are falsely identified.
Put them in jail since they are guilty of a hate crime.
If this technology is supposed to be able to tell John Doe from Joe Smith, but it can’t tell a woman from a man, what good is it?
It does tell the difference between a woman and a man. What it does not do is play to the person's mental illness. Think of a person's identity as demonstrated by fingerprints and DNA versus aliases. As long as the technology correctly identifies the person, it does not matter what make-believe name the person is using.
My only question (concern) is: does this device correctly "see" the actual physiological truth of the examined subject? We mustn't expect a machine to be able to divine our fantasy.
Better yet, just strip-search, looking for the appropriate plumbing.
How can one comment on an article as ignorant as this one? We try to ignore facts and work fiction into the process, and it doesn't work. No wonder these people are messed up.
Sorry, but I cannot call it a failure of the technology when it identifies a man as a man just because that man is mentally ill and feels he is a female.
It's the bone structure of the head and face that's throwing this contraption a curveball; men have larger, denser bones than women, and race has nothing to do with it.
So no matter what level of crazy these people are on, boys are boys and girls are girls.