by Nicolas Kayser-Bril

A Google service that automatically labels images produced starkly different results depending on the skin tone shown in a given image. The company fixed the issue, but the problem is likely much broader.

On 6 April, the result for the dark-skinned hand had been updated.

Background checks

It is easy to understand why computer vision produces different outcomes based on skin complexion. Such systems process millions of pictures that were painstakingly labeled by humans (the work you do when you click on the squares containing cars or bridges to prove that you are not a robot, for instance) and draw automated inferences from them.

Computer vision does not recognize any object in the human sense. It relies on patterns that were relevant in the training data. Research has shown that computer vision labeled dogs as wolves as soon as they were photographed against a snowy background, and that cows were labeled dogs when they stood on beaches.

Because dark-skinned individuals probably featured much more often in scenes depicting violence in the training data set, a computer making automated inferences on an image of a dark-skinned hand is much more likely to label it with a term from the lexical field of violence.
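To make the comparison concrete, the following is a minimal sketch of how such an experiment could be reproduced with the label detection endpoint of Google Cloud Vision, the service discussed in this article. The image file names are placeholders rather than the pictures used in the reported test, and the labels and confidence scores returned will depend on the model version Google serves at the time.

```python
# Minimal sketch: compare the labels Google Cloud Vision returns for two images.
# Requires the google-cloud-vision package and valid application credentials.
# The file names below are hypothetical placeholders.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

def get_labels(path):
    """Return (description, score) pairs for the labels detected in one image."""
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [(label.description, label.score) for label in response.label_annotations]

for path in ["hand_light_skin.jpg", "hand_dark_skin.jpg"]:
    print(path)
    for description, score in get_labels(path):
        print(f"  {description}: {score:.2f}")
```

Running the same pair of images through the endpoint over time would show whether a discrepancy of the kind described above persists after an update.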

Other computer vision systems show similar biases. In December, Facebook refused to let an Instagram user from Brazil advertise a picture, arguing that it contained weapons. In fact, it was a drawing of a boy and Formula One driver Lewis Hamilton. Both figures had dark skin.


Real-world consequences

Labeling errors could have consequences in the physical world. Deborah Raji, a tech fellow at New York University’s AI Now Institute and a specialist in computer vision, wrote in an email that, in the United States, weapon recognition tools are used in schools, concert halls, apartment complexes and supermarkets. In Europe, automated surveillance systems deployed by some police forces probably use them as well. Because most of these systems are similar to Google Vision Cloud, “they could easily have the same biases”, Ms Raji wrote. As a result, dark-skinned individuals are more likely to be flagged as dangerous even if they hold an object as harmless as a hand-held thermometer.

Nakeema Stefflbauer, founder and CEO of FrauenLoop, a community of technologists with a focus on inclusivity, wrote in an email that bias in computer vision software would “definitely” impact the lives of dark-skinned individuals. Because the rate of misidentification is consistently higher for women and dark-skinned people, the spread of computer vision for surveillance would disproportionately affect them, she added.

Referring to the examples of Ousmane Bah, a teenager who was wrongly accused of theft at an Apple Store because of faulty face recognition, and of Amara K. Majeed, who was wrongly accused of taking part in the 2019 Sri Lanka bombings after her face was misidentified, Ms Stefflbauer foresees that, absent effective regulation, whole groups could end up avoiding certain buildings or neighborhoods. Individuals could face de facto restrictions on their movements, were biased computer vision to be more widely deployed, she added.

Incremental change

In her statement, Ms Frey, the Google director, wrote that fairness was one of Google’s “core AI principles” and that they were “committed to making progress in developing machine learning with fairness as a critical measure of successful machine learning.”

But Google’s image recognition tools have returned racially biased results before. In 2015, Google Photos labelled two dark-skinned individuals “gorillas”. The company apologized but, according to a report by Wired, did not fix the issue. Instead, it simply stopped returning the “gorilla” label, even for pictures of that specific mammal.

That technology companies still produce racially biased products can be explained by at least two reasons, according to AI Now’s Deborah Raji. Firstly, their teams are overwhelmingly white and male, making it unlikely that results that discriminate against other groups will be found and addressed at the development stage. Secondly, “companies are now just beginning to establish formal processes to test for and report these kinds of failures in the engineering of these systems,” she wrote. “External accountability is currently the main method of alerting these engineering teams,” she added.

“Unfortunately, by the time someone complains, many have already been disproportionately impacted by the model’s biased performance.”
