Facebook’s feckless ‘Fairness Flow’ won’t fix its broken AI


Facebook today published a blog post detailing a three-year-old solution to its modern artificial intelligence problems: an algorithm inspector that only works on certain kinds of systems.

Up front: Called Fairness Flow, the new diagnostic tool allows Facebook’s machine learning developers to determine whether certain types of machine learning systems contain biases against or toward specific groups of people. It works by inspecting the data flow for a given model.

Per the corporate blog post:

To measure the performance of an algorithm’s predictions for certain groups, Fairness Flow works by dividing the data used by a model into relevant groups and calculating the model’s performance group by group. For example, one of the fairness measures examined by the toolkit is the number of examples from each group. The goal is not to have each group represented in exactly the same numbers, but to determine whether the model has sufficient data from each group.

Other areas that Fairness Flow examines include whether a model can accurately rank content for people in different groups, and whether a model consistently overestimates or underestimates one or more groups relative to others.
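To make that description concrete, here is a minimal Python sketch of the kind of group-by-group audit the post describes: count examples per group, compute per-group accuracy, and compare average predicted scores with actual outcomes to spot systematic over- or underestimation. The data, group names, and 0.5 decision threshold are hypothetical; this is an illustration of the general technique, not Facebook’s actual tooling.

```python
# Minimal sketch of a group-wise fairness audit (illustrative only,
# NOT Facebook's Fairness Flow code).
from collections import defaultdict

# Hypothetical records: (group, true_label, model_score)
examples = [
    ("group_a", 1, 0.91), ("group_a", 0, 0.22), ("group_a", 1, 0.68),
    ("group_b", 1, 0.47), ("group_b", 0, 0.55), ("group_b", 0, 0.31),
]

# Split the dataset into groups.
by_group = defaultdict(list)
for group, label, score in examples:
    by_group[group].append((label, score))

for group, rows in by_group.items():
    n = len(rows)  # representation: number of examples per group
    labels = [label for label, _ in rows]
    scores = [score for _, score in rows]
    # Accuracy per group, using an assumed 0.5 decision threshold.
    accuracy = sum((score >= 0.5) == bool(label) for label, score in rows) / n
    # Calibration gap: positive means the model tends to over-predict
    # for this group; negative means it tends to under-predict.
    calibration_gap = sum(scores) / n - sum(labels) / n
    print(f"{group}: n={n}, accuracy={accuracy:.2f}, calibration_gap={calibration_gap:+.2f}")
```

Running a check like this flags disparities; deciding what counts as an acceptable gap, and what to do about it, is still a human judgment call, which is the article’s point about diagnostic tools.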

Background: The blog post doesn’t clarify exactly why Facebook is touting Fairness Flow right now, but its timing does give a clue as to what could be going on behind the scenes on the social network.

Karen Hao of MIT Technology Review recently wrote an article scrutinizing Facebook’s anti-bias efforts. Her article claims that Facebook is driven solely by “growth” and apparently has no intention of tackling AI bias wherever doing so would get in the way of its relentless expansion.

Hao wrote:

It was clear from my conversations that the AI team had failed to make progress against disinformation and hate speech because they had never made these issues their primary focus. Most importantly, I realized that if it tried to, it would be set up for failure.

The reason is simple. Everything the company does and chooses not to do is driven by one motivation: Zuckerberg’s relentless desire for growth.

Following Hao’s article, Facebook’s AI boss Yann LeCun immediately rejected the piece and its reporting.

Facebook reportedly timed the publication of a research paper to coincide with Hao’s article. Based on LeCun’s reaction, the company appeared blindsided by the piece. A few weeks later, we were treated to a blog post of over 2,500 words on Fairness Flow, a tool that addresses the exact issues raised in Hao’s article.

[Read: Facebook AI boss Yann LeCun goes off in Twitter rant, blames talk radio for hate content]

However, “addresses” may be too strong a word. Here are some excerpts from Facebook’s blog post about the tool:

  • Fairness Flow is a technical toolkit that allows our teams to analyze how certain types of AI models and labels perform across different groups. Fairness Flow is a diagnostic tool, so it cannot solve fairness issues on its own.
  • The use of Fairness Flow is currently optional, although it is encouraged in cases supported by the tool.
  • Fairness Flow is available to product teams across Facebook and can be applied to models even after they have been deployed to production. However, Fairness Flow cannot analyze all types of models, and because every AI system has a different purpose, its approach to fairness will differ.

Quick take: No matter how long and boring Facebook makes its blog posts, it can’t hide the fact that Fairness Flow can’t solve all of the problems with Facebook’s AI.

The reason bias is such a problem at Facebook is that much of the social network’s AI is black box AI, meaning we have no idea why it makes the decisions or produces the outputs it does in any given iteration.

Imagine a game where you and all of your friends toss your names in a hat, then your good buddy Mark draws a name and gives that person a five-dollar bill. Mark does this 1,000 times, and as the game progresses you notice that only your white, male friends get money. Mark never seems to draw the name of a woman or a non-white person.

Upon investigation, you’re convinced that Mark isn’t intentionally doing anything to cause the bias. Instead, you determine that the problem must be occurring inside the hat.

At this point you have two options: First, you can stop playing the game and go get a new hat, and this time test it before playing again to make sure it doesn’t have the same biases.

Or you can go the route Facebook has chosen: tell people that hats are inherently biased and that you’re working on new ways to identify and diagnose these problems. After that, just insist that everyone keep playing the game while you figure out what to do next.

Bottom line: Fairness Flow is nothing more than an opt-in “observe and report” tool for developers. It doesn’t solve or fix anything.

Published March 31, 2021 – 17:38 UTC

