Yann LeCun, Facebook's world-renowned AI guru, had some issues with an article written about his company yesterday. So he did what any of us would do: he took to social media to voice his grievances.
Only, he didn't pick the fight on Facebook as you might expect. Instead, over a span of several hours, he went back and forth with a number of people on Twitter.
No, this is not the case.
Apparently, one can write about the fairness of AI without paying attention to journalistic fairness.
One of those cases where you know the real thing, read an article about it, and go “WTF?”
– Yann LeCun (@ylecun) March 12, 2021
Can we stop for a moment and appreciate that, on a random Thursday in March, the father of Facebook’s AI program took to Twitter to discuss an article by Karen Hao, AI reporter for MIT Technology Review?
Hao wrote an impressive longform feature on Facebook’s content moderation problem. The article is titled “How Facebook Got Addicted to Spreading Disinformation,” and the subheading is a doozy:
The company’s AI algorithms gave it an insatiable habit for lies and hate speech. Now the man who built them can’t fix the problem.
I will quote here only one paragraph from Hao’s article which captures its essence:
Everything the company does and chooses not to do is driven by one motivation: Zuckerberg’s relentless desire for growth … [Facebook AI lead Joaquin Quiñonero Candela’s] AI expertise has accelerated this growth. His team set out to target AI bias, as I learned in my reporting, because preventing AI bias helps the company avoid proposed regulation that might, if passed, hamper that growth. Facebook’s leadership has also repeatedly weakened or halted many initiatives meant to eliminate disinformation on the platform, because doing so would undermine that growth.
There’s a lot to unpack there, but the bottom line is that Facebook is driven by the singular goal of “growth.” The same could be said of cancer.
LeCun, apparently, did not like the article. He jumped on the app Jack made and shared his thoughts, including what appear to be personal attacks questioning Hao’s journalistic integrity:
Yes, that’s why your article came as a surprise to me and to many of my colleagues.
From my perspective, your piece is full of factual errors and incorrect assumptions of bad intent.
What happened to you?
– Yann LeCun (@ylecun) March 12, 2021
His shade yesterday extended to blaming cable news and talk radio for his company’s woes:
More importantly, the increase in polarization is a uniquely American phenomenon (which started before FB existed)
Many other countries have seen polarization * decrease * over the past decade. And they use FB just as much.
I blame cable news and talk radio.
– Yann LeCun (@ylecun) March 12, 2021
Really, Yann? Is increased polarization via disinformation uniquely American? Have you met my friend “The Reason Every War Has Ever Been Waged In All Of History?”
It wouldn’t be the first time he took to Twitter to defend his company, but there was more happening yesterday than it seems. LeCun’s tirade began with a tweet announcing new fairness research from Facebook AI Research (FAIR).
According to Hao, Facebook coordinated the publication of the document to coincide with the Tech Review article:
People are wondering: did FB publish this in response to your story? No, let me clarify. They wanted this paper to be in my story and gave me an early draft. Then, anticipating that it would be in my story, they timed its release to coincide with my piece, hoping the two would complement each other. https://t.co/pST8LYsTmG
– Karen Hao (@_KarenHao) March 11, 2021
Based on the evidence, it appears Facebook was absolutely shocked by Hao’s reporting. It seems that the social network was expecting information on the progress made in strengthening its algorithms, detecting bias and combating hate speech. Instead, Hao laid bare Facebook’s core problem: it’s a spider’s web.
Those are my words, not Hao’s. What she wrote was:
Toward the end of our hour-long interview … [Quiñonero] began arguing that AI was often unfairly painted as the ‘culprit.’ Whether or not Facebook used AI, he said, people would still spew lies and hate speech, and that content would still spread across the platform.
If I were to rephrase that for impact, I might say something like: “Whether or not our company dumps gasoline on the ground and hands everyone a book of matches, we’re still going to have wildfires.” But, again, those are my words.
And when I say Facebook is a spider web, what I mean is this: spider webs are good, until they get too big. For example, if you see a spider web in the corner of your barn, that’s great! It means you have a little arachnid warrior helping you take down the nastiest bugs. But if you see a spider web covering your entire city, like something out of “Kingdom of the Spiders,” that’s a really bad thing.
And it’s obvious LeCun knows that, because his whole Twitter thread yesterday was just one giant admission that Facebook has grown beyond anyone’s control. Here are some snippets of his tweets on the subject:
… When it became clear that such things were happening, FB took corrective measures quickly …
certainly not fast enough to prevent bad things from happening in the meantime (hence these UN reports).
But taking these corrective measures is neither instantaneous, nor easy, nor cheap….
When the Myanmar government started spreading hateful disinformation against the Rohingya, FB took down their puppet accounts and hired Burmese-speaking moderators.
But the volume was such that AI systems had to be developed to automate as much as possible.
So FB developed the best Burmese-English translation system in the world, so that English-speaking moderators could help …
Simultaneously, it developed detectors for hate speech and violent speech in Burmese. But data was scarce.
So, over the past two years, FB has developed multilingual systems capable of detecting hate speech in any language without requiring much training data.
It all takes expertise, time, money and, in this case, the latest advances in AI.
Interesting. LeCun’s core assertion seems to be that stopping disinformation is really hard. Well, it is. There are a lot of really difficult things we haven’t figured out yet.
As my colleague Matthew Beedham pointed out in today’s Shift newsletter, building a production car powered by a nuclear reactor in its trunk is really tough.
But, as the scientists who worked on exactly this for Ford decades ago discovered, nuclear technology simply isn’t advanced enough to safely power mainstream production vehicles. Nuclear is ideal for aircraft carriers and submarines, but not so much for the family station wagon.
I would say Facebook’s impact on humanity is almost certainly far, far more damaging and far-reaching than a measly little nuclear reactor in the trunk of a Ford Mustang. After all, only 31 people died as a direct result of the Chernobyl meltdown, and experts estimate that at most around 4,000 people were indirectly affected (from a health perspective, anyway).
Facebook has 2.45 billion users. And whenever its platform creates or exacerbates a problem for one of those users, its response is some version of “we’ll take a look at it.” The only place this kind of reactive approach to technological harm actually serves the public is in a game of Whac-A-Mole.
If Facebook were a nuclear power plant that leaked waste into our drinking water every time someone misused the power grid, we would shut it down until it plugged the leaks.
But we’re not shutting down Facebook, because it’s not really a business. It’s a standalone entity with a trillion-dollar PR machine. It is a country. And we have to either sanction it or treat it as a hostile force until it does something to prevent abuse of its platform instead of only reacting when the shit hits the fan.
And if we can’t keep nuclear waste out of our drinking water, or build a safe car with a nuclear reactor in its trunk, maybe we should just shut the plants down or scuttle the plans until we can. It worked for Ford.
Maybe, just maybe, the reason journalists like Hao and I, and politicians around the world, can’t offer solutions to Facebook’s problems is that there aren’t any.
Maybe hiring the smartest AI researchers on the planet and surrounding them with the world’s biggest PR machine isn’t enough to overcome the problem of poisoning humans for fun and profit on a giant, unregulated social network.
There are some problems you can’t just throw money and press releases at.
Hats off to Karen Hao for her excellent reporting and to the staff at Technology Review for speaking the truth in the face of power.
Published March 12, 2021 – 19:37 UTC