This past week saw a turbulent beginning of the end of the unprecedented term of President Trump. A rally in Washington supporting the president and calls to overturn the election turned violent. Protesters stormed both chambers of Congress and one was shot before being driven from Capitol Hill. Members of Congress returned to certify Joe Biden as president-elect. Meantime, Facebook, Twitter and YouTube took aggressive, new steps to clamp down hard on President Trump’s social media accounts. Among other objections, Twitter said President Trump violated policies in his video urging protesters to be peaceful and go home, because he reiterated claims about election fraud. Some welcome Big Tech’s crackdown; others say it’s a radical violation of free speech. It highlights a long-simmering battle over the control of information online. That’s the focus of our special investigation.
Sharyl: Can you give sort of the short version of how you discovered, or came to believe, something wrong was going on?
Zachary Vorhies: Yeah. I was working at YouTube in 2016 and everything was really great. But then something happened. And what happened was Donald Trump won the election.
2016 news clip, news anchor: We can now project the winner of the presidential race, Donald Trump wins the presidency.
Vorhies: And after he won the election, the company just took a hard left and decided that they were going to abandon their liberal principles and start going towards an authoritarian sort of management of their products and services.
Zachary Vorhies was an insider for more than eight years, a senior software engineer at Google and Google’s YouTube.
Sharyl: Can you describe how that manifested itself, this change in direction you describe?
Vorhies: It happened the first week after Donald Trump won the election. Google had an all hands meeting, which they usually do every week, called TGIF. The CFO broke down into tears recounting how she was communicating with the New York office about how they were going to lose this election. The founder Sergey Brin said that he was personally offended at the election of Donald Trump. And Sundar Pichai, the CEO, said that one of the most successful things that they had done in the election was applying machine learning in order to hide fake news.
Donald Trump’s candidacy didn’t only ignite a new trend of heavy handed manipulation and censorship at Google. Ten days after Trump was elected, Facebook’s Mark Zuckerberg announced efforts unheard of before. Facebook would begin judging and rating news organizations in terms of trustworthiness and attaching warning labels to content. It also changed how “trending topics” work, no longer reflecting the number of people discussing something.
The liberal propaganda group Media Matters, founded by Hillary Clinton supporter David Brock, took credit for convincing Facebook to take the drastic new steps.
Within days of the inauguration in January 2017, the whole strategy was outlined in a confidential memo to donors by Media Matters and some of its affiliates: American Bridge, Citizens for Responsibility and Ethics in Washington (CREW), and ShareBlue.
The memo stated that Media Matters was a “partner” of Facebook and other Big Tech players to crack down on online information Media Matters didn’t like. “Facebook needed our help in fully understanding the problem and identifying concrete solutions ... We’ve been engaging with Facebook leadership behind the scenes to share our expertise,” with the goal of getting Facebook to “adjust its model.” Media Matters also said it lobbied Google to “cut off access to revenue” of “40 of the worst fake news sites” — as identified by Media Matters, of course.
This leaked internal video shows the CEO of Google’s YouTube, Susan Wojcicki, discussing their new approach.
Susan Wojcicki (2017 video): We’re pushing down the fake news and demoting it, and we are increasing authoritative news and promoting it. Content that isn’t, that we don’t think that is authoritative news, it’s just kind of encouraging people to look at is not true. We work with Google News on that to define what reputable sources are.
Sharyl: And when you say they bragged about effectively cracking down on fake news, that sounds like a good thing?
Vorhies: Yeah, you would think. But when I looked at the design documents, I started to notice something very interesting, which was a lot of the fake news that they were using as examples of things that they should censor were things involving Hillary Clinton. And I was sort of apolitical, but I started to think to myself, is this really fake news? Why is Google defining this as fake news in order to justify censorship of it? So once I realized that there was this fake news regimen that they were using and it seemed like it was political, I started looking for what that censorship execution could be. And I found it and the project was called Machine Learning Fairness.
Sharyl: What does Machine Learning Fairness describe?
Vorhies: Machine learning is a type of A.I. You’ve got A.I. that plays chess and checkers...
Sharyl: Artificial intelligence?
Vorhies leaked a confidential Google document describing what he calls Artificial Intelligence censorship designed to rerank the entire internet by making “machine-learning intentionally human-centered” to “intervene for fairness.”
Sharyl: Do you think there's evidence that the instructions that Google basically gave its program or machine on how to flag fake news was skewed unfairly, and in favor of liberal interests over conservative interests?
Vorhies: I mean, I wouldn’t even call them liberal because they’re kind of authoritarian, totalitarian right now. And yeah, the evidence of that is whenever you go to news.google.com or you type in a Google search, it’s always being directed towards anti-Trump sentiment. And the reason why is because they’re training these classifiers with people that are highly biased.
At the same time, Vorhies says Google was working on social reconstruction to correct what it calls “algorithmic unfairness.”
Sharyl: What is algorithmic unfairness?
Vorhies: Algorithmic unfairness is any sort of algorithm that reinforces existing stereotypes. So a really good question that was answered at Google is: Could objective reality be algorithmically unfair? And Google's answer to that was actually yes. And the example that they give was let's say you're doing a search for CEOs. And let's say Google returns a bunch of images and most of those images are images of men. Now, even if that reflects objective reality, this can still be considered algorithmically unfair and justify product intervention in order to fix.
Sharyl: And that was happening at Google?
Vorhies: That was happening at Google. And you could tell this because you can go into Google search and you can say, “men can", and then let Google auto complete. And what it was doing is it was saying “men can have babies,” “men can have periods,” “men can get pregnant.” And then you try to do the same thing but for women, and Google returned results like “women can get drafted,” “women can do anything.” So it's this inversion of the stereotypes that they were trying to enforce. And it wasn't just them being equal. They were actually trying to reverse the gender roles. And this is what they think is an algorithmically fair engine.
Meantime, Vorhies says people within Google began organizing anti-Trump activism through an email list, “Resist at Google dot com,” and suggested chants like: “What do we want? JUSTICE! When do we want it? NOW!”
In June 2019, the next presidential campaign was gearing up. After more than eight years at Google, Vorhies says he decided to resign and blow the whistle.
Vorhies: I realized that I couldn't remain silent anymore and that I had to go and seek out and disclose this to the public because it appeared that Google was attempting a coup on the president.
Sharyl (on camera): After a break, our investigation continues into how the Big Tech shaping of information accelerated and impacted the 2020 election.