Source: TW
Thread: THE TWITTER FILES
Background
What you’re about to read is the first installment in a series, based upon thousands of internal documents obtained by sources at Twitter. The “Twitter Files” tell an incredible story from inside one of the world’s largest and most influential social media platforms. It is a Frankensteinian tale of a human-built mechanism grown out of the control of its designer.
Twitter in its conception was a brilliant tool for enabling instant mass communication, making a true real-time global conversation possible for the first time. In its early years, Twitter more than lived up to its mission statement, giving people “the power to create and share ideas and information instantly, without barriers.” As time progressed, however, the company was slowly forced to add those barriers. Some of the first tools for controlling speech were designed to combat spam and financial fraud. Slowly, over time, Twitter staff and executives began to find more and more uses for these tools. Outsiders began petitioning the company to manipulate speech as well: first a little, then more often, then constantly.
By 2020, requests from connected actors to delete tweets were routine. One executive would write to another: “More to review from the Biden team.” The reply would come back: “Handled.” Celebrities and unknowns alike could be removed or reviewed at the behest of a political party. Both parties had access to these tools. For instance, in 2020, requests from both the Trump White House and the Biden campaign were received and honored. However, this system wasn’t balanced. It was based on contacts. Because Twitter was and is overwhelmingly staffed by people of one political orientation, there were more channels, more ways to complain, open to the left (well, Democrats) than the right (https://www.opensecrets.org/orgs/twitter/summary?id=D000067113). The resulting slant in content moderation decisions is visible in the documents you’re about to read. It is also the assessment of multiple current and former high-level executives.
Okay, there was more throat-clearing about the process, but screw it, let’s jump forward.
Biden Laptop Story
The Twitter Files, Part One: How and Why Twitter Blocked the Hunter Biden Laptop Story.
On October 14, 2020, the New York Post published “BIDEN SECRET EMAILS,” an exposé based on the contents of Hunter Biden’s abandoned laptop: “Smoking-gun email reveals how Hunter Biden introduced Ukrainian businessman to VP dad.” Twitter took extraordinary steps to suppress the story, removing links and posting warnings that it may be “unsafe.” They even blocked its transmission via direct message, a tool hitherto reserved for extreme cases, e.g. child pornography. White House spokeswoman Kayleigh McEnany was locked out of her account for tweeting about the story, prompting a furious letter from Trump campaign staffer Mike Hahn, who seethed: “At least pretend to care for the next 20 days.” This led public policy executive Caroline Strom to send out a polite WTF query. Several employees noted that there was tension between the comms/policy teams, who had little control over moderation, and the safety/trust teams, who did. Strom’s note returned the answer that the laptop story had been removed for violation of the company’s “hacked materials” policy.
Although several sources recalled hearing about a “general” warning from federal law enforcement that summer about possible foreign hacks, there’s no evidence - that I’ve seen - of any government involvement in the laptop story. In fact, that might have been the problem… The decision was made at the highest levels of the company, but without the knowledge of CEO Jack Dorsey, with former head of Legal, Policy, and Trust Vijaya Gadde playing a key role. “They just freelanced it,” is how one former employee characterized the decision. “Hacking was the excuse, but within a few hours, pretty much everyone realized that wasn’t going to hold. But no one had the guts to reverse it.” You can see the confusion in the following lengthy exchange, which ends up including Gadde and former Trust and Safety chief Yoel Roth. Comms official Trenton Kennedy writes, “I’m struggling to understand the policy basis for marking this as unsafe”. By this point “everyone knew this was fucked,” said one former employee, but the response was essentially to err on the side of… continuing to err. Former VP of Global Comms Brandon Borrman asks, “Can we truthfully claim that this is part of the policy?” To which former Deputy General Counsel Jim Baker again seems to advise staying the non-course, because “caution is warranted”.
A fundamental problem with tech companies and content moderation: many people in charge of speech know/care little about speech, and have to be told the basics by outsiders. To wit: In one humorous exchange on day 1, Democratic congressman Ro Khanna reaches out to Gadde to gently suggest she hop on the phone to talk about the “backlash re speech.” Khanna was the only Democratic official I could find in the files who expressed concern. Gadde replies quickly, immediately diving into the weeds of Twitter policy, unaware Khanna is more worried about the Bill of Rights. Khanna tries to reroute the conversation to the First Amendment, mention of which is generally hard to find in the files.
Within a day, head of Public Policy Lauren Culbertson receives a ghastly letter/report from Carl Szabo of the research firm NetChoice, which had already polled 12 members of Congress – nine Republicans and three Democrats, from “the House Judiciary Committee to Rep. Judy Chu’s office.” NetChoice lets Twitter know a “blood bath” awaits in upcoming Hill hearings, with members saying it’s a “tipping point,” complaining tech has “grown so big that they can’t even regulate themselves, so government may need to intervene.” Szabo reports to Twitter that some Hill figures are characterizing the laptop story as “tech’s Access Hollywood moment”:
Twitter files continued: “THE FIRST AMENDMENT ISN’T ABSOLUTE”
Szabo’s letter contains chilling passages relaying Democratic lawmakers’ attitudes. They want “more” moderation, and as for the Bill of Rights, it’s “not absolute”. An amazing subplot of the Twitter/Hunter Biden laptop affair was how much was done without the knowledge of CEO Jack Dorsey, and how long it took for the situation to get “unfucked” (as one ex-employee put it) even after Dorsey jumped in.
While reviewing Gadde’s emails, I saw a familiar name - my own. Dorsey sent her a copy of my Substack article blasting the incident. There are multiple instances in the files of Dorsey intervening to question suspensions and other moderation actions, for accounts across the political spectrum. The problem with the “hacked materials” ruling, several sources said, was that it normally required an official/law enforcement finding of a hack. But such a finding never appears throughout what one executive describes as a “whirlwind” 24-hour, company-wide mess.
It’s been a whirlwind 96 hours for me, too. There is much more to come, including answers to questions about issues like shadow-banning, boosting, follower counts, the fate of various individual accounts, and more. These issues are not limited to the political right.
Source: TW
THREAD: THE TWITTER FILES PART TWO.
TWITTER’S SECRET BLACKLISTS.
A new #TwitterFiles investigation reveals that teams of Twitter employees build blacklists, prevent disfavored tweets from trending, and actively limit the visibility of entire accounts or even trending topics—all in secret, without informing users.
Twitter once had a mission “to give everyone the power to create and share ideas and information instantly, without barriers.” Along the way, barriers nevertheless were erected.
- Take, for example, Stanford’s Dr. Jay Bhattacharya (@DrJBhattacharya), who argued that Covid lockdowns would harm children. Twitter secretly placed him on a “Trends Blacklist,” which prevented his tweets from trending.
- Or consider the popular right-wing talk show host, Dan Bongino (@dbongino), who at one point was slapped with a “Search Blacklist.”
- Twitter set the account of conservative activist Charlie Kirk (@charliekirk11) to “Do Not Amplify.”
Twitter denied doing such things. In 2018, Twitter’s Vijaya Gadde (then Head of Legal, Policy, and Trust) and Kayvon Beykpour (Head of Product) said: “We do not shadow ban.” They added: “And we certainly don’t shadow ban based on political viewpoints or ideology.” What many people call “shadow banning,” Twitter executives and employees call “Visibility Filtering,” or “VF.” Multiple high-level sources confirmed its meaning. “Think about visibility filtering as being a way for us to suppress what people see to different levels. It’s a very powerful tool,” one senior Twitter employee told us. “VF” refers to Twitter’s control over user visibility. It used VF to block searches of individual users; to limit the scope of a particular tweet’s discoverability; to block select users’ posts from ever appearing on the “trending” page; and to exclude them from hashtag searches. All without users’ knowledge. “We control visibility quite a bit. And we control the amplification of your content quite a bit. And normal people do not know how much we do,” one Twitter engineer told us. Two additional Twitter employees confirmed this.
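The description above suggests visibility filtering amounts to a set of per-account flags checked at each surface (search, trends, recommendations), invisible to the account holder. A minimal, purely illustrative sketch of that idea – every class, field, and function name here is invented, and nothing below is Twitter’s actual code:

```python
from dataclasses import dataclass, field

@dataclass
class VisibilityFlags:
    # Hypothetical flags mirroring the labels named in the reporting.
    search_blacklist: bool = False   # account hidden from search results
    trends_blacklist: bool = False   # tweets barred from the trending page
    do_not_amplify: bool = False     # excluded from algorithmic amplification

@dataclass
class Account:
    handle: str
    flags: VisibilityFlags = field(default_factory=VisibilityFlags)

def visible_in(surface: str, account: Account) -> bool:
    """Return whether the account's content may appear on a given surface."""
    f = account.flags
    if surface == "search" and f.search_blacklist:
        return False
    if surface == "trends" and (f.trends_blacklist or f.do_not_amplify):
        return False
    if surface == "recommendations" and f.do_not_amplify:
        return False
    return True  # the account holder is never shown any of these flags

acct = Account("example_user", VisibilityFlags(trends_blacklist=True))
print(visible_in("search", acct))   # True
print(visible_in("trends", acct))   # False
```

The point of the sketch is the asymmetry the sources describe: the checks run on every surface, but no code path ever notifies the user that a flag is set.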
The group that decided whether to limit the reach of certain users was the Strategic Response Team - Global Escalation Team, or SRT-GET. It often handled up to 200 “cases” a day. But there existed a level beyond official ticketing, beyond the rank-and-file moderators following the company’s policy on paper. That is the “Site Integrity Policy, Policy Escalation Support,” known as “SIP-PES.” This secret group included Head of Legal, Policy, and Trust (Vijaya Gadde), the Global Head of Trust & Safety (Yoel Roth), subsequent CEOs Jack Dorsey and Parag Agrawal, and others. This is where the biggest, most politically sensitive decisions got made. “Think high follower account, controversial,” another Twitter employee told us. For these “there would be no ticket or anything.”
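The two-tier structure described above can be pictured as a routing rule: routine cases get tickets to SRT-GET, while “high follower account, controversial” cases escalate to SIP-PES with no ticket at all. A hypothetical sketch – the follower threshold and function name are assumptions for illustration, not internal values:

```python
def route_case(followers: int, politically_sensitive: bool) -> str:
    """Route a moderation case per the two tiers described in the files."""
    HIGH_PROFILE = 100_000  # invented cutoff, purely illustrative
    if followers >= HIGH_PROFILE and politically_sensitive:
        return "SIP-PES"   # senior committee: "no ticket or anything"
    return "SRT-GET"       # rank-and-file team, ~200 ticketed cases/day

print(route_case(5_000, False))       # SRT-GET
print(route_case(1_400_000, True))    # SIP-PES
```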
One of the accounts that rose to this level of scrutiny was @libsoftiktok—an account that was on the “Trends Blacklist” and was designated as “Do Not Take Action on User Without Consulting With SIP-PES.” The account—which Chaya Raichik began in November 2020 and now boasts over 1.4 million followers—was subjected to six suspensions in 2022 alone, Raichik says. Each time, Raichik was blocked from posting for as long as a week. Twitter repeatedly informed Raichik that she had been suspended for violating Twitter’s policy against “hateful conduct.” But in an internal SIP-PES memo from October 2022, after her seventh suspension, the committee acknowledged that “LTT has not directly engaged in behavior violative of the Hateful Conduct policy.” The committee justified her suspensions internally by claiming her posts encouraged online harassment of “hospitals and medical providers” by insinuating “that gender-affirming healthcare is equivalent to child abuse or grooming.”
Compare this to what happened when Raichik herself was doxxed on November 21, 2022. A photo of her home with her address was posted in a tweet that has garnered more than 10,000 likes. When Raichik told Twitter that her address had been disseminated she says Twitter Support responded with this message: “We reviewed the reported content, and didn’t find it to be in violation of the Twitter rules.” No action was taken. The doxxing tweet is still up.
In internal Slack messages, Twitter employees spoke of using technicalities to restrict the visibility of tweets and subjects. Here’s Yoel Roth, Twitter’s then Global Head of Trust & Safety, in a direct message to a colleague in early 2021. Six days later, in a direct message with an employee on the Health, Misinformation, Privacy, and Identity research team, Roth requested more research to support expanding “non-removal policy interventions like disabling engagements and deamplification/visibility filtering.” Roth wrote: “The hypothesis underlying much of what we’ve implemented is that if exposure to, e.g., misinformation directly causes harm, we should use remediations that reduce exposure, and limiting the spread/virality of content is a good way to do that.” He added: “We got Jack on board with implementing this for civic integrity in the near term, but we’re going to need to make a more robust case to get this into our repertoire of policy remediations – especially for other policy domains.”
There is more to come on this story, which was reported by @AbigailShrier @ShellenbergerMD @NellieBowles @IsaacGrafstein and the team The Free Press @TheFP. Keep up with this unfolding story here and at our brand new website: thefp.com. The authors have broad and expanding access to Twitter’s files. The only condition we agreed to was that the material would first be published on Twitter.
We’re just getting started on our reporting. Documents cannot tell the whole story here. A big thank you to everyone who has spoken to us so far. If you are a current or former Twitter employee, we’d love to hear from you. Please write to: tips@thefp.com . Watch @mtaibbi for the next installment.
Trump removal
THREAD: The Twitter Files THE REMOVAL OF DONALD TRUMP
October 2020-January 6th
The world knows much of the story of what happened between the riots at the Capitol on January 6th and the removal of President Donald Trump from Twitter on January 8th… We’ll show you what hasn’t been revealed: the erosion of standards within the company in the months before J6, decisions by high-ranking executives to violate their own policies, and more, against the backdrop of ongoing, documented interaction with federal agencies.
This first installment covers the period from before the election through January 6th. Tomorrow, @Shellenbergermd will detail the chaos inside Twitter on January 7th. On Sunday, @BariWeiss will reveal the secret internal communications from the key date of January 8th. Whatever your opinion on the decision to remove Trump that day, the internal communications at Twitter between January 6th and January 8th have clear historical import. Even Twitter’s employees understood in the moment that it was a landmark event in the annals of speech. As soon as they finished banning Trump, Twitter execs started processing new power. They prepared to ban future presidents and White Houses – perhaps even Joe Biden’s. The “new administration,” says one exec, “will not be suspended by Twitter unless absolutely necessary.”
Twitter executives removed Trump in part over what one executive called the “context surrounding”: actions by Trump and supporters “over the course of the election and frankly last 4+ years.” In the end, they looked at a broad picture. But that approach can cut both ways. The bulk of the internal debate leading to Trump’s ban took place in those three January days. However, the intellectual framework was laid in the months preceding the Capitol riots.
Before J6, Twitter was a unique mix of automated, rules-based enforcement and more subjective moderation by senior executives. As @BariWeiss reported, the firm had a vast array of tools for manipulating visibility, nearly all of which were thrown at Trump (and others) pre-J6. As the election approached, senior executives – perhaps under pressure from federal agencies, with whom they met more as time progressed – increasingly struggled with rules, and began to speak of “vios” as pretexts to do what they’d likely have done anyway.
After J6, internal Slacks show Twitter executives getting a kick out of intensified relationships with federal agencies. Here’s Trust and Safety head Yoel Roth, lamenting that his calendar descriptions weren’t “generic enough” to conceal his “very interesting” meeting partners. These initial reports are based on searches for docs linked to prominent executives, whose names are already public. They include Roth, former trust and policy chief Vijaya Gadde, and recently plank-walked Deputy General Counsel (and former top FBI lawyer) Jim Baker.
One particular Slack channel offers a unique window into the evolving thinking of top officials in late 2020 and early 2021. On October 8th, 2020, executives opened a channel called “us2020_xfn_enforcement.” Through J6, this would be home for discussions about election-related removals, especially ones that involved “high-profile” accounts (often called “VITs,” or “Very Important Tweeters”). There was at least some tension between Safety Operations – a larger department whose staffers used a more rules-based process for addressing issues like porn, scams, and threats – and a smaller, more powerful cadre of senior policy execs like Roth and Gadde. The latter group was a high-speed Supreme Court of moderation, issuing content rulings on the fly, often in minutes, based on guesses, gut calls, even Google searches – even in cases involving the President.
During this time, executives were also clearly liaising with federal enforcement and intelligence agencies about moderation of election-related content. While we’re still at the start of reviewing the #TwitterFiles, we’re finding out more about these interactions every day. Policy Director Nick Pickles is asked if they should say Twitter detects “misinfo” through “ML, human review, and partnerships with outside experts.” The employee asks, “I know that’s been a slippery process… not sure if you want our public explanation to hang on that.” Pickles quickly asks if they could “just say ‘partnerships.’” After a pause, he says, “e.g. not sure we’d describe the FBI/DHS as experts.”
This post about the Hunter Biden laptop situation shows that Roth not only met weekly with the FBI and DHS, but with the Office of the Director of National Intelligence (DNI). Roth’s report to FBI/DHS/DNI is almost farcical in its self-flagellating tone:
“We blocked the NYP story, then unblocked it (but said the opposite)… comms is angry, reporters think we’re idiots… in short, FML” (fuck my life).
Some of Roth’s later Slacks indicate his weekly confabs with federal law enforcement involved separate meetings. Here, he ghosts the FBI and DHS, respectively, to go first to an “Aspen Institute thing,” then take a call with Apple.
Here, the FBI sends reports about a pair of tweets, the second of which involves former Tippecanoe County, Indiana Councilor and Republican @JohnBasham claiming “Between 2% and 25% of Ballots by Mail are Being Rejected for Errors.” The FBI-flagged tweet then got circulated in the enforcement Slack. Twitter cited Politifact to say the first story was “proven to be false,” then noted the second was already deemed “no vio on numerous occasions.”
The group then decides to apply a “Learn how voting is safe and secure” label because one commenter says, “it’s totally normal to have a 2% error rate.” Roth then gives the final go-ahead to the process initiated by the FBI. Examining the entire election enforcement Slack, we didn’t see one reference to moderation requests from the Trump campaign, the Trump White House, or Republicans generally. We looked. They may exist: we were told they do. However, they were absent here.
In one case, former Arkansas governor Mike Huckabee joke-tweets about mailing in ballots for his “deceased parents and grandparents.” This inspires a long Slack that reads like an @TitaniaMcGrath parody. “I agree it’s a joke,” concedes a Twitter employee, “but he’s also literally admitting in a tweet a crime.” The group declares Huckabee’s tweet an “edge case,” and though one notes, “we don’t make exceptions for jokes or satire,” they ultimately decide to leave him be, because “we’ve poked enough bears.” “Could still mislead people… could still mislead people,” the humor-averse group declares before moving on from Huckabee. Roth suggests moderation even in this absurd case could depend on whether or not the joke results in “confusion.” This seemingly silly case actually foreshadows serious later issues.
In the docs, execs often expand criteria to subjective issues like intent (yes, a video is authentic, but why was it shown?), orientation (was a banned tweet shown to condemn, or support?), or reception (did a joke cause “confusion”?). This reflex will become key in J6.
In another example, Twitter employees prepare to slap a “mail-in voting is safe” warning label on a Trump tweet about a postal screwup in Ohio, before realizing “the events took place,” which meant the tweet was “factually accurate.” Trump was being “visibility filtered” as late as a week before the election. Here, senior execs didn’t appear to have a particular violation, but still worked fast to make sure a fairly anodyne Trump tweet couldn’t be “replied to, shared, or liked.” “VERY WELL DONE ON SPEED”: the group is pleased the Trump tweet is dealt with quickly.
A seemingly innocuous follow-up involved a tweet from actor @realJamesWoods, whose ubiquitous presence in argued-over Twitter data sets is already a #TwitterFiles in-joke. After Woods angrily quote-tweeted about Trump’s warning label, Twitter staff – in a preview of what ended up happening after J6 – despaired of a reason for action, but resolved to “hit him hard on future vio.” Here a label is applied to Georgia Republican congresswoman Jody Hice for saying, “Say NO to big tech censorship!” and, “Mailed ballots are more prone to fraud than in-person balloting… It’s just common sense.” Twitter teams went easy on Hice, only applying a “soft intervention,” with Roth worrying about a “wah wah censorship” optics backlash.
Meanwhile, there are multiple instances involving pro-Biden tweets warning Trump “may try to steal the election” that got surfaced, only to be approved by senior executives. This one, they decide, just “expresses concern that mailed ballots might not make it on time.” “THAT’S UNDERSTANDABLE”: Even the hashtag #StealOurVotes – referencing a theory that a combo of Amy Coney Barrett and Trump will steal the election – is approved by Twitter brass, because it’s “understandable” and a “reference to… a US Supreme Court decision.” In this exchange, again unintentionally humorous, former Attorney General Eric Holder claimed the U.S. Postal Service was “deliberately crippled,” ostensibly by the Trump administration. He was initially hit with a generic warning label, but it was quickly taken off by Roth. Later in November 2020, Roth asked if staff had a “debunk moment” on the “SCYTL/Smartmatic vote-counting” stories, which his DHS contacts told him were a combination of “about 47” conspiracy theories.
On December 10th, as Trump was in the middle of firing off 25 tweets saying things like, “A coup is taking place in front of our eyes,” Twitter executives announced a new “L3 deamplification” tool. This step meant a warning label could now also come with deamplification. Some executives wanted to use the new deamplification tool to silently limit Trump’s reach right away, beginning with the following tweet. However, in the end, the team had to use older, less aggressive labeling tools, at least for that day, until the “L3 entities” went live the following morning.
The significance is that it shows that Twitter, in 2020 at least, was deploying a vast range of visible and invisible tools to rein in Trump’s engagement, long before J6. The ban would come only after other avenues were exhausted. In Twitter docs, execs frequently refer to “bots,” e.g. “let’s put a bot on that.” A bot is just any automated heuristic moderation rule. It can be anything: every time a person in Brazil uses “green” and “blob” in the same sentence, action might be taken. In this instance, it appears moderators added a bot for a Trump claim made on Breitbart. The bot ends up becoming an automated tool invisibly watching both Trump and, apparently, Breitbart (“will add media ID to bot”). By J6, Trump was covered in bots.
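A “bot” in the docs’ sense, then, is just a stored trigger condition evaluated against every new post. A minimal sketch using the thread’s own hypothetical example – flag any post containing both “green” and “blob.” The rule structure and function names are invented for illustration:

```python
import re

def make_keyword_bot(*required_terms: str):
    """Build a heuristic rule that fires when every term appears in the text."""
    patterns = [re.compile(rf"\b{re.escape(t)}\b", re.IGNORECASE)
                for t in required_terms]
    def bot(text: str) -> bool:
        # Fires only if all required terms are present as whole words.
        return all(p.search(text) for p in patterns)
    return bot

green_blob_bot = make_keyword_bot("green", "blob")
print(green_blob_bot("The green blob is spreading"))  # True
print(green_blob_bot("Just a green field"))           # False
```

Extending such a rule to “watch” a specific account or media ID, as with the Breitbart example, would just mean adding the account or media identifier to the trigger condition.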
There is no way to follow the frenzied exchanges among Twitter personnel between January 6th and 8th without knowing the basics of the company’s vast lexicon of acronyms and Orwellian unwords.
- To “bounce” an account is to put it in timeout, usually for a 12-hour review/cool-off.
- “Interstitial,” one of many nouns used as a verb in Twitterspeak (“denylist” is another), means placing a physical label atop a tweet, so it can’t be seen.
- PII has multiple meanings, one being “Public Interest Interstitial,” i.e. a covering label applied for “public interest” reasons. The post below also references “proactive V,” i.e. proactive visibility filtering.
This is all necessary background to J6. Before the riots, the company was engaged in an inherently insane/impossible project: trying to create an ever-expanding, ostensibly rational set of rules to regulate every conceivable speech situation that might arise between humans. The project was preposterous, yet its leaders were unable to see it, having become infected with groupthink, coming to believe – sincerely – that it was Twitter’s responsibility to control, as much as possible, what people could talk about, how often, and with whom. The firm’s executives on day 1 of the January 6th crisis at least tried to pay lip service to its dizzying array of rules. By day 2, they began wavering. By day 3, a million rules were reduced to one: what we say, goes.
January 7
Source: TW
As the pressure builds, Twitter executives build the case for a permanent ban. On Jan 7, senior Twitter execs:
- create justifications to ban Trump
- seek a change of policy for Trump alone, distinct from other political leaders
- express no concern for the free speech or democracy implications of a ban
This #TwitterFiles is reported with @lwoodhouse.
For years, Twitter had resisted calls to ban Trump. “Blocking a world leader from Twitter,” it wrote in 2018, “would hide important info… [and] hamper necessary discussion around their words and actions.”
But after the events of Jan 6, the internal and external pressure on Twitter CEO @jack grows. Former First Lady @MichelleObama, tech journalist @karaswisher, the @ADL, high-tech VC @ChrisSacca, and many others publicly call on Twitter to permanently ban Trump. Dorsey was on vacation in French Polynesia the week of January 4-8, 2021. He phoned into meetings but also delegated much of the handling of the situation to senior execs @yoyoel, Twitter’s Global Head of Trust and Safety, and @vijaya, Head of Legal, Policy, & Trust.
As context, it’s important to understand that Twitter’s staff & senior execs were overwhelmingly progressive. In 2018, 2020, and 2022, 96%, 98%, & 99% of Twitter staff’s political donations went to Democrats. In 2017, Roth tweeted that there were “ACTUAL NAZIS IN THE WHITE HOUSE.” In April 2022, Roth told a colleague that his goal “is to drive change in the world,” which is why he decided not to become an academic.
On January 7, @jack emails employees saying Twitter needs to remain consistent in its policies, including the right of users to return to Twitter after a temporary suspension. Afterward, Roth reassures an employee that “people who care about this… aren’t happy with where we are”. Around 11:30 am PT, Roth DMs his colleagues with news that he is excited to share. “GUESS WHAT,” he writes. “Jack just approved repeat offender for civic integrity.” The new approach would create a system where five violations (“strikes”) would result in permanent suspension. “Progress!” exclaims a member of Roth’s Trust and Safety team. The exchange between Roth and his colleagues makes clear that they had been pushing @jack for greater restrictions on the speech Twitter allows around elections. The colleague wants to know if the decision means Trump can finally be banned. The person asks, “does the incitement to violence aspect change that calculus?” Roth says it doesn’t. “Trump continues to just have his one strike” (remaining). Roth’s colleague’s query about “incitement to violence” heavily foreshadows what will happen the following day.
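The “repeat offender for civic integrity” policy as summarized above is a simple counter: five violations trigger permanent suspension. A hedged sketch of that mechanism – the class and field names are invented, and this is a reading of the thread’s description, not internal code:

```python
STRIKE_LIMIT = 5  # per the thread: five "strikes" = permanent suspension

class CivicIntegrityRecord:
    """Illustrative per-account strike counter for civic-integrity violations."""
    def __init__(self):
        self.strikes = 0
        self.permanently_suspended = False

    def add_strike(self) -> None:
        self.strikes += 1
        if self.strikes >= STRIKE_LIMIT:
            self.permanently_suspended = True

record = CivicIntegrityRecord()
for _ in range(4):
    record.add_strike()
print(record.permanently_suspended)  # False: one strike remaining
record.add_strike()
print(record.permanently_suspended)  # True: fifth strike triggers the ban
```

This makes Roth’s remark legible: under the new system Trump, with one strike remaining, could not yet be permanently suspended on strikes alone.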
On January 8, Twitter announces a permanent ban on Trump due to the “risk of further incitement of violence.” On J8, Twitter says its ban is based on “specifically how [Trump’s tweets] are being received & interpreted.” But in 2019, Twitter said it did “not attempt to determine all potential interpretations of the content or its intent.” The only serious concern we found expressed within Twitter over the implications for free speech and democracy of banning Trump came from a junior person in the organization. It was tucked away in a lower-level Slack channel known as “site-integrity-auto.”
“This might be an unpopular opinion but one off ad hoc decisions like this that don’t appear rooted in policy are imho a slippery slope… This now appears to be a fiat by an online platform CEO with a global presence that can gatekeep speech for the entire world…”
Twitter employees use the term “one off” frequently in their Slack discussions. Its frequent use reveals significant employee discretion over when and whether to apply warning labels on tweets and “strikes” on users. Here are typical examples. Recall from #TwitterFiles2 by @bariweiss that, according to Twitter staff, “We control visibility quite a bit. And we control the amplification of your content quite a bit. And normal people do not know how much we do.”
Twitter employees recognize the difference between their own politics and Twitter’s Terms of Service (TOS), but they also engage in complex interpretations of content in order to stamp out prohibited tweets, as a series of exchanges over the “#stopthesteal” hashtag reveals. Roth immediately DMs a colleague to ask that they add “stopthesteal” and [QAnon conspiracy term] “kraken” to a blacklist of terms to be deamplified. Roth’s colleague objects that blacklisting “stopthesteal” risks “deamplifying counterspeech” that validates the election. Indeed, notes Roth’s colleague, “a quick search of top stop the steal tweets and they’re counterspeech”. But they quickly come up with a solution: “deamplify accounts with stopthesteal in the name/profile,” since “those are not affiliated with counterspeech”. But it turns out that even blacklisting “kraken” is less straightforward than they thought. That’s because kraken, in addition to being a QAnon conspiracy theory based on the mythical Norwegian sea monster, is also the name of a cryptocurrency exchange, and was thus “allowlisted”.
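The compromise described in that exchange has two moving parts: an allowlist that overrides the blacklist (so the crypto exchange survives the “kraken” rule), and matching against the account name/profile rather than tweet text (so counterspeech survives the “stopthesteal” rule). A hypothetical sketch – the handle, list contents, and function name are all assumptions for illustration:

```python
TERM_BLACKLIST = {"stopthesteal", "kraken"}
ALLOWLIST = {"krakenfx"}  # hypothetical allowlisted exchange handle

def deamplify(handle: str, profile: str) -> bool:
    """Deamplify based on name/profile matches, with allowlist precedence."""
    if handle.lower() in ALLOWLIST:
        return False  # allowlist wins over the term blacklist
    haystack = (handle + " " + profile).lower()
    return any(term in haystack for term in TERM_BLACKLIST)

print(deamplify("krakenfx", "Crypto exchange"))                  # False
print(deamplify("stopthesteal_hq", "Audit the vote"))            # True
print(deamplify("reporter1", "Covering stop the steal claims"))  # False
```

Note the last case: because the match is against the profile string for the fused term “stopthesteal,” an account merely discussing “stop the steal” is untouched, which is exactly the counterspeech carve-out the employees were after.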
Employees struggle with whether to punish users who share screenshots of Trump’s deleted J6 tweets - “we should bounce these tweets with a strike given the screen shot violates the policy”, “they are criticising Trump, so I am bit hesitant with applying strike to this user”. What if a user dislikes Trump and objects to Twitter’s censorship? The tweet still gets deleted. But since the intention is not to deny the election result, no punishing strike is applied. “if there are instances where the intent is unclear please feel free to raise”.
Around noon, a confused senior executive in advertising sales sends a DM to Roth. Sales exec: “jack says: ‘we will permanently suspend [Trump] if our policies are violated after a 12 hour account lock’… what policies is jack talking about?” Roth: “ANY policy violation”
What happens next is essential to understanding how Twitter justified banning Trump. Sales exec: “are we dropping the public interest [policy] now…” Roth, six hours later: “In this specific case, we’re changing our public interest approach for his account…” The ad exec is referring to Twitter’s policy of “Public-interest exceptions,” which allows the content of elected officials, even if it violates Twitter rules, “if it directly contributes to understanding or discussion of a matter of public concern”
Public-interest exceptions to enforcement of Twitter rules Learn why we make certain exceptions, under what circumstances, and how we balance risk of harm vs. the public interest. https://help.twitter.com/en/rules-and-policies/public-interest
Roth pushes for a permanent suspension of Rep. Matt Gaetz even though it “doesn’t quite fit anywhere (duh)”. It’s a kind of test case for the rationale for banning Trump. “I’m trying to talk [Twitter’s] safety [team] into… removal as a conspiracy that incites violence.”
Around 2:30, comms execs DM Roth to say they don’t want to make a big deal of the QAnon ban to the media because they fear “if we push this it looks we’re trying to offer up something in place of the thing everyone wants,” meaning a Trump ban. That evening, a Twitter engineer DMs Roth to say, “I feel a lot of debates around exceptions stem from the fact that Trump’s account is not technically different from anybody else’ and yet treated differently due to his personal status, without corresponding Twitter rules..” Roth’s response hints at how Twitter would justify deviating from its longstanding policy. “To put a different spin on it: policy is one part of the system of how Twitter works… we ran into the world changing faster than we were able to either adapt the product or the policy.”
The evening of January 7, the same junior employee who expressed an “unpopular opinion” about “ad hoc decisions… that don’t appear rooted in policy,” speaks up one last time before the end of the day. Earlier that day, the employee wrote, “My concern is specifically surrounding the unarticulated logic of the decision by FB. That space fills with the idea (conspiracy theory?) that all… internet moguls… sit around like kings casually deciding what people can and cannot see.” The employee notes, later in the day, “And Will Oremus noticed the inconsistency too…,” linking to an article for OneZero at Medium called, “Facebook Chucked Its Own Rulebook to Ban Trump.”
Facebook Chucked Its Own Rulebook to Ban Trump The move is a reminder of social platforms’ power over online speech — and the inconsistency with which they wield it https://onezero.medium.com/facebook-chucked-its-own-rulebook-to-ban-trump-ecc036947f5d
“The underlying problem,” writes @WillOremus, is that “the dominant platforms have always been loath to own up to their subjectivity, because it highlights the extraordinary, unfettered power they wield over the global public square… and places the responsibility for that power on their own shoulders… So they hide behind an ever-changing rulebook, alternately pointing to it when it’s convenient and shoving it under the nearest rug when it isn’t.”
“Facebook’s suspension of Trump now puts Twitter in an awkward position. If Trump does indeed return to Twitter, the pressure on Twitter will ramp up to find a pretext on which to ban him as well.” Indeed. And as @bariweiss will show tomorrow, that’s exactly what happened.
Jan 8
On the morning of January 8, President Donald Trump, with one remaining strike before being at risk of permanent suspension from Twitter, tweets twice.
6:46 am: “The 75,000,000 great American Patriots who voted for me, AMERICA FIRST, and MAKE AMERICA GREAT AGAIN, will have a GIANT VOICE long into the future. They will not be disrespected or treated unfairly in any way, shape or form!!!”
7:44 am: “To all of those who have asked, I will not be going to the Inauguration on January 20th.”
For years, Twitter had resisted calls both internal and external to ban Trump on the grounds that blocking a world leader from the platform or removing their controversial tweets would hide important information that people should be able to see and debate. “Our mission is to provide a forum that enables people to be informed and to engage their leaders directly,” the company wrote in 2019. Twitter’s aim was to “protect the public’s right to hear from their leaders and to hold them to account.”
But after January 6, as @mtaibbi and @ShellenbergerMD have documented, pressure grew, both inside and outside of Twitter, to ban Trump. There were dissenters inside Twitter. “Maybe because I am from China,” said one employee on January 7, “I deeply understand how censorship can destroy the public conversation.” But voices like that one appear to have been a distinct minority within the company. Across Slack channels, many Twitter employees were upset that Trump hadn’t been banned earlier.
After January 6, Twitter employees organized to demand their employer ban Trump. “There is a lot of employee advocacy happening,” said one Twitter employee. “We have to do the right thing and ban this account,” said one staffer. It’s “pretty obvious he’s going to try to thread the needle of incitement without violating the rules,” said another.

In the early afternoon of January 8, The Washington Post published an open letter signed by over 300 Twitter employees to CEO Jack Dorsey demanding Trump’s ban. “We must examine Twitter’s complicity in what President-Elect Biden has rightly termed insurrection.”

But the Twitter staff assigned to evaluate tweets quickly concluded that Trump had not violated Twitter’s policies. “I think we’d have a hard time saying this is incitement,” wrote one staffer. “It’s pretty clear he’s saying the ‘American Patriots’ are the ones who voted for him and not the terrorists (we can call them that, right?) from Wednesday.” Another staffer agreed: “Don’t see the incitement angle here.”

“I also am not seeing clear or coded incitement in the DJT tweet,” wrote Anika Navaroli, a Twitter policy official. “I’ll respond in the elections channel and say that our team has assessed and found no vios”—or violations—“for the DJT one.” She does just that: “as an fyi, Safety has assessed the DJT Tweet above and determined that there is no violation of our policies at this time.”

(Later, Navaroli would testify to the House Jan. 6 committee: “For months I had been begging and anticipating and attempting to raise the reality that if nothing—if we made no intervention into what I saw occuring, people were going to die.”)

Next, Twitter’s safety team decides that Trump’s 7:44 am ET tweet is also not in violation. They are unequivocal: “it’s a clear no vio. It’s just to say he’s not attending the inauguration”
To understand Twitter’s decision to ban Trump, we must consider how Twitter deals with other heads of state and political leaders, including in Iran, Nigeria, and Ethiopia.
In June 2018, Iran’s Ayatollah Ali Khamenei tweeted, “#Israel is a malignant cancerous tumor in the West Asian region that has to be removed and eradicated: it is possible and it will happen.” Twitter neither deleted the tweet nor banned the Ayatollah. In October 2020, the former Malaysian Prime Minister said it was “a right” for Muslims to “kill millions of French people.” Twitter deleted his tweet for “glorifying violence,” but he remains on the platform.
The tweet below was taken from the Wayback Machine: Muhammadu Buhari, the President of Nigeria, incited violence against pro-Biafra groups. “Those of us in the fields for 30 months, who went through the war,” he wrote, “will treat them in the language they understand.” Twitter deleted the tweet but didn’t ban Buhari.
In October 2021, Twitter allowed Ethiopian Prime Minister Abiy Ahmed to call on citizens to take up arms against the Tigray region. Twitter allowed the tweet to remain up, and did not ban the prime minister.
In early February 2021, Prime Minister Narendra Modi’s government threatened to arrest Twitter employees in India, and to incarcerate them for up to seven years, after the company restored hundreds of accounts that had been critical of him. Twitter did not ban Modi.
But Twitter executives did ban Trump, even though key staffers said that Trump had not incited violence—not even in a “coded” way. Less than 90 minutes after Twitter employees had determined that Trump’s tweets were not in violation of Twitter policy, Vijaya Gadde—Twitter’s Head of Legal, Policy, and Trust—asked whether it could, in fact, be “coded incitement to further violence.” A few minutes later, Twitter employees on the “scaled enforcement team” suggested that Trump’s tweet may have violated Twitter’s Glorification of Violence policy—if you interpreted the phrase “American Patriots” to refer to the rioters.
Things escalate from there. Members of that team came to “view him as the leader of a terrorist group responsible for violence/deaths comparable to Christchurch shooter or Hitler and on that basis and on the totality of his Tweets, he should be de-platformed.” Two hours later, Twitter executives host a 30-minute all-staff meeting. Jack Dorsey and Vijaya Gadde answer staff questions as to why Trump wasn’t banned yet. But they make some employees angrier. “Multiple tweeps [Twitter employees] have quoted the Banality of Evil suggesting that people implementing our policies are like Nazis following orders,” relays Yoel Roth to a colleague. Dorsey requested simpler language to explain Trump’s suspension. Roth wrote, “god help us [this] makes me think he wants to share it publicly”.
One hour later, Twitter announces Trump’s permanent suspension “due to the risk of further incitement of violence.” Many at Twitter were ecstatic. And congratulatory: “big props to whoever in trust and safety is sitting there whack-a-mole-ing these trump accounts”. By the next day, employees expressed eagerness to tackle “medical misinformation” as soon as possible. “For the longest time, Twitter’s stance was that we aren’t the arbiter of truth,” wrote another employee, “which I respected but never gave me a warm fuzzy feeling.” But Twitter’s COO Parag Agrawal—who would later succeed Dorsey as CEO—told Head of Security Mudge Zatko: “I think a few of us should brainstorm the ripple effects” of Trump’s ban. Agrawal added: “centralized content moderation IMO has reached a breaking point now.”
Outside the United States, Twitter’s decision to ban Trump raised alarms, including with French President Emmanuel Macron, German Chancellor Angela Merkel, and Mexico’s President Andres Manuel Lopez Obrador.
Macron told an audience he didn’t “want to live in a democracy where the key decisions” were made by private players. “I want it to be decided by a law voted by your representative, or by regulation, governance, democratically discussed and approved by democratic leaders.”
Merkel’s spokesperson called Twitter’s decision to ban Trump from its platform “problematic” and added that the freedom of opinion is of “elementary significance.”
Russian opposition leader Alexey Navalny criticized the ban as “an unacceptable act of censorship.”
Whether you agree with Navalny and Macron or the executives at Twitter, we hope this latest installment of #TheTwitterFiles gave you insight into that unprecedented decision. From the outset, our goal in investigating this story was to discover and document the steps leading up to the banning of Trump and to put that choice into context. Ultimately, the concerns about Twitter’s efforts to censor news about Hunter Biden’s laptop, blacklist disfavored views, and ban a president aren’t about the past choices of executives in a social media company. They’re about the power of a handful of people at a private company to influence the public discourse and democracy.
This was reported by @ShellenbergerMD, @IsaacGrafstein, @SnoozyWeiss, @Olivia_Reingold, @petersavodnik, @NellieBowles. Follow all of our work at The Free Press: @TheFP Please click here to subscribe to The Free Press, where you can continue reading and supporting independent journalism: thefp.com/subscribe