Twitter’s new owner, Elon Musk, has been feverishly promoting his “Twitter Files”: internal communications selected from the company’s archives and laboriously tweeted out by sympathetic intermediaries. But while Musk clearly believes he is unleashing some partisan kraken, the documents show nothing of the sort. Far from revealing conspiracy or systemic abuse, they offer a valuable glimpse behind the scenes of moderation at scale, and hint at the Sisyphean labor every social media platform is engaged in.
For a decade, companies like Twitter, YouTube, and Facebook have performed an elaborate dance to keep the details of their moderation processes out of reach of bad actors, regulators, and the press alike.
Reveal too much and the process is open to abuse by spammers and scammers (who exploit every detail that leaks or is published); reveal too little and damaging reports and rumors take hold as the company loses control of the narrative. At the same time, they must be ready to justify and document their methods or risk censure and fines from government agencies.
The result is that while everyone knows a little about how these companies inspect, filter, and arrange the content posted on their platforms, it is just enough to be sure that what we are seeing is only the tip of the iceberg.
Sometimes there are exposés of the methods we suspected: hourly contractors clicking through violent and sexual imagery, a disgusting but apparently necessary industry. Sometimes the companies overplay their hand, as with repeated claims of how AI will revolutionize moderation, followed by reports that the AI systems used for the purpose are inscrutable and unreliable.
What almost never happens, and what companies generally avoid unless forced, is exposing the actual tools and processes of content moderation at scale with no filter. That is what Musk has done, perhaps at his own peril, but surely to the great interest of anyone who has wondered what moderators actually do, say, and click as they make decisions that may affect millions of people.
Pay no attention to the honest, complicated conversations behind the curtain
The email chains, Slack conversations, and screenshots (or screenshots of screenshots, rather) published over the last week offer a glimpse into this important and little-understood process. What we see is a bit of the raw material, and it is not the partisan Illuminati some might expect, though it is clear from the highly selective presentation that this is how we are meant to perceive it.
Far from it: the people involved are by turns cautious and confident, pragmatic and philosophical, candid and reasonable, showing that the choice to restrict or ban is not made arbitrarily but according to an evolving consensus of opposing viewpoints.
(Update: Shortly after I posted this, a new thread began; it is more of the same: serious discussions of complex issues with experts, law enforcement, and others.)
In the discussions leading up to the decision to temporarily restrict the Hunter Biden laptop story (arguably the most contentious moderation decision of the last few years, second perhaps only to banning Trump), there is neither the partisanship nor the conspiracy implied by the documents’ bombshell packaging.
Instead we find serious, thoughtful people trying to reconcile conflicting and inadequate definitions and policies: What counts as “hacked” material? How confident are we in this or that assessment? What is a proportionate response? How and when should we communicate, and with whom? What are the consequences if we do not act, and what if we do? What precedents are we setting or breaking?
The answers to these questions are not at all obvious, and would normally be the product of months of research and debate, or even litigation (legal precedent shapes both the language and the repercussions). Yet they had to be made fast, before the situation spiraled out of control one way or another. Dissent from inside and outside the company (including from a U.S. Representative, ironically in a thread with Jack Dorsey that itself ran afoul of the policy in question) was considered and honestly integrated.
“This is an emerging situation where the facts remain unclear,” wrote Yoel Roth, the company’s former head of trust and safety. “We erred on the side of including a warning and preventing this content from being amplified.”
Some questioned the decision. Some questioned the facts as presented. Others said their reading of the policy did not support the action. One noted that the specific basis and scope of the action needed to be made very clear, since it would obviously be scrutinized as partisan. Deputy general counsel Jim Baker asked for more information but said caution was warranted: there was no clear precedent, the facts were absent or unverified at that point, and some of the material was plainly nonconsensual nude imagery.
“Reducing what Twitter itself recommends or puts in trending is reasonable, and your policy on QAnon groups is fine,” acknowledged Rep. Ro Khanna, who nevertheless felt the action in question went too far. “It’s a difficult balance.”
Neither the public nor the media is ordinarily privy to these conversations, and the truth is that we are curious but largely in-the-dark onlookers. It would be wrong to call the published material a complete or even accurate representation of the whole process (it was blatantly, if not effectively, cherry-picked to fit a narrative), but even so, we are better informed than we were before.
Tools of the Trade
More directly revealing is the next thread, which contains actual screenshots of the moderation tools Twitter employees use. While the post disingenuously attempts to equate the use of these tools with shadow banning, the screenshots show no nefarious activity, nor do they need to in order to be fascinating.
Image Credits: Twitter
On the contrary, what is shown is compelling precisely because it is so prosaic, so blandly unremarkable. Here are the various techniques every social media company has explained, over and over, that it uses, but whereas before they were couched in cheery PR diplomacy, now they are presented without comment: “Trends blacklist,” “High profile,” “DO NOT TAKE ACTION” and the rest.
Meanwhile, Yoel Roth explains that the actions and policies need to be better aligned, that more research is needed, and that plans to improve are underway:
Much of what we have implemented assumes that if exposure to misinformation directly causes harm, we should use remediations that reduce exposure, and limit the spread/virality of content.
Again, the content belies the context in which it is presented: these are hardly the deliberations of a secretive cabal of liberal activists lashing out at its ideological enemies with the ban hammer. It is an enterprise-grade dashboard, like the ones you might see for lead tracking, logistics, or accounting, discussed and iterated on by sober people working within practical constraints and aiming to satisfy the needs of multiple stakeholders.
As it should be: Twitter, like other social media platforms, has worked for years to make the moderation process efficient and systematic enough to operate at scale, not only to keep the platform from being overwhelmed by bots and spam, but also to comply with legal frameworks like FTC orders and GDPR. (Giving outsiders “broad, unfiltered access” to the tools pictured may itself have constituted a violation; the relevant authorities told TechCrunch they are “engaging” with Twitter on the matter.)
A handful of employees making arbitrary decisions, with no rubric or oversight, cannot moderate effectively or satisfy such legal requirements; neither can automation (as the resignations of several members of Twitter’s Trust and Safety Council attest). You need a large network of people cooperating and working according to standardized systems, with clear boundaries and escalation procedures. And that is certainly what the screenshots Musk posted show.
What is not shown in the documents is any kind of systemic bias, which Musk’s stand-ins insinuate but never quite substantiate. But whether or not it fits the narrative they want, what is being released will interest anyone who thinks these companies should be more forthcoming about their policies. It is a win for transparency, even if Musk’s opaque approach achieves it more or less by accident.