
Social media is polluting society. Moderation alone will not solve the problem

We all want to be able to speak our minds online: to be heard by our friends and to talk back to our opponents. At the same time, we do not want to be exposed to speech that is inappropriate or crosses a line. Technology companies have addressed this conundrum by setting standards for speech, a practice protected under federal law. They hire in-house moderators to examine individual pieces of content and remove posts that violate the predefined rules set by the platforms.

There are clear problems with this approach: harassment, misinformation about topics like public health, and false claims about legitimate elections remain rampant. But even if content moderation were implemented flawlessly, it would still miss a whole host of issues that are often portrayed as moderation problems but really are not. To address those non-speech problems, we need a new strategy: treat social media companies as potential polluters of the social fabric, and directly measure and mitigate the effects their choices have on people. That means establishing a policy framework, perhaps modeled on the Environmental Protection Agency or the Food and Drug Administration, that could be used to identify and evaluate the societal harms these platforms generate. If those harms persist, that body could be empowered to enforce its policies. But to move beyond the limitations of content moderation, such regulation would have to be driven by clear evidence and have a demonstrable effect on the problems it claims to address.

Moderation (whether automated or human) can act on what we call "acute" harms: those caused directly by individual pieces of content. But we need this new approach because there is also a host of "structural" problems, such as discrimination, declining mental health, and eroding civic trust, that manifest broadly across a product rather than through any single piece of content. A famous example of this kind of structural problem is Facebook's 2012 "emotional contagion" experiment, which showed that users' affect (their mood, as measured by their behavior on the platform) shifted measurably depending on which version of the product they were exposed to.

In the backlash that followed once the results were made public, Facebook (now Meta) put an end to that kind of deliberate experimentation. But just because it stopped measuring these effects does not mean product decisions stopped having them.

These structural problems are a direct consequence of product choices. Product managers at technology companies like Facebook, YouTube, and TikTok are incentivized to focus overwhelmingly on maximizing time and engagement on their platforms. And experimentation there is still very much alive: almost every product change is deployed to small test audiences through randomized controlled trials. To assess progress, companies implement rigorous management processes built around their core missions (known as Objectives and Key Results, or OKRs), and even use those results to determine bonuses and promotions. Responsibility for dealing with the consequences of product decisions, meanwhile, is often placed on other teams that sit downstream and lack the authority to address root causes. Those teams are generally capable of responding to acute harms, but they often cannot address problems caused by the products themselves.
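For readers unfamiliar with how those randomized controlled trials are typically wired up, here is a minimal Python sketch of deterministic bucket assignment. The function name, experiment name, and 5 percent rollout share are hypothetical illustrations, not any company's actual code.

```python
import hashlib

def assign_bucket(user_id: str, experiment: str, treatment_share: float = 0.05) -> str:
    """Deterministically place a user in 'treatment' or 'control' for one experiment.

    Hashing the (experiment, user) pair keeps each user's assignment stable
    across visits and statistically independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    fraction = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "treatment" if fraction < treatment_share else "control"

# Example: expose roughly 5% of users to a new ranking variant.
print(assign_bucket("user-12345", "feed_ranking_v2"))
```

Because assignment is a pure function of the user and experiment identifiers, the same user sees the same variant every time, which is what makes later comparisons between the two groups, whether of engagement or of harm, statistically meaningful.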

With attention and focus, that same product development structure could be turned toward questions of societal harm. Consider Frances Haugen's congressional testimony last year and the media coverage of the leaked disclosures about Facebook's alleged effects on teenagers' mental health. Facebook responded to the criticism by explaining that it had studied whether teens felt the product had a negative effect on their mental health and whether that perception made them use the product less, not whether the product actually had adverse effects. While that response may have addressed the particular controversy, it also suggests that a study tackling mental health directly, rather than its effect on user engagement, would not be much of a stretch.

Incorporating assessments of systemic harm will not be easy. We would have to work out what can actually be measured rigorously and systematically, what we would ask of companies, and which problems to prioritize in any such assessment.

Companies could implement such protocols themselves, but their financial interests too often run counter to meaningful constraints on product development and growth. That reality is the standard case for regulation that operates on the public's behalf. Whether through a new legal mandate from the Federal Trade Commission or harm-mitigation guidelines from a new government agency, the regulator's job would be to work with technology companies' product development teams to design implementable protocols, applied during product development, for measuring meaningful signals of harm.

This approach may sound cumbersome, but adding these types of protocols should be straightforward for the largest companies (the only ones to which the regulations should apply), because they have already built procedures for measuring efficacy through randomized controlled trials into their development processes. The more time-consuming and complex part would be defining the standards; the actual execution of the tests would require little regulatory involvement at all. It would simply mean asking diagnostic questions alongside the normal growth-related questions and then making that data accessible to external reviewers. Our forthcoming paper at the 2022 ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization explains this process in more detail and outlines how it could be built efficiently.
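As a rough illustration of what asking diagnostic questions alongside growth questions could look like in practice, the Python sketch below pairs a standard engagement metric with a hypothetical well-being survey score for each user in an experiment and writes the records out for external review. The field names and the 1-to-5 scale are our own assumptions for the example, not a prescribed standard.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ExperimentObservation:
    user_id: str
    bucket: str              # "treatment" or "control"
    minutes_per_day: float   # the usual growth metric
    wellbeing_score: int     # diagnostic survey answer, 1 (worst) to 5 (best)

def export_for_review(observations: list[ExperimentObservation], path: str) -> None:
    """Write paired growth and diagnostic measurements as JSON lines so an
    external reviewer can audit them alongside the experiment's results."""
    with open(path, "w") as f:
        for obs in observations:
            f.write(json.dumps(asdict(obs)) + "\n")
```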

Just as products that reach tens of millions of people are tested for their ability to increase engagement, companies would need to ensure that those products also adhere, at least in aggregate, to a "don't make the problem worse" rule. Over time, more aggressive standards could be established to roll back the existing effects of already-approved products.
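One way such a rule could be operationalized, assuming a diagnostic metric where higher values are better, is a simple statistical gate: approve a rollout only if the confidence interval on the treatment-versus-control difference rules out more than a small decline. The sketch below uses a plain normal approximation and an arbitrary margin; it is an illustration of the idea, not a proposed regulatory standard.

```python
from math import sqrt
from statistics import mean, stdev

def passes_do_no_harm(treatment: list[float], control: list[float],
                      margin: float = 0.05, z: float = 1.96) -> bool:
    """Return True only if the diagnostic metric (higher = better) shows no
    measurable decline: the lower 95% confidence bound on the treatment-minus-
    control difference must stay above -margin."""
    diff = mean(treatment) - mean(control)
    se = sqrt(stdev(treatment) ** 2 / len(treatment)
              + stdev(control) ** 2 / len(control))
    return diff - z * se > -margin
```

A stricter version of the same gate, with a margin that shrinks over time, could serve as the more aggressive roll-back criterion described above.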

There are many measurement methods that could suit this kind of process. These include protocols such as the photographic affect meter, which has been used to diagnose how exposure to products and services affects mood. Tech platforms already use surveys to assess product changes; according to the reporters Cecilia Kang and Sheera Frenkel, Mark Zuckerberg was shown survey-based growth metrics for most major product decisions, and those results are what led him to roll back a "nicer" version of Facebook's news feed algorithm after the 2020 election.

It is reasonable to ask whether this approach is workable in the tech industry and whether companies would object to it. While any potential regulation may provoke such a reaction, we have received positive feedback in early conversations about this framework, perhaps because under our approach most product decisions would pass muster. (Causing measurable harm of the kind described here is a high bar, and most product choices would clear it.) And unlike other proposals, this strategy sidesteps direct regulation of speech, at least outside the most extreme cases.

In the meantime, we don't have to wait for regulators to act. Companies could readily implement these procedures on their own. But it is hard to build the case for change without first starting to collect the kind of high-quality data we describe here, because the existence of these harms cannot be demonstrated without real measurements, a chicken-and-egg challenge. Proactively monitoring for structural harms will not solve the platforms' content problems. But it would let us verify, meaningfully and continuously, whether the public interest is being subverted.

The U.S. Environmental Protection Agency offers an apt analogy. The agency's original purpose was not so much to legislate environmental policy as to develop the standards and protocols that would allow policies with actionable outcomes to be created. Seen this way, the EPA's lasting impact was not to settle environmental policy debates (it doesn't) but to make them possible. Likewise, the first step toward fixing social media is to create the infrastructure we need to examine, in real time, outcomes for speech, mental health, and civic trust. Without it, we will be unable to address many of the most pressing problems these platforms create.

Nathaniel Lubin is a fellow at the Digital Life Initiative at Cornell Tech and a former director of the Office of Digital Strategy at the White House under President Obama. Thomas Krendl Gilbert is a postdoctoral researcher at Cornell Tech with an interdisciplinary PhD in machine ethics and epistemology from the University of California, Berkeley.
