On Thursday, March 3, 2022, Twitter Inc. announced in a blog post (1) that it was expanding its crowdsourced fact-checking and content moderation program, Birdwatch, making notes on potentially misleading tweets visible to more users on the platform.
The micro-blogging platform launched Birdwatch last year as an experiment that asked users to identify misleading tweets and write notes debunking their content; the notes would then be attached to the original post.
Like other social media sites, Twitter has been under severe pressure to do more to prevent false content from spreading among its 217 million+ daily active users. Until now, it has kept the notes written by the Birdwatch pilot program’s 10,000+ contributors on a separate website (2).
According to Twitter, a small, randomized group of users in the US will now see these notes directly on tweets and will be able to rate how helpful the information is.
Over time, the social media site aims to expand the content moderation program to additional users in more countries.
Twitter also highlighted that, according to its surveys, users who read a Birdwatch note on a potentially misleading tweet were 20 to 40 percent less likely to agree with the tweet than those who saw it without the note.
This development suggests that as tech companies face growing regulatory risk, they may try to pre-empt it with AI-based or community-driven ranking systems that do the moderation work they cannot do themselves at scale.
More About Birdwatch
“By empowering people to do this, they can rate the usefulness of tweets and add helpful and informative context for people from different perspectives,” said Keith Coleman, Twitter’s Vice President of Product, in the announcement (3).
Twitter also emphasized the importance of contributors with diverse perspectives, determined not by their demographics but by how they have rated past notes. A note will only be visible under tweets if enough contributors from different points of view have rated it helpful.
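Twitter’s announcement does not spell out the exact mechanics of this visibility rule. As a rough illustration only, the sketch below shows one way such a “diverse perspectives” gate could work: a note surfaces only when raters whose past rating histories tend to disagree both mark it helpful. The similarity measure, thresholds, and all names here are hypothetical assumptions, not Birdwatch’s actual implementation.

```python
# Illustrative sketch of a "diverse perspectives" visibility rule.
# NOT Twitter's actual Birdwatch algorithm; the similarity measure,
# thresholds, and all names are hypothetical.

def rating_similarity(history_a: dict, history_b: dict) -> float:
    """Fraction of co-rated notes on which two contributors agreed."""
    shared = set(history_a) & set(history_b)
    if not shared:
        return 0.0  # no overlap: treat the pair as dissimilar
    agreements = sum(history_a[n] == history_b[n] for n in shared)
    return agreements / len(shared)

def note_is_visible(helpful_raters, histories, min_raters=5, max_similarity=0.5):
    """Show a note only if enough contributors rated it helpful AND at
    least one pair of those raters has dissimilar rating histories."""
    if len(helpful_raters) < min_raters:
        return False
    for i, a in enumerate(helpful_raters):
        for b in helpful_raters[i + 1:]:
            if rating_similarity(histories[a], histories[b]) <= max_similarity:
                return True  # agreement across differing viewpoints
    return False

if __name__ == "__main__":
    # Past ratings per contributor: note id -> helpful (1) / not (0).
    histories = {
        "u1": {"n1": 1, "n2": 1}, "u2": {"n1": 1, "n2": 1},
        "u3": {"n1": 0, "n2": 0}, "u4": {"n1": 1, "n2": 0},
        "u5": {"n1": 0, "n2": 1},
    }
    # u1 and u3 disagreed on every past note, so the note is shown.
    print(note_is_visible(["u1", "u2", "u3", "u4", "u5"], histories))  # True
```

The key idea is that agreement across raters who usually disagree is a stronger consensus signal than agreement within a like-minded cluster.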
It is rolling out Birdwatch to more users following “encouraging indications that Birdwatch can be helpful and informative to people” on the platform. By crowdsourcing content moderation and fact checks of viral content, Twitter is asking users to act in good faith to help tackle misinformation on the platform.
However, data published by Twitter suggests low participation.
An analysis recently published by The Washington Post highlighted that Birdwatch contributors were flagging only 42 tweets a day in 2022 before Russia invaded Ukraine on February 24; after that, the number increased to 156 tweets a day (4).
Increased Scrutiny of Social Media Content Moderation Amid the War
War-related misinformation has recently intensified scrutiny of how social media platforms moderate content.
Several Birdwatch notes last week addressed misleading content related to the Russia-Ukraine crisis.
For instance, a tweet from February 26, retweeted over 17,000 times, showed an image of British rock star Paul McCartney waving a Ukrainian flag on stage in front of an audience.
A note attached to the tweet on the Birdwatch site remarked that the picture was taken back in 2008, when McCartney performed at an “Independence Concert” in Kyiv (5).
TikTok has also witnessed an explosion of conflict-related content on its platform (6). While security analysts and most people are used to getting some 90% of their war information from official sources, millions turned to TikTok to watch live content from the affected areas.
Over the past few days, videos from Ukraine on TikTok have racked up more than a billion views a day. One bomb-blast video was seen more than 44 million times before the poster’s account was suspended.
The situation is especially fraught for TikTok because its user-friendliness, instant gratification, and scrappy content style have made it the platform of choice for documenting this war.
A fake TikTok from Ukraine has garnered over 5 million views in 12 hours.
It features a couple repeating “oh my god, oh my god” then there’s a loud explosion, screaming and he says “ow my leg.”
I found this exact audio on another video from the 2020 Beirut explosion. pic.twitter.com/fP20IdtfX7
— Abbie Richards (@abbieasr) February 25, 2022
However, the scale at which TikTok distributes content makes it challenging to control. Fake news travels six times faster than legitimate information because of its emotional impact (7). And TikTok has contributed to the spread of false information through its reusable-audio feature, which lets users repurpose real Ukrainian war audio over unrelated videos.
TikTok has more than a billion users, so even if its algorithms filtered out 99% of war-related content, the remaining 1% could still reach at least 10 million people.
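The back-of-envelope arithmetic behind that estimate is shown below; the user count and filter rate are the article’s figures, and the calculation is purely illustrative.

```python
# Back-of-envelope estimate using the figures cited above.
users = 1_000_000_000   # TikTok's reported user base
filter_rate = 0.99      # hypothetical share of war content filtered out

residual_reach = users * (1 - filter_rate)
print(f"Potential remaining audience: {residual_reach:,.0f}")
# Potential remaining audience: 10,000,000
```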
As social media platforms continue to grow, the reach that drives their success can just as easily be used to distribute misinformation. At the same time, these platforms lack the resources to police every post adequately.
And while AI engines are being developed to regulate nearly every aspect of social media, including inferring mental health conditions from a single post (8), they have their limits.
Why Now?
As moderation of disinformation has become a major concern, many people are eager to remove or alter Section 230 of the Communications Decency Act, which protects social media platforms from liability for user-generated material (9).
Section 230 was drafted by Sen. Ron Wyden (D-OR) and Rep. Chris Cox (R-CA) to allow website owners to moderate their sites without legal repercussions.
The rule is especially important for social media platforms, but it also applies to a wide range of websites and services, including news outlets with comment sections, like TimesNext. The Electronic Frontier Foundation has dubbed it “an essential law protecting internet speech” (10).
However, it has become increasingly polarizing and is frequently misunderstood. Critics claim that the law’s broad protections let big corporations ignore serious harm to users (11). On the other hand, some legislators assert that it merely safeguards “neutral platforms,” a term that has no basis in the law itself (12).
In February, a Senate committee advanced the controversial EARN IT bill, which would create a Section 230 exception for child sexual exploitation content, setting the stage for more carve-outs that would weaken the law’s protections (13).
More About Section 230
No senator on the Judiciary Committee voted against sending the bill to the Senate floor. Nonetheless, according to The Washington Post (14), numerous people have expressed worries about the bill’s potential risks to privacy and free expression.
The bill’s stated goal is to “create recommended best practices that interactive computer service providers might choose to use to prevent, mitigate, and respond to child sexual exploitation online.” However, technologists and privacy groups are concerned that the measure will put businesses in legal jeopardy unless they opt to scan everything held on cloud-based services, including messages, images, online backups, and more.
EARN IT would “pave the way for an extensive new surveillance system, run by private organizations, that would black out some of the most essential privacy and security features in technology used by people globally,” according to the Electronic Frontier Foundation, a digital civil rights organization (15).
One of the bill’s supporters, the National Center on Sexual Exploitation, disputed the privacy and security claims, asserting that online platforms have “serious problems” that EARN IT would fix.
Senators Lindsey Graham and Richard Blumenthal introduced EARN IT. If the name sounds familiar, it is because a previous version of the bill was introduced in 2020, only to be withdrawn due to widespread dissent (16).
Apple, meanwhile, has already attempted to scan for CSAM (Suggested Reading: What is NeuralHash? Breaking Down Apple’s New CSAM-Detection Tool). Like the initial EARN IT Act, the CSAM-scanning technology planned for Apple’s iCloud was a huge flop: following the backlash, Apple put the plans on hold and promised to confer with scholars and advocacy groups. The EARN IT law, on the other hand, would force Apple’s hand.
The EARN IT bill is now headed to the Senate floor for a vote, but it’s unclear whether it’ll get the votes it needs to become law.
Closing Remarks
We believe that while community-driven content moderation and ranking systems like Twitter’s Birdwatch can slow the spread of misinformation, bad actors can also abuse them.
Even so, if social media platforms adopt AI-based or user-driven content moderation systems, those systems will shift more of the accountability for what is posted onto users themselves.
In other words, it could be an easy way for social media platforms to deflect responsibility and skirt pending regulation.
Source: https://timesnext.com | Author: Team Rucha Joshi | Date: March 8, 2022, 12:22 PM