Facebook partially documents its content recommendation system


Algorithmic recommendation systems on social media sites like YouTube, Facebook and Twitter have borne much of the responsibility for the spread of disinformation, propaganda, hate speech, conspiracy theories and other damaging content. Facebook, in particular, has come under fire in recent days for allowing QAnon conspiracy groups to thrive on its platform and for helping militias grow their membership. Today, Facebook is trying to address claims that its recommendation systems are in some way responsible for how people are exposed to disturbing, objectionable, dangerous, misleading and deceptive content.

The company has, for the first time, made public how its content recommendation guidelines work.

In new documentation available in Facebook’s Help Center and Instagram’s Help Center, the company details how Facebook and Instagram’s algorithms work to filter content, accounts, pages, groups, and events out of its recommendations.

Currently, Facebook suggestions can appear as pages you might like, “suggested for you” posts in the News Feed, people you might know, or groups you should join. Instagram suggestions can be found in Instagram Explore, Accounts You May Like, and IGTV Discover.

The company says Facebook’s existing guidelines have been in place since 2016 as part of a strategy it calls “remove, reduce and inform.” This strategy focuses on removing content that violates Facebook’s community standards, reducing the spread of problematic content that doesn’t violate those standards, and providing people with additional information so they can choose what to click, read or share, Facebook explains.

The recommendation guidelines largely fall under Facebook’s “reduce” efforts, and they’re designed to hold to a higher standard than Facebook’s community standards, because they push users toward following new accounts, groups, pages, and so on.

Facebook, in the new documentation, details five key categories that are not eligible for recommendations. Instagram’s guidelines are similar. However, the documentation doesn’t offer any deep insight into how Facebook actually chooses what to recommend to a given user. It’s a key part of understanding recommender technology, and one Facebook intentionally left out.

One obvious category of content that’s not eligible for recommendation is anything that would impede Facebook’s “ability to foster a safe community,” such as content focused on self-harm, suicide, eating disorders, violence, sexually explicit content, regulated content like tobacco or drugs, or content shared by disreputable accounts or entities.

Facebook also claims not to recommend sensitive or low-quality content, content users frequently say they dislike, and content associated with low-quality posts. These categories include things like clickbait, deceptive business models, payday loans, products making exaggerated health claims or offering “miracle cures,” content promoting cosmetic procedures, contests, giveaways, engagement bait, unoriginal content stolen from another source, content from websites that get a disproportionate number of clicks from Facebook versus elsewhere on the web, or news that doesn’t include transparent information about authorship or staff.

Additionally, Facebook says it won’t recommend false or misleading content, such as claims found to be false by independent fact-checkers, vaccine misinformation, and content promoting the use of fraudulent documents.

Facebook says it will also “try” not to recommend accounts or entities that have recently violated community standards, shared content Facebook tries not to recommend, posted vaccine-related misinformation, engaged in purchasing “likes,” have been banned from running ads, have posted false information, or are associated with movements tied to violence.

That last claim, of course, follows recent news that a Kenosha militia Facebook event remained on the platform after being flagged 455 times following its creation, and had been cleared by four moderators as non-violating. The associated page had issued a “call to arms” and hosted comments from people asking what types of weapons to bring. Ultimately, two people were killed and a third was injured during protests in Kenosha, Wisconsin, when a 17-year-old armed with an AR-15-style rifle broke curfew, crossed state lines and shot at protesters.

Given Facebook’s track record, it’s worth examining how well it is able to abide by its own guidelines. Plenty of people have found their way to what should be ineligible content, like conspiracy theories, unhealthy content, COVID-19 misinformation and more, by clicking on suggestions at times when the guidelines failed. QAnon, it has been reported, grew through Facebook’s recommendations.

It should also be noted that there are many gray areas that such guidelines fail to cover.

Militias and conspiracy theories are just a couple of examples. Amid the pandemic, U.S. users who disagreed with government guidelines on business closures could easily find themselves steered toward various “reopen” groups where members don’t just discuss politics, but openly boast about not wearing masks in public, or even when required to do so at their workplace. They offer tips on how to get away with not wearing masks and celebrate their successes with selfies. These groups may not technically break the rules by their description alone, but they encourage behavior that poses a threat to public health.

Meanwhile, even if Facebook doesn’t directly recommend a group, a quick topic search will direct you to what would otherwise be ineligible content in Facebook’s recommendation system.

For example, a quick search for the word “vaccines” currently suggests a number of groups focused on vaccine injuries, alternative remedies, and general anti-vaccine content. These even outnumber the pro-vaccine results. At a time when scientists around the world are trying to develop protection against the novel coronavirus in the form of a vaccine, giving anti-vaxxers a massive public forum to spread their ideas is just one example of how Facebook enables the spread of ideas that may ultimately become a global public health threat.

The more complicated question, however, is where Facebook draws the line between policing users having these discussions and fostering an environment that supports free speech. With few government regulations in place, Facebook ends up making this decision on its own.

Recommendations are just one part of Facebook’s overall engagement system, and one that’s often blamed for steering users toward harmful content. But much of the harmful content users encounter could come from the groups and pages that appear at the top of Facebook search results when users turn to Facebook for general information on a topic. Facebook’s search engine rewards engagement and activity, like how many members a group has or how often users post, not how closely its content hews to accepted truths or medical guidelines.

Facebook’s search algorithms haven’t been documented in anywhere near as much detail.
