These Social Media Platforms Majorly Failed GLAAD's Safety Index

By Emell Adolphus

GLAAD seems to have a message for social media companies: Do better.

The LGBTQ+ media advocacy organization recently released its fourth annual Social Media Safety Index, a report that grades social media platforms on LGBTQ+ safety, and the major platforms' performance was less than stellar against GLAAD's metrics.

TikTok, one of the newest social platforms, earned a D+. Meanwhile, YouTube, X, Facebook, Instagram, and Threads all received failing grades for the third consecutive year.

So, what's wrong? Though there have been some minor improvements, many of the platforms failed to address issues of data privacy, moderation transparency, training of content moderators, workforce diversity, and more.

"Leaders of social media companies are failing at their responsibility to make safe products," said GLAAD president and CEO Sarah Kate Ellis in a statement released with the report. "When it comes to anti-LGBTQ hate and disinformation, the industry is dangerously lacking on enforcement of current policies. There is a direct relationship between online harms and the hundreds of anti-LGBTQ legislative attacks, rising rates of real-world anti-LGBTQ violence and threats of violence, that social media platforms are responsible for and should act with urgency to address."

Created in partnership with Ranking Digital Rights (RDR), the SMSI Platform Scorecard considers 12 LGBTQ-specific indicators and evaluates each of the six major platforms to generate a rating. According to GLAAD, the SMSI Scorecard does not include metrics on the enforcement of policies.

Furthermore, the report calls on social media platforms to prioritize LGBTQ+ safety, given a proliferation of anti-trans hate, disinformation, and harassment on social platforms.

"In addition to these egregious levels of inadequately moderated anti-LGBTQ hate and disinformation, we also see a corollary problem of over-moderation of legitimate LGBTQ expression – including wrongful takedowns of LGBTQ accounts and creators, shadowbanning, and similar suppression of LGBTQ content," explained GLAAD's Senior Director of Social Media Safety Jenni Olson. "Meta's recent policy change limiting algorithmic eligibility of so-called 'political content,' which the company partly defines as 'social topics that affect a group of people and/or society at large,' is especially concerning."

Read the full report at GLAAD's website.

