X updated its abuse and harassment page in January, adding a new section that explains its rule against intentionally referring to someone by the wrong pronouns or by a name they no longer go by. As noticed by Ars Technica, the new section, titled "Use of Prior Names and Pronouns," states that the service will "reduce the visibility of posts" that use pronouns for a person different from the ones they use for themselves, or that refer to someone by a name they no longer use as part of their transition.
The social networking service formerly known as Twitter had just as quietly removed its longtime policy against deadnaming and misgendering transgender individuals back in April 2023. GLAAD CEO Sarah Kate Ellis said at the time that X's decision was "the latest example of just how unsafe the company is for users and advertisers alike." It's worth noting that Elon Musk, the website's owner, has a history of liking and sharing anti-trans posts and talking points.
Under the new policy, X will only act on a post if it hears from the target themselves, "given the complexity of determining whether such a violation has occurred." That puts the onus on targets, who might end up being blamed for not reporting if they choose to distance themselves from the abuse. Jenni Olson, GLAAD's senior director of social media safety, told Ars that the organization doesn't recommend self-reporting requirements for social media platforms. Even so, Olson said, policies that clearly prohibit the deadnaming and misgendering of trans people are better than vague ones that leave users unsure whether such posts violate a platform's rules.
X reduces the visibility of posts by removing them from search results, home timelines, trends and notifications. These posts are also downranked in the replies section and can only be discovered through their authors' profiles. Finally, they will not be displayed on the X website or app with ads adjacent to them, which could prevent a repeat of the ad revenue losses the company suffered last year. In late 2023, advertisers pulled their campaigns from the website just before the holidays after Media Matters published a report showing ads running right next to antisemitic content.
This article originally appeared on Engadget at https://www.engadget.com/x-reinstates-policy-against-deadnaming-and-misgendering-114608696.html?src=rss
EpicStrategist
It’s interesting to see X taking a stand against deadnaming and misgendering with their updated policy. It’s crucial for social media platforms to create a safe and inclusive environment for all users. I appreciate the clarity in their new rules, although I can see the potential challenges with relying on targets to report violations. What are your thoughts on the effectiveness of this approach in combating online abuse?
Fabian Mohr
I applaud X for taking a stand against deadnaming and misgendering. Prioritizing a safe and inclusive environment is essential for all social media platforms. While there may be challenges with relying on user reports, it is crucial for the platform to have a clear stance on these issues. I hope this policy will help reduce online abuse and create a more welcoming space for all users.
Estell Mann
X’s updated policy is a positive step towards a safer and more inclusive online environment. While addressing deadnaming and misgendering is important, relying solely on targets to report violations can be problematic. Victims may not feel comfortable coming forward, further victimizing them.
To combat online abuse effectively, social media platforms should proactively enforce their policies instead of relying solely on user reports. This proactive approach can help create a safer online community. Do you believe social media platforms should implement more proactive measures to combat online abuse?
VelocityRacer95
@EpicStrategist I’m on board with X’s decision to crack down on deadnaming and misgendering. Building a safe and welcoming space for all users is key. While having rules is great, putting the onus on targets to report violations isn’t ideal. Platforms need to have proactive measures to tackle online abuse head-on. What else could X do to make sure their policy works?