As somebody who has long understood the power of social platforms to amplify voices and build community, I have not come lightly to the decision to step back from X. This platform, once a vibrant space for connection and conversation, has devolved into a hostile environment where anti-Blackness thrives, digital rights are trampled, and accountability is nonexistent. Here’s why I’ve decided to leave X and what it signifies for our digital future.
A Platform Overrun with Anti-Blackness
The systemic failure of X to protect its Black users is glaringly evident. Recently, I conducted an experiment: I reported 20 explicitly racist posts, each featuring slurs and dehumanizing language. These were not borderline cases but blatantly hateful content. Yet X’s response was consistent: “This doesn’t violate our policies.”
This indifference to overt racism sends a clear message—X has no intention of prioritizing safety for marginalized communities. The normalization of hate speech, paired with the platform’s refusal to address user complaints, perpetuates a dangerous environment that disproportionately harms Black users.
Disinformation: A Feature, Not a Bug
One of the most alarming aspects of X’s current trajectory is its apparent reward system for disinformation. Inaccurate and inflammatory content is routinely boosted by the algorithm, creating an ecosystem where lies outpace truth. This is not incidental—it’s structural. Disinformation garners attention, and attention fuels engagement, which translates to profit.
This deliberate amplification of falsehoods has devastating consequences. It keeps the masses misinformed, fuels polarization, and manufactures conflict where none existed. Whether it’s denial of systemic racism, demonization of marginalized communities, or outright propaganda, the platform’s prioritization of divisive narratives actively undermines the pursuit of justice and truth.
Disinformation doesn’t just mislead—it erodes the very foundation of collective understanding, making solidarity and coordinated action harder to achieve. For a platform that wields such immense influence over public discourse, this is not just irresponsible—it’s dangerous.
Digital Rights Eroded
The platform’s new Terms of Service, effective November 15, 2024, mark a low point for user rights. Key changes include:
- AI Training Clause: User content is now explicitly used to train generative AI models, with no opt-out or compensation.
- Broad Content License: X retains a worldwide, irrevocable license to modify and sublicense all user-generated content.
- No Right to Privacy: Even private messages can be analyzed for ad targeting and AI development.
These policies strip users of agency and transparency, prioritizing corporate interests over the rights and dignity of the community.
A Leadership Crisis
Under its current ownership, X has embraced a culture of gaslighting and misinformation. The platform’s refusal to confront historical injustices mirrors broader systemic failures. When hate speech is framed as “free speech” and calls for accountability are dismissed as “mind viruses,” it becomes clear that this is not a space where justice can flourish.
A Broken Block Feature
Another glaring issue is the broken block feature. Despite claims of offering tools for user safety, X has made it so that blocking someone no longer fully prevents them from seeing your activity. This means that racists, stalkers, and trolls can continue to monitor and engage with users even after being blocked.
This failure to implement a functional block feature is not just an oversight—it’s a deliberate choice that prioritizes engagement metrics over user well-being. By eroding one of the most basic tools for self-protection, X leaves its most vulnerable users exposed to relentless harassment and harm. This is yet another way the platform perpetuates a hostile environment under the guise of free expression.
What Needs to Change
Leaving X doesn’t mean abandoning hope for what digital platforms can achieve. It means demanding better:
- User-Friendly Terms of Service: Policies that protect privacy, enforce accountability, and respect user-generated content.
- Active Enforcement of Safety Policies: A zero-tolerance approach to racism, harassment, and other forms of hate.
- Structural Change: Whether through new leadership, nationalization, or stringent regulation, X must evolve to serve its users equitably.
Moving Forward
I’m not deleting my account entirely, but I’ll be erasing older content and leaving only posts that align with equity and justice. If X decides to train its AI models on my anti-imperialist and anti-militarist content, so be it. But for now, I refuse to endorse a platform complicit in the spread of hate, the amplification of lies, and the erosion of digital rights.
Social media platforms are not inherently neutral—they reflect the priorities of those who build and run them. Until X prioritizes its users over its profits, I’ll be elsewhere, seeking spaces that align with my values and allow me to contribute meaningfully without compromise.
Stay Connected
While I’m leaving X, the conversation doesn’t stop. You can find me on Bluesky, where I’ll continue to share thoughts, ideas, and discussions rooted in justice and liberation. Connect with me at @hashimmteuzi.com—let’s keep building the world we want to see, together.
Let’s not normalize platforms that dehumanize. Let’s build and demand better.