Global - Ekhbary News Agency
Digital Platforms Under Scrutiny: Substack Accused of Profiting from Nazi and White Supremacist Content
A recent in-depth investigation by The Guardian has exposed a troubling aspect of the global publishing platform Substack: the site is reportedly generating significant revenue from newsletters that actively promote virulent Nazi ideology, white supremacy, and antisemitism. This revelation casts a harsh spotlight on the platform, which boasts approximately 50 million users worldwide, regarding its ethical responsibilities and content moderation policies.
Substack operates on a business model where individuals can self-publish articles and charge for premium content, with the platform taking approximately 10% of the revenue generated by these newsletters. With about 5 million people paying for access to various newsletters, this means Substack directly profits from the dissemination of hateful and extremist content. The investigation unearthed several egregious examples that underscore the severity of this issue.
Among the newsletters identified is one titled 'NatSocToday,' which has amassed 2,800 subscribers. This newsletter charges $80 (approximately £60) for an annual subscription, although much of its content is freely available. 'NatSocToday' is believed to be operated by a far-right activist based in the United States and prominently features a swastika—a symbol appropriated by the Nazi party in the 1920s to represent white supremacy—as its profile picture. One of its recent posts falsely attributed responsibility for the Second World War to the Jewish people and lauded Adolf Hitler as 'one of the greatest men of all time.'
The investigation further revealed a concerning aspect of Substack's algorithmic behavior. Within just two hours of subscribing to 'NatSocToday' for the purpose of the inquiry, the Substack algorithm directed The Guardian's account to 21 other profiles featuring similar extremist content. This highlights a deeper problem with how algorithms can inadvertently amplify radical viewpoints, creating digital echo chambers where dangerous ideologies can proliferate and gain traction unchecked.
Other accounts brought to light include 'Erika Drexler,' a self-styled 'NS [national socialist] activist' with 241 subscribers, who charges $150 for an annual subscription. This US-based account shared posts describing Hitler as her 'hero' and the 'most overqualified leader ever.' Similarly, 'Ava Wolfe,' who has 3,000 subscribers and describes herself as an 'archivist of articles and videos about history, in particular WW2,' appears to be based in the UK and charges £38 annually. Her profile prominently displays swastikas and other Nazi imagery, and a significant portion of her content engages in Holocaust denial, falsely claiming that 'no one was deliberately murdered by Germans' and that deaths were 'from disease and starvation only,' despite the historical fact that approximately 6 million Jews perished in the Holocaust.
Another disturbing finding was the 'Third Reich Literature Archive,' an account with 2,100 subscribers, which shared postcards purporting to be from a Nazi propaganda rally in Nuremberg in 1938, the year before WWII began. This account also charges $80 a year for a premium subscription. The Guardian's investigation also found that the algorithm promoted posts propagating conspiracy theories about 'Jewish power and influence' and suggesting that antisemitism is a myth. Furthermore, it directed users to other extremist content, including newsletters related to the 'great replacement' conspiracy theory, which posits a plot to replace white Europeans with people from other races.
These revelations emerge at a time of a sharp global increase in antisemitism and Islamophobia, particularly since the onset of the Israel-Gaza conflict in October 2023. Danny Stone, chief executive of the Antisemitism Policy Trust, emphasized that harmful online content frequently inspires real-life attacks. He cited several tragic examples: the racially motivated murder of 10 African Americans in Buffalo, New York, in 2022; a synagogue shooting in Pittsburgh, Pennsylvania, in 2018 that killed 11 people; and the 2017 attack on a mosque in Finsbury Park, North London, which resulted in one death and several injuries. Stone unequivocally stated, 'People can be, and are, inspired by online harm to cause harm in the real world.'
The findings raise urgent questions about the accountability of technology platforms in monitoring and profiting from the content they host. While these platforms often champion free expression, profiting from material that incites hatred and violence represents a grave breach of ethical and societal standards. Immediate and decisive action is imperative to ensure that the digital space does not become a safe haven for extremists to spread their dangerous ideologies and incite real-world harm.