Behind the Scroll: How Social Media Shapes What We See (And Don't)
Have you ever wondered who decides what shows up on your feed?
Platforms like TikTok and Instagram are places we go to laugh, learn, connect and campaign — but they’re also spaces where we’re tracked, ranked, sold to and sometimes silenced. For marginalised voices in particular, these apps offer visibility — but only on the platform’s terms. Behind every scroll lies a system shaped by algorithms, commercial agendas and design choices that amplify some voices while filtering out others (Noble 2018).
So what’s really going on behind the scenes?
This blog explores five key communication concepts that unpack how power operates in the digital age. From algorithms to participatory culture, these frameworks help explain how social media shapes identity, visibility and belonging. Examining which narratives are amplified, and which are sidelined, reveals the deeper power structures at play.
Platforms aren’t neutral. They’re engineered for influence — and often reflect the same inequalities we see offline. Understanding how they work is the first step toward using them more critically and consciously.
1. Algorithmic Culture – Striphas (2015)
Culture is increasingly shaped not just by human choices, but by machines. Striphas (2015) describes this as algorithmic culture — a shift in which recommendation systems, not traditional gatekeepers, act as powerful cultural intermediaries, filtering content based on user behaviour and commercial interests.
This shift has profound consequences. On one hand, it democratises access to audiences. On the other, it limits diversity by favouring what is most clickable and shareable — often at the cost of nuance. Cheney-Lippold (2011) warns that algorithmic categorisation reduces identity to data points, ignoring self-identification in favour of predicted behaviours. For creators from marginalised backgrounds, this means they may be pushed into reductive categories or denied visibility altogether.
Even more troubling is the opacity of algorithmic systems. Users rarely understand how their feeds are curated, which creates what Pasquale (2015) calls the ‘black box society’. Without transparency there is little accountability, and even less room for contesting the decisions that shape digital culture. These algorithms may appear objective, but they’re shaped by the biases and priorities of those who design them (Noble 2018).
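To make the mechanism concrete, here is a minimal sketch in Python of the kind of engagement-first ranking Striphas and Noble describe. It is not any platform’s actual code: the Post fields, weights and numbers are all invented for illustration. What matters is what the scoring function can and cannot see.

```python
from dataclasses import dataclass

# A toy post with only the signals an engagement-driven ranker cares about.
# All fields and weights are invented for illustration.
@dataclass
class Post:
    author: str
    topic: str
    clicks: int
    shares: int
    ad_value: float  # how monetisable / 'brand safe' the post is

def engagement_score(post: Post) -> float:
    """Score a post purely on clickability and commercial value.

    Nothing here measures nuance, accuracy or whose voice is heard;
    content that doesn't generate engagement simply never surfaces.
    """
    return post.clicks + 2 * post.shares + 10 * post.ad_value

def build_feed(posts: list[Post], size: int = 2) -> list[Post]:
    # The feed is just the top scorers: the 'cultural intermediary' at work.
    return sorted(posts, key=engagement_score, reverse=True)[:size]

posts = [
    Post("viral_creator", "dance meme", clicks=1200, shares=400, ad_value=0.8),
    Post("major_brand", "sponsored ad", clicks=900, shares=50, ad_value=1.0),
    Post("community_org", "policy explainer", clicks=80, shares=30, ad_value=0.1),
]

for post in build_feed(posts):
    print(post.author, "->", post.topic)
# The policy explainer is filtered out, not because it lacks value,
# but because this scoring function cannot see value, only engagement.
```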
2. Participatory Culture – Jenkins (2006) & Marwick (2013)
Everyone’s a creator — but not everyone gets seen. Jenkins (2006) described participatory culture as a media environment where users are no longer passive consumers but active participants who produce, remix and circulate content. This shift was viewed as a step toward democratising media by flattening hierarchies and empowering everyday users.
However, scholars like Marwick (2013) and boyd (2014) have since challenged this optimistic view. While platforms appear open and democratic, they often reward popularity, performance and algorithmic appeal over diversity or equity (boyd 2014). Visibility in participatory spaces is not evenly distributed — it hinges on users’ ability to perform a marketable, palatable identity (Marwick 2013). Marginalised creators frequently feel pressure to conform to platform norms to gain traction (boyd 2014).
Nakamura (2015) argues that this kind of ‘identity work’ disproportionately benefits platforms, which profit from user engagement while exposing creators to exploitation and surveillance. Despite branding themselves as inclusive, platforms reflect and reinforce offline inequalities. Marwick (2013) describes this through the concept of ‘networked status’, where visibility and influence are shaped by race, gender, class and platform literacy. In this light, participatory culture remains more aspirational than actual — and for many, deeply exclusionary.
3. The Public Sphere – Fraser (1990)
Online spaces can empower — or exclude. While Habermas’ (1989) original conception of the public sphere imagined a rational, inclusive space for democratic debate, Fraser (1990) challenged this ideal by showing how dominant public spheres marginalise certain voices. She introduced the concept of counterpublics — parallel discursive arenas where subordinated groups create and circulate alternative narratives.
Social media platforms can host these counterpublics. Hashtags like #MeToo and #BlackLivesMatter have become digital rallying points for marginalised users (Jackson, Bailey & Foucault Welles 2020). These communities foster solidarity, challenge dominant narratives and claim visibility. However, their existence within corporate platforms limits their power.
Papacharissi (2010) argues that digital publics are shaped by commercial imperatives — and thus remain vulnerable to algorithmic suppression, content moderation bias and platform censorship.
A 2020 investigation found that TikTok’s algorithm had suppressed videos from Black creators by flagging search terms like ‘Black Lives Matter’ and ‘Black success’ as ‘inappropriate’ or ‘sensitive’, a bias the company later acknowledged and apologised for. One creator’s viral video captures the frustration and emotional toll of this kind of algorithmic erasure.
This precariousness is not accidental. Noble (2018) demonstrates that even search algorithms replicate social inequality. When resistance movements are filtered through systems optimised for ad revenue and ‘brand safety’, their visibility becomes conditional. So while counterpublics can exist, they must constantly navigate platform structures that work against them.
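To see how blunt term-level flagging is, consider a deliberately naive sketch of the kind of suppression the investigation reported. TikTok’s real moderation pipeline is not public; the blocklist and matching rule below are hypothetical. The point is that a context-blind string match cannot tell solidarity from anything else.

```python
# Hypothetical blocklist mirroring the terms named in the investigation.
# Real moderation pipelines are not public; this matching rule is invented.
SENSITIVE_TERMS = {"black lives matter", "black success"}

def is_suppressed(caption: str) -> bool:
    """Hide a post if its caption contains any blocklisted phrase.

    The check is context-blind: education, solidarity and celebration
    all match the same string and disappear together.
    """
    text = caption.lower()
    return any(term in text for term in SENSITIVE_TERMS)

captions = [
    "Celebrating Black success stories this month",
    "Why the Black Lives Matter march changed my city",
    "Unboxing my new headphones",
]

for caption in captions:
    status = "hidden" if is_suppressed(caption) else "visible"
    print(f"{status}: {caption}")
# Only the unboxing video stays visible: term-level flagging erases
# entire conversations regardless of intent.
```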
4. Filter Bubbles – Pariser (2011)
When your feed only agrees with you — that’s not a coincidence. Pariser (2011) coined the term filter bubble to describe the algorithmic narrowing of content exposure. Platforms curate information based on perceived preferences, creating isolated informational ecosystems.
This filtering encourages confirmation bias and weakens exposure to diverse perspectives (Sunstein 2017). In doing so, it undermines deliberative democracy and makes it harder for marginalised voices to reach broader audiences. Important but less 'engaging' content — such as Indigenous perspectives on climate policy, feminist critiques of power, or trans voices in public debates — may never reach users outside their immediate circles.
These bubbles are especially harmful when combined with misinformation and polarisation. Algorithms reward emotion and engagement over truth or complexity (Tufekci 2017). For marginalised communities, this can mean that even when their stories are told, they’re told inaccurately or made invisible. Breaking these bubbles requires not only critical media literacy but structural changes to how platforms prioritise and present content.
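The feedback loop behind a filter bubble can be sketched in a few lines. This is a toy model under loud assumptions, not a real recommender: the topics, the weighting rule and the ‘user clicks everything’ premise are all invented. Even so, the narrowing it produces is the same in kind as the one Pariser describes.

```python
import random

random.seed(42)  # reproducible toy run

TOPICS = ["sport", "politics", "climate", "music", "tech"]

def recommend(history: list[str], k: int = 5) -> list[str]:
    """Weight each topic by how often the user has already engaged with it.

    This is the whole feedback loop in miniature: past clicks raise a
    topic's weight, which wins it more slots, which attracts more clicks.
    """
    weights = [1 + 5 * history.count(topic) for topic in TOPICS]
    return random.choices(TOPICS, weights=weights, k=k)

history: list[str] = []
for round_number in range(1, 6):
    feed = recommend(history)
    history.extend(feed)  # assume the user engages with everything shown
    print(f"round {round_number}: {sorted(set(feed))}")
# Within a few rounds the feed collapses toward one or two topics.
# The 'bubble' is an emergent property of the loop, not a deliberate choice.
```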
5. Media Convergence – Jenkins (2006)
Content, commerce and activism are now entangled. Jenkins (2006) defined media convergence as the flow of content across multiple media platforms, driven by both corporate and grassroots efforts. Today, a single TikTok video might inform, entertain, sell a product and advocate for change, all at once. For example, a creator like Taylor Cassidy uses TikTok to teach Black history, promote merchandise and share personal stories, sometimes within a single post.
This hybridity creates opportunities, but also contradictions. Andrejevic (2007) highlights how convergence often leads to ‘the work of being watched’, where users — especially marginalised ones — are compelled to self-brand and monetise their identity to remain visible. This digital labour is unpaid, continuous and extractive.
Activism is particularly susceptible to this convergence. While causes can gain traction through viral content, they’re also easily commodified or diluted (Andrejevic 2007). A movement can be rebranded into a marketing trend, then abandoned when it no longer serves commercial interests. The convergence of media and commerce thus invites critical questions about authenticity, longevity and who benefits from visibility.
Understanding all of this is powerful, but what can we do with this knowledge?
Disengaging from social media isn’t the only option. Recognising the systems behind our feeds is essential for navigating platforms more consciously and for advocating for more equitable digital spaces. These platforms aren’t neutral tools — they’re socially constructed environments shaped by profit, power and ideology.
By examining how social media amplifies, suppresses and monetises speech, we gain critical tools to ask: Who built this system? Whose interests does it serve? Who gets to be seen — and who doesn’t?
Understanding these dynamics helps us challenge the systems shaping our digital lives — and imagine new possibilities for more inclusive, just ways of sharing and engaging online.