Social Media Algorithms and Political Polarization in the United States
The first of four articles on social media
Introduction
Political polarization in the United States has sharply increased over the past decade, and many observers point to social media as a key contributing factor. Over 70% of Americans now report getting at least some news via social media, which gives platforms like Facebook, YouTube, and Twitter (now X) enormous influence over public discourse. These platforms have faced intense scrutiny for enabling the spread of harmful, divisive content and for contributing to social polarization and radicalization. Unlike traditional media, social networks personalize the content each user sees using opaque algorithms that prioritize engagement – often measured by clicks, shares, view time, and “likes.” Researchers, whistleblowers, and policymakers increasingly worry that these algorithms exploit human psychology to push users toward more extreme and inflammatory content, reinforcing “echo chambers” and deepening partisan divides. This report analyzes how the engagement-driven design of major social media platforms contributes to political polarization in the U.S., drawing on evidence from peer-reviewed studies, investigative journalism, and individual case studies of algorithm-induced radicalization. It also compares these dynamics to experiences in other countries and concludes with policy recommendations to mitigate the most polarizing effects of social media.
Facebook: Engagement and Echo Chambers
Facebook is the largest social media platform and a prime example of how algorithmic design can foster polarization. Facebook’s News Feed algorithm learned early on that divisive, sensational content keeps users glued to the screen – and it began optimizing for it. In 2018, an internal Facebook presentation bluntly stated, “Our algorithms exploit the human brain’s attraction to divisiveness.” If left unchecked, the algorithm would serve users “more and more divisive content in an effort to gain user attention & increase time on the platform,” according to Facebook’s own researchers. In fact, an internal analysis in 2016 found that 64% of all extremist group joins were due to Facebook’s recommendation tools – chiefly the “Groups You Should Join” and “Discover” features that algorithmically suggested content. In other words, Facebook’s automated recommendations were actively steering users toward extreme political communities.
Such findings were not outliers; they were confirmed in experiments. In 2019, Facebook researchers created a fake account of a conservative American woman (“Carol Smith”) to see how the platform would shape her experience. Within two days, the account began receiving algorithmic recommendations for QAnon conspiracy groups. Even without the user proactively seeking it, Facebook “force[-fed] extremist right-wing content” to the account, which soon turned the feed into “a barrage of extreme, conspiratorial, and graphic content,” as one report described. This internal experiment – later dubbed “Carol’s Journey to QAnon” – provided striking internal evidence that Facebook’s algorithm was contributing significantly to radicalization. Yet, according to leaked documents, Facebook’s leadership was slow to act for fear of reducing engagement metrics or angering certain political groups.
Facebook’s algorithms not only push people toward polarizing groups but also amplify inflammatory content in the News Feed itself. Under the hood, Facebook revamped its feed in 2018 to prioritize content that generates “meaningful social interactions” (comments, shares, reactions). The change was touted as a way to strengthen personal connections, but in reality, internal research found it was “cultivating outrage and hatred”. Content that evoked strong reactions – often incendiary posts rife with misinformation, anger, or fear – reliably drove the most engagement. As a result, “the algorithm accordingly amplifies this content across the platform, rewarding divisive content like misinformation, hate speech and incitements to violence,” one analysis concluded. Facebook’s own integrity team members warned that the new ranking system was rewarding sensationalism and divisive posts over factual, nuanced reporting. In effect, the platform’s design created algorithmic echo chambers: it preferentially showed users content that aligned with their pre-existing views or that provoked strong emotional responses, reinforcing biases and partitioning audiences into isolated ideological communities. This dynamic has been observed not just in the U.S. but globally – in countries like Myanmar, India, and Ethiopia, Facebook was found to have been used to foment division and even incite ethnic violence, often amplified by these same engagement-driven algorithms.
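To make the 2018 ranking change concrete, here is a minimal Python sketch of how an engagement-weighted score can rank an inflammatory post above a calmer one that earned far more plain likes. The weights and engagement counts are invented for illustration; they are not Facebook’s actual values or code.

```python
# Toy illustration only: hypothetical weights, not Facebook's real MSI formula.
from dataclasses import dataclass

# Assumed weights: comments, shares, and non-"like" reactions count for much
# more than a plain like, mirroring reporting on "meaningful social interactions".
WEIGHTS = {"like": 1, "reaction": 5, "comment": 15, "share": 30}

@dataclass
class Post:
    title: str
    likes: int
    reactions: int  # angry/wow/etc. reactions, treated separately from likes
    comments: int
    shares: int

def msi_score(post: Post) -> int:
    """Sum the weighted engagement signals for a post."""
    return (WEIGHTS["like"] * post.likes
            + WEIGHTS["reaction"] * post.reactions
            + WEIGHTS["comment"] * post.comments
            + WEIGHTS["share"] * post.shares)

posts = [
    Post("Measured policy explainer", likes=900, reactions=20, comments=40, shares=30),
    Post("Outraged partisan rumor",   likes=300, reactions=400, comments=250, shares=180),
]

# The rumor wins the ranking (11,450 vs 2,500) despite getting a third of the likes.
for post in sorted(posts, key=msi_score, reverse=True):
    print(post.title, msi_score(post))
```

The point is structural: any scoring rule that pays a premium for comments, reshares, and strong reactions will tend to surface whatever provokes them.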
YouTube: The Recommendation “Rabbit Hole”
YouTube, with its algorithmic video recommendations, has frequently been accused of leading users down “rabbit holes” of increasingly extreme content. The site’s powerful recommendation engine is designed to maximize watch time and keep viewers clicking video after video – a goal that critics say often favors shock value and controversy. Case studies have shown how YouTube’s algorithm can gradually radicalize individuals. A famous example is Caleb Cain, a young American man whose descent into far-right extremism was chronicled by The New York Times. Cain described how he started out watching relatively mainstream political videos, but “he became radicalized by videos made by a community of far-right creators — many of which were put in front of him by YouTube’s recommendation algorithm.” Over time, YouTube’s autoplay and sidebar suggestions pulled him into a vortex of conspiracy theories, anti-immigrant propaganda, and misogynistic content. “I was brainwashed,” Cain later said of the experience.
Empirical research supports these anecdotes. In Brazil, a 2019 academic study by researchers at the Federal University of Minas Gerais found that YouTube’s algorithm actively promoted content from “alt-right” micro-celebrities, correlating with a rise in online radicalization. The study showed a clear “tendency for users to progressively watch more and more extreme content after prompts from the platform’s algorithm.” In practice, someone might begin with an innocuous video on a political topic, but YouTube’s recommendation system – learning from their clicks – could soon suggest slightly more provocative videos, then even more extreme ones. Many have observed this gradual ratcheting-up of extremity, wherein each recommended video is a bit more inflammatory or fringe than the last, in an effort to sustain user interest. Journalists dubbed this the “Alternative Influence Network” or an “alt-right pipeline,” whereby viewers were subtly guided from mainstream content into far-right communities.
It should be noted that not all researchers agree on the degree of YouTube’s algorithmic radicalization effect; some studies suggest the platform has adjusted in recent years to downrank the most egregious “borderline” content. Nonetheless, evidence of YouTube’s role in spreading polarizing and extremist material is strong. A Mozilla Foundation investigation in 2021 (“YouTube Regrets”) collected reports from users in multiple countries and found the algorithm frequently recommended misinformation and hate content; tellingly, about 12% of the video recommendations that volunteers reported were “objectionable or harmful,” including political disinformation and extremist propaganda. And a team at the University of California, Davis performed a systematic audit in 2022, concluding that YouTube’s video recommendations can indeed lead users down a rabbit hole of increasingly extremist political content – especially for users who show an initial interest in such material. Internationally, similar concerns have been raised: in Germany and France, authorities have worried about YouTube’s role in far-right radicalization; in Brazil, the algorithm was reported to heavily push content favoring a hardline political agenda. In short, YouTube’s engagement-maximizing model, left unchecked, tends to favor sensational and polarizing content, which can incrementally nudge susceptible viewers toward extreme ideologies.
Twitter (X): Amplifying Outrage and Partisanship
Twitter – recently rebranded as X – has a different format but exhibits similar algorithmic tendencies. Twitter’s feed algorithm (the “For You” timeline) was introduced to replace the purely chronological feed for most users, using machine learning to boost tweets likely to engage the user. Studies indicate this engagement-weighted algorithm systematically amplifies outrage and partisan animosity. In a 2023 randomized experiment, researchers compared Twitter’s algorithmic feed to a chronological feed for hundreds of users. The results were striking: the algorithmic ranking “tends to amplify emotionally charged content, particularly that which expresses anger and out-group animosity.” The study found that of the political tweets shown by Twitter’s algorithm, 62% contained angry sentiments and 46% evinced out-group hostility, compared to 52% and 38% (respectively) in the chronological feed. By privileging posts that spark quick, intense reactions – often tweets dunking on the opposing party or expressing moral outrage – the platform ends up showing users a more vitriolic version of political discourse than they would see organically.
Crucially, this experiment provided causal evidence that Twitter’s engagement-based ranking polarizes users’ perceptions. After reading tweets chosen by the algorithm, participants had “more positive perceptions of their in-group and more negative perceptions of their out-group, compared to the chronological baseline.” In other words, consuming the algorithm-curated feed made people feel better about their own side and angrier at the opposing side – a classic marker of affective polarization. Notably, Twitter’s algorithm did not simply trap users in one-sided echo chambers; it actually showed more content from opposing viewpoints (out-group) than the chronological feed did. However, the opposing content it showed tended to be the most extreme, anger-inducing examples, leading to backlash and reinforcement of tribal identities. The outcome is a confrontational atmosphere where each side mainly encounters the other side’s most inflammatory statements. This aligns with other research showing that negative partisan content (“outrage tweets”) gets disproportionately amplified on social media because it drives replies and quote-tweets. Indeed, a Yale University study of 12.7 million tweets found that users learned over time to post more moral outrage because outrageous tweets were rewarded with more “likes” and retweets – a feedback loop that Twitter’s algorithms help reinforce.
Since Elon Musk’s takeover of Twitter in 2022 (and its rebranding to X), the platform’s algorithmic transparency has been hotly debated. Some internal data released by the company suggested that the algorithm boosts tweets based on user interactions and might have had a built-in bias toward certain topics or users. Musk at one point made the algorithm’s code public, revealing an emphasis on engagement metrics like replies. Analysts worry that under such an approach, provocative content that elicits high engagement (even if it’s angry reactions) will continue to be elevated, incentivizing politicians and influencers on Twitter to take more extreme stances or post incendiary remarks for viral attention. In sum, Twitter’s design – from trending topics that favor controversial hashtags, to its engagement-based feed ranking – often rewards the loudest and most polarizing voices, contributing to a coarsening of political dialogue online.
Algorithmic Incentives: Sensationalism Over Substance
Across these platforms, a common pattern emerges: content that triggers strong, immediate reactions is favored by algorithms, often at the expense of nuanced, factual content. This creates powerful incentives for content creators – whether professional news outlets, independent influencers, or ordinary users – to craft more sensational and extreme posts. Because social media algorithms reward high engagement, publishers learn that outrage equals attention. The same Yale University study published in Science Advances demonstrated that on platforms like Twitter (now X), “social media’s incentives are changing the tone of our political conversations online”. Users in the study “learned to express more outrage over time because they were rewarded” with likes and shares for doing so. In practice, a person who might initially post a measured political opinion sees it go unnoticed, while a more angrily worded post on the same topic gets a flood of reactions – thus positively reinforcing the outraged behavior. Over thousands of micro-iterations, the overall tenor of online political content shifts toward more extreme language and polarizing claims.
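The feedback loop described above can be captured in a short toy simulation. The numbers below are made up and this is not the Yale study’s model: a poster who simply keeps whichever tone earned more likes last week will, under a reward curve that pays more for anger, drift steadily toward outrage.

```python
# Toy simulation with invented numbers; illustrates the reward loop, nothing more.
import random

random.seed(0)

def expected_likes(outrage_level: float) -> float:
    """Assumed reward curve: angrier posts draw more engagement on average."""
    return 10 + 90 * outrage_level

outrage = 0.1  # starting tone: 0 = calm, 1 = maximally outraged
step = 0.05

for week in range(20):
    # Try a slightly calmer and a slightly angrier tone, with noisy feedback.
    calm_likes = expected_likes(max(outrage - step, 0.0)) + random.gauss(0, 5)
    hot_likes = expected_likes(min(outrage + step, 1.0)) + random.gauss(0, 5)
    # Keep whichever tone the audience rewarded more this week.
    outrage = min(outrage + step, 1.0) if hot_likes > calm_likes else max(outrage - step, 0.0)

print(f"tone after 20 weeks of feedback: {outrage:.2f}")  # typically ends near 1.0
```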
This incentive structure affects not just individual behavior but also journalistic and political institutions. Mainstream media organizations, for example, face pressure to gain visibility on algorithm-driven feeds, which can lead to more clickbait headlines or partisan framing to drive virality. Meanwhile, fringe websites and hyper-partisan pages that specialize in sensational or misleading content often outperform sober, fact-based reporting on engagement metrics. A well-known MIT study found that false news spreads “farther, faster, deeper, and more broadly” than true news on social platforms, especially in the political domain. False or sensational stories are often more novel and emotionally charged – fueling curiosity, fear, or anger – which makes people more likely to share them impulsively. The algorithms, seeing those shares and clicks, then amplify that content further. As a result, misinformation and extreme narratives can circulate widely before corrective voices even appear. The ease of sharing also means users can form communities around these incendiary narratives, reinforcing each other’s beliefs (the echo chamber effect). Over time, content creators on platforms like YouTube or Facebook may pivot their style to cater to these engagement rewards: for instance, some YouTubers have admitted to adopting increasingly extreme personas or conspiracy-laced content because that’s what the algorithm promoted to their audiences, as described in The New York Times piece mentioned earlier.
Internal documents from Facebook underscore this dynamic. One Facebook whistleblower, Frances Haugen, testified that the company’s algorithms “amplify divisive content” by design – a system known as “engagement-based ranking.” The Verge, reporting on a 2018 internal Facebook presentation, noted that Facebook did not deliberately set out to promote hate or falsehoods, but that “the company’s leadership knows its algorithms exploit the human brain’s attraction to divisiveness”. Content that makes users react (especially with anger or excitement) simply keeps them on the platform longer and generates more revenue, creating a business incentive to let this cycle continue. As The Wall Street Journal’s Facebook Files reporting revealed, executives were repeatedly warned that algorithmic tuning was driving polarization – one 2018 memo flatly stated that Facebook’s feed was dividing people – yet they resisted fixes that might reduce engagement metrics. The end result is a system that rewards outrage over dialogue, and sensationalism over substance. The voices that rise to the top of feeds are often those that make the most extreme statements or emotional appeals, which pressures even moderate figures to amp up their rhetoric to be heard.
Notably, these patterns are not unique to the United States. Around the world, wherever engagement metrics rule, sensational content thrives. Researchers have documented parallel phenomena on newer platforms like TikTok. For example, an analysis by the Institute for Strategic Dialogue found that TikTok’s algorithm (circa 2020) was also promoting far-right and extremist accounts to users in multiple countries. Comparative studies have likewise documented engagement-driven amplification of divisive and extreme content across Europe, Australia, and Asia. In authoritarian contexts, such amplification can be exploited deliberately: state-sponsored trolls or propagandists craft emotionally charged disinformation to splinter societies, knowing the algorithms will help it go viral. Even in stable democracies, the net effect of algorithmic sensationalism is a coarser, more tribal public sphere, as people consume divergent information feeds that validate their side and demonize the other. This creates fertile ground for demagogic politics and weakens the common factual basis needed for healthy democratic debate.
Illustrative Case Studies of Radicalization
The abstract concepts above become very tangible in the stories of individuals who fell down social-media-fueled radicalization pipelines. We’ve already mentioned Caleb Cain’s journey via YouTube and the “Carol” experiment on Facebook, but many real-life cases echo similar patterns. For example, the man who showed up armed at a Washington, D.C. pizza shop, convinced it was a front for a Democratic pedophile ring, had adopted many of his conspiracy theories by conversing with like-minded believers on social media. Even after it became clear there was no sex dungeon in the basement of the pizza shop – face-to-face evidence disconfirming his online conspiracy theories – he was unable to escape the conspiracy community, and years later he was shot dead after pulling a gun on an officer. These cases are not unique. Hundreds of the individuals charged in the January 6th Capitol attack showed evidence of heavy engagement with extremist content on platforms like Facebook, YouTube, Parler, and Twitter in the preceding months.
Another case comes from the realm of QAnon, the sprawling pro-Trump conspiracy theory. QAnon found its earliest adherents on fringe message boards, but it went mainstream via Facebook, YouTube, and Twitter, which allowed it to rapidly recruit from the general population. One older woman, for example, joined a Facebook wellness group in search of alternative health tips; within weeks, Facebook’s algorithm suggested QAnon-style conspiracy groups that had co-opted wellness and anti-vaccine rhetoric. She followed the suggestions and soon became a vocal conspiracy theorist, which estranged her from her family. Twitter also played a role by elevating QAnon hashtags into trending topics, and YouTube served up countless “Q” explainer videos to curious users. The cross-platform nature of this radicalization shows how systemic the problem is – not just one platform’s issue, but an ecosystem in which each algorithmic feed can reinforce the others’ effects.
Internationally, consider Myanmar as an extreme case study. Facebook was so dominant in Myanmar that it was practically synonymous with the internet for many citizens. In the lead-up to the Rohingya crisis (2016–2017), Facebook’s platform was inundated with hate-filled posts against the Rohingya Muslim minority – posts that spread unchecked rumors and were repeatedly recommended to new audiences. An independent investigation later found that Facebook’s algorithmic systems had amplified calls to violence in Myanmar, directly fueling what investigators described as a possible genocide. One Burmese nationalist group’s Facebook page, rife with ethnic slurs and dehumanizing memes, gained huge algorithmic traction through shares and comments, helping to normalize hate speech on the platform. This illustrates how quickly online polarization can translate into real-world harm when platform oversight is lax. Similar patterns have been documented in Sri Lanka, Ethiopia, and India, where viral false rumors on social media (often amplified by automated recommendations) incited mob violence or inter-communal tensions. These international cases serve as a grim warning: political polarization accelerated by social media is not just about people disliking each other more – in fractured societies it can escalate into unrest or violence.
Policy Recommendations to Mitigate Polarization
Addressing the polarizing effects of social media will require action on multiple fronts – from platform design changes to regulatory measures and user-focused interventions. Below is a set of policy recommendations aimed at curbing algorithm-driven polarization:
● Reform Engagement Algorithms: Platforms should adjust their content ranking algorithms to de-prioritize extreme sensationalism and reduce the weight given to outrage-based engagement metrics. As one group of scholars suggests, companies could “stop optimizing for certain engagement signals (such as comments, shares, or time spent) in sensitive contexts” – for example, political content – and instead monitor for when engagement-based ranking is disproportionately amplifying divisive posts. In practice, this might mean capping the boost an emotionally charged post can get, or introducing friction (like a prompt asking “Are you sure you want to share this article? It contains unverified claims.”). The goal is to break the automatic reward cycle for rage-inducing content (a rough code sketch of this idea follows the list below).
● User Control and Choice: Give users more control over how their feeds are curated. A straightforward step is to offer algorithm-free chronological feeds as an easy option on all platforms. The European Union’s new Digital Services Act (DSA) now requires large social platforms to provide a recommender system “not based on profiling,” such as a basic chronological feed. U.S. platforms could voluntarily implement this globally. Additionally, users should be able to customize their content ranking preferences – for instance, sliders to dial down “political content” or to ensure a diversity of viewpoints. Increasing user autonomy can lessen feelings of being trapped in an algorithmic bubble.
● Transparency and Audits: Implement transparency measures that allow researchers and regulators to inspect how algorithms are operating and what content is being amplified. Platforms could publish regular reports on the most widely viewed or recommended content (some have started doing this) and the prevalence of harmful content (Facebook’s Community Standards Enforcement Reports are an example, though they could be more granular). Independent audits are crucial: companies might monitor for “conflict-relevant side effects” of their algorithms, as experts have urged, and share data with academic researchers to verify whether the algorithms are steering users toward extremes. Regulators should mandate data access for vetted research on algorithmic impacts.
● Promote Diverse and Cross-Cutting Content: To counteract echo chambers, platforms can tweak algorithms to promote cross-cutting content that a user might not typically see but which is highly relevant to civic dialogue. For example, Facebook internally toyed with a project called “Common Ground” that would have injected more non-partisan, bridge-building content into feeds. While that project was shelved, the idea remains valid. YouTube could ensure that if a user watches several hyper-partisan videos in a row, the next recommendation includes a reliable news video providing a different perspective. Similarly, Twitter (X) could adjust its trending topics and “Who to follow” suggestions to highlight a mix of viewpoints. The intent is not to “force” anyone to change their beliefs, but to at least make people aware of other narratives and reduce the demonization of the out-group.
● Strengthen Content Moderation of Extremes: Content moderation is a blunt tool, but still necessary. Platforms should continue removing content that explicitly violates rules (hate speech, direct incitements of violence, etc.), as such content is often the most polarizing and harmful. More subtly, they can down-rank or label provably false information (especially that which maligns groups or election outcomes) to dampen its spread. While moderation alone can’t catch everything and has pitfalls, it does set baseline norms. Efforts like third-party fact-checking partnerships (used by Facebook) and community-driven moderation (Reddit’s model) can complement algorithmic solutions by flagging divisive misinformation early.
● Digital Literacy and User Empowerment: On the demand side, educating users is key. Schools, libraries, and public institutions should teach basic digital media literacy, including how algorithms work and the psychological tricks of social media. If users recognize that “the algorithm is trying to provoke me with this outrageous post,” they may be more critical about engaging with it. Communities could promote campaigns to “slow down” online – encouraging people to verify news before sharing and to engage respectfully with different opinions. Some civil society groups are already helping people who have been “deep in” online echo chambers (e.g., support forums for families of QAnon believers) – these should be supported as they help individuals exit from extreme online rabbit holes.
● Regulatory Measures and Oversight: Government policy can create accountability. In the U.S., lawmakers are considering a range of proposals: from reforming Section 230 protections to remove immunity for algorithmically amplified content in certain cases, to requiring algorithmic impact assessments for large platforms. Any regulation must be careful to balance free expression with harm reduction. One promising approach is to treat social media algorithms the way we treat other products with externalities – require risk assessments and mitigations. For instance, under Europe’s DSA, the largest platforms must assess systemic risks (like polarization) and report on how they are addressing them. U.S. regulators (or state attorneys general) could similarly pressure companies through investigations and lawsuits – indeed, by 2023 over 40 states had sued Meta alleging its algorithms harm young people’s mental health and amplify harmful content. Such legal pressure can prod platforms to prioritize safety over short-term engagement. Finally, creating a Digital Regulatory Agency, or empowering the FTC to oversee algorithmic transparency, could institutionalize oversight in the long run.
● Conflict-Sensitive Platform Design: Borrowing from peacebuilding fields, platforms could integrate “conflict sensitivity” into their design philosophy. This means anticipating how features might inflame social tensions and adjusting accordingly. For example, algorithms might detect when online discourse around an election or a hot-button issue is reaching a fever pitch and automatically pause recommending any highly polarizing content on that topic for a cooling-off period. Platforms could also work with conflict transformation experts to design interventions that encourage dialogue – such as prompting users who engage in a heated thread with a reminder to consider factual sources or the human on the other side. While experimental, these ideas push tech companies to take responsibility for the broader societal impact of their design choices, not just engagement metrics.
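As a rough sketch of the first recommendation above (and of the friction idea mentioned there), the Python snippet below caps how much outrage-driven engagement can add to a post’s ranking score and adds a confirmation prompt before resharing flagged content. The weights, cap, and prompt wording are hypothetical; no platform’s real system is being described.

```python
# Hypothetical mitigation sketch: cap the outrage-driven boost and add friction.

def ranking_score(likes: int, angry_reactions: int, shares: int,
                  outrage_cap: float = 50.0) -> float:
    """Engagement score in which the outrage-driven portion is capped."""
    base = likes * 1.0
    outrage_boost = min(angry_reactions * 5.0 + shares * 10.0, outrage_cap)
    return base + outrage_boost

def share_with_friction(post_is_unverified: bool) -> bool:
    """Ask the user to confirm before sharing content flagged as unverified."""
    if post_is_unverified:
        answer = input("This article contains unverified claims. Share anyway? (y/n) ")
        return answer.strip().lower() == "y"
    return True

# With the cap in place, the inflammatory post from the earlier example can no
# longer dwarf the widely liked but calmer one.
print(ranking_score(likes=900, angry_reactions=20, shares=30))    # 950.0
print(ranking_score(likes=300, angry_reactions=400, shares=180))  # 350.0
```

Capping, rather than zeroing out, the outrage signal is one way to preserve genuine popularity information while removing the runaway premium on provocation.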
Implementing these measures will not be easy – there are trade-offs between curbing polarizing content and preserving open expression, and between user autonomy and platform curation. However, the status quo – in which algorithms “prioritize distribution based on engagement” without regard to societal division – is proving increasingly dangerous for democracy. The recommendations above seek to realign the incentives and structures of social media toward healthier discourse.
Conclusion
Social media is not the sole cause of America’s rising political polarization, but it undeniably acts as an accelerant. The most influential platforms – Facebook, YouTube, Twitter/X – have built algorithmic systems that, in their pursuit of engagement and growth, inadvertently (or sometimes knowingly) promote more extreme and divisive content. Ample evidence from internal documents, independent research, and personal stories shows how these algorithms can push people into echo chambers, amplify their worst impulses, and even radicalize them over time. The issue extends beyond any one country: from the U.S. to Europe to Asia, wherever social media penetrates, similar patterns of algorithm-fueled polarization have emerged, adapted to local contexts. Yet, as in other eras of disruptive technology, society is now developing correctives. There is growing consensus that engagement-maximization at all costs is an irresponsible design philosophy. A combination of smarter platform policies, informed regulation, and greater public awareness can help shift social media toward a model that connects people without cynically preying on their divisions.
In the coming years, success will be measured by whether online discourse becomes more civil and fact-based or whether our digital public squares further splinter into warring camps. The stakes are high. A democracy’s health relies on shared truths and the ability of citizens to disagree without dehumanizing one another. Social media can be harnessed to strengthen these democratic foundations – but only if we reckon with and reform the algorithmic architectures that have so far rewarded outrage and polarization. By implementing the kinds of changes outlined in this report – from transparency and user choice to algorithmic tweaks that value long-term civic integrity over short-term clicks – we can begin to mitigate the polarizing effects of social media. The alternative is to continue down the current path, where the fabric of shared society grows ever more frayed by viral indignation. For the sake of our democracy, the time to “fix the feed” is now.
Notes: This is my own opinion and not the opinion of my employer State Street or any other organization. This is not a solicitation to buy or sell any stock. My team and I use a Large Language Model (LLM) aided workflow. This allows us to test 5-10 ideas and curate the best 2-4 a week for you to read. Rest easy that we fact check, edit, and reorganize the writing so that the output is more engaging, more reliable, and more informative than vanilla LLM output. We are always looking for feedback to improve this process.
Additionally, if you would like updates more frequently, follow me on x: https://x.com/cameronfen1. In addition, feel free to send me corrections, new ideas for articles, or anything else you think I would like: cameronfen at gmail dot com.