Final Thesis Anjelica Singer Revised
DEPARTMENT OF JOURNALISM
ANJELICA N. SINGER
SPRING 2022
A thesis
submitted in partial fulfillment
of the requirements
for a baccalaureate degree
in Journalism
with honors in Journalism
Russell Eshleman
Associate Teaching Professor
Head of the Department of Journalism
Honors Advisor
Cynthia Simmons
Associate Teaching Professor
Bellisario College of Communications
Faculty Reader
Social media algorithms have detrimental consequences that affect both the individual and
society. The negative effects result from the algorithmic, personalized, curated news feed on
social networks that insulates social media users into filter bubbles with like-minded individuals.
Filter bubbles erode the fundamental notion of the marketplace of ideas and create extremist
and polarizing views. Social media algorithms also automatically place users in niche content areas based on engagement, and depending on one’s intention or subconscious interest, this can have detrimental effects, particularly on mental health.
My work within this thesis contains a literature review of the negative effects of
algorithms and the components of those algorithms, such as the inner workings of filter bubbles.
I would especially like to thank my thesis supervisor, Patrick Plaisance, for helping make this thesis possible. Professor Plaisance’s lessons within his Media Ethics course inspired me to research the negative effects of social media algorithms. His help and guidance have led me to the correct research areas, and his research on the ethics of social media and how it shapes our journalistic values informed much of this work.
I attribute the completion of my thesis to the love and support from both of my parents,
Mark and Heidi Singer, who cheered me along the way and provided me with the confidence to
write and complete my thesis. I would also like to thank Steve Homza for always being there for
me during the late nights of researching, writing, and every up and down of this process.
Chapter 1
As social media users scroll through their social media feeds, they will soon notice that the content is personalized. While this personalization may seem convenient for social media users, it can quickly become dangerous and have negative effects. Social media algorithms—computer-programmed, mathematical rules that gauge how users interact online and dictate how a sequence is operated (O’Brien, 2022)—learn more about social media users
through their interactions. The user may find themselves eventually trapped in a bubble. Once a
user is in the bubble, it is difficult to “pop.” Filter bubbles are personalized algorithms that
narrow and personalize content to a user’s engagement over time (Dahlgren, 2021). Eli Pariser
originally coined the term filter bubble in 2009 (Pariser, 2011), defining it as a “unique universe of information for each of us,” except this universe is online. Filter bubbles are “devoid of attitude-challenging content,” in which “individuals only see posts that they agree with.”
The personalized algorithm narrows the type of information a social media user
consumes in social networks based on their searches, “likes,” and comments—otherwise known
as their engagement. When a user clicks on external links to view particular articles or websites,
the algorithm understands that choice as a “signal” of interest, as Pariser mentioned in his 2011
TED Talk. Yet some platforms’ algorithmic analysis stretches beyond a user’s searches, likes, and comments, and it goes beyond social media platforms: nearly every move one makes on a computer can be tracked. According to Jeff Chester, the executive
director of the Center for Digital Democracy, “They’re tracking where your mouse is on the
page, what you put in your shopping cart, what you don’t buy. A very sophisticated commercial
surveillance system has been put in place” (Plaisance, 2014). The algorithm, especially on
Facebook, tracks users’ “friends” on the network, any group they join, and the pages they like
and follow (Hagey & Horwitz, 2021). Because of the individualized algorithm, a user is more
likely to see increasingly similar content to the original “signal”—the original search, like, or
comment. Thus, one’s entire feed could eventually contain homogenous content. In other words,
the social media user is trapped in a filter bubble, which increases in size with time. Pariser
defines this as being “trapped” in a loop or a bubble. The Daily Dish (2010) defined a filter
bubble as “a personal ecosystem of information that’s been catered by these algorithms to who
they think you are.” Pariser (2011) originally defined this term during the rise and creation of
social media.
“The new generation of Internet filters looks at the things you seem to like – actual things you’ve done, or the things people like you like – and tries to extrapolate. They are prediction engines, constantly creating and refining a theory of who you are and what you’ll do and want next. Together, these engines create a unique universe of information for each of us – what I’ve come to call a filter bubble – which fundamentally alters the way we encounter ideas and information” (Pariser, 2011).
Filter bubbles are also described as “algorithmic filtering, which personalizes content
presented on social media” (Zimmer et al., 2019). Filter bubbles are typically paired with the
term “echo chambers,” which has a different connotation but the same denotation: “a
phenomenon in which a person is exposed to ideas, people, facts, or news that adhere to or are
consistent with a particular political or social ideology” (Lum, 2017). However, the slight
difference between echo chambers and filter bubbles is that filter bubbles primarily describe the algorithmic curation of online content, whereas echo chambers are created from a psychological perspective of human behavior that exists online and
offline (Thwaite, 2017). In other words, echo chambers are psychological preferences that
coincide with confirmation bias and have existed since the creation of human thought. While
confirmation bias and echo chambers can have a role in daily life regarding what broadcast or
print news one chooses to consume, filter bubbles are limited to the information social media
users consume—although, through confirmation bias, one can put themselves in an echo
chamber with any type of media consumption. Confirmation bias is the psychological term for the “seeking or interpreting of evidence in ways that are partial to existing beliefs, expectations, or a hypothesis in hand” and is “the best known and most widely accepted notion of inferential error.”
The filter bubbles social media users find themselves in go beyond the posts they see from friends and family—they affect online advertisements, too. A great deal of the data
collected from users is used toward commercial exploitation for profit (Etter & Albu, 2021). The
data that algorithms collect for advertisements are justified as an exchange of goods, and it is
precisely how social networks and search engines, like Google, take in revenue. Take Google’s email service, Gmail, as an example. Google can scan emails for keywords and sell
the information gathered to advertisers because, in exchange, Google is providing a free e-mail
account (Plaisance, 2014). Likewise, because Facebook is free, it turns users’ data the algorithm
collected into micro-targeted advertisements based on engagement and search history. “If you’re
not paying, you’re not the customer—you’re the product,” said Business Insider reporter Ben
Gilbert (2018). One will also see advertisements on Instagram, a similar social media network
owned by Facebook, catered to the data the algorithm collected from engagement. The Facebook Files, internal documents leaked to The Wall Street Journal in 2021, expose Facebook’s immorality. The engagement sold to advertisers accounts for the bulk of Facebook’s
revenue, $86 billion (Nover, 2021). While this exchange may seem harmless and justified
because of the free content social media users receive, social media users, and all users of the
World Wide Web, become further locked in a filter bubble that shapes our perception. For
example, if one searches or “likes” a particular political party online, it is more likely that they
will only see advertisements of that party—insulating them further from other viewpoints.
“While much of our concern focuses on such privacy claims, some argue we should be
worried about the opposite problem: alienation through the exploitation of information for
purposes that are not intended,” said Patrick L. Plaisance (2014), Penn State’s Don W. Davis Professor in Ethics.
Chapter 2
Filter bubbles limit the diversity of information one is presented with online, segregating users from different ideologies
(Kitchens et al., 2020). The algorithmic filter bubbles contribute to and align with confirmation
bias, which is when individuals actively seek information that will only support and reaffirm
their own beliefs—even if the information is not entirely factual. When individuals are in an
insular filter bubble, they will be exposed to information from “like-minded individuals” who are
also in the same filter bubble, and previously held beliefs and suspicions become confirmed
(Kitchens et al., 2020). Because victims of filter bubbles find themselves in online communities
with people who share similar views and diverse conversation is limited, the concept of the marketplace of ideas is undermined.
The marketplace of ideas, rooted in the First Amendment and John Stuart Mill’s theory of
free speech, “refers to the belief that the test of the truth or acceptance of ideas depends on their
competition with one another and not on the opinion of a censor, whether one provided by the
government or by some other authority” (Schultz & Hudson, 2017). Mill believed that competing
ideas would help separate false information from the facts. However, competition dwindles due to
filter bubbles, and social media users become dependent on the opinions curated by a censor—
the algorithm. Without competing ideas, victims in filter bubbles are subjected to developing
extremist views and the potential development of mob mentality, created by the “influence of a
large group” (Brennan, 2021). In this case, the “large group” comprises an immeasurable number
of like users who are likely to constantly reassure one another if one’s views are within the same
filter bubble. In the United States of America, a country that values freedom and individualism, relies on the free flow of discussion within the marketplace of ideas (Schultz & Hudson, 2017), and preaches against censorship, users are nonetheless algorithmically clumped together in filter bubbles of like-minded beliefs without a choice. Filter bubbles then foster extreme political ideologies and beliefs through the continuous reassurance of posts and articles that align with users’ existing views.
In 2018, Facebook Co-Founder and CEO Mark Zuckerberg changed the platform’s news feed algorithm. The decision was met with outrage from reporters who had relied on social media
platforms, like Facebook, as their primary way of distributing their news. The new algorithm
would highlight posts from close friends and family and decrease the number of posts one sees
from public brands and news media. In addition, according to The Wall Street Journal’s
“Facebook Files,” the “news feed” feature within the network, the area that is mainly affected by
the algorithm and becomes personalized, “accounts for the majority of the time Facebook’s
nearly three billion users spend on the platform” (Peterson-Salahuddin & Diakopoulos, 2020).
Adam Mosseri (2018), the then-Facebook Head of News Feed, said in a press release:
“As we make these updates, Pages may see their reach, video watch time, and referral
traffic decrease. The impact will vary from Page to Page, driven by factors including the
type of content they produce and how people interact with it.”
Since it is not uncommon for people to have like-minded ideologies with their family,
and especially their close friends, the 2018 Facebook algorithm has led social media users to
become further trapped in filter bubbles that confirm their biases, compounding the original engagement-based algorithms. In addition, the new algorithm can hinder social media users from seeing
news and information from reputable sites, and it encourages the spread of misinformation by giving posts from family and friends greater emphasis in the “news feed.” When friends and family share news on social media, users will likely devote more attention to it, since information from a trusted source (and people are more likely to trust their friends and family) is seen as more credible.
Whether a user will believe information is correct or not does come down to trust. With
trust, one is less likely to question the information presented to them (Hutchens et al., 2021).
Intertwined with confirmation bias, those in filter bubbles are more likely to trust the information
they see from friends and family, since it is human nature to seek and interpret information that confirms existing beliefs.
When social media platforms first materialized, there was a mix of expectations. One of
the appealing factors was the personalization of content and the close social interactions resulting from it:
“Consumers were assumed to be exposed to content that felt more personal and within
their domain of interest. If in the past, for example, national newspapers did not delve
into local matters although local newspapers did, social platforms were meant to be the
ultimate source for relevant and personalized content.” (Berman & Katona, 2020)
By contrast, there were users like Eli Pariser who did not want to be trapped in a
personalized bubble. The previously conceived idea was that the internet and social media should
encourage the spread of opinions and viewpoints, not limit them (Bozdag & van den Hoven,
2015).
In his 2011 TED Talk, “Beware of Online ‘Filter Bubbles,’” Pariser said that it is dangerous not to have our worldview broadened as a result of personalized algorithms. Pariser recalled that when Zuckerberg was questioned about the news feed, Zuckerberg responded, “A squirrel dying in front of your house may be more relevant to your interests right now than people dying in Africa.” That quote specifically struck Pariser because just as social
media was on the rise in the late 2000s, he felt like he would have a close connection to the
entire world while living in a rural area—but that has not been the case. Pariser has found that because of filter bubbles, social media users who are victims of this algorithm have decreased curiosity and motivation to learn, fewer surprises, decreased creativity, innovation, and exploration, and a decreased diversity of ideas and understanding of the world
(Dahlgren, 2021). Instagram, a social network site similar to Facebook and owned by Facebook, has a ranking algorithm with the purpose of showing users “the moments you care about first.” The algorithm, therefore, pre-judges what a user might “care about” based on engagement. Social media users miss 70 percent of the feed content that does not coincide with their interests.
Despite Pariser aligning more with progressive politics, he said, in his TED Talk, that he
has still tried to broaden his political views and hear what people of the opposite ideology are
thinking. The only problem was that Pariser noticed that conservatives slowly stopped appearing
on his Facebook newsfeed because he was only clicking on his liberal friends’ posts and links, as
their similar political beliefs were what he was subconsciously and naturally attracted to. Thus,
Pariser was trapped in a political filter bubble, which only further divided the United States of
America and slowly helped undermine democracy. The consequence is that the individual is
therefore deprived of at least some of their political autonomy for the sake of the social media
algorithm. Michigan State political scientist Arthur Welzer said, “To the extent that we view
ourselves as helpless pawns of an overarching and immovable force, we may renounce the moral
and political responsibility that is crucial for the good exercise of what power over technology we do have” (Plaisance, 2014).
Google also runs its search engine through a tailored algorithm. This individualized
algorithm is perhaps more dangerous than social media algorithms—since Google is a place
where internet users will visit to specifically learn and search for information and news, besides
just connecting with family and friends on social media. Pariser (2011) said that if people search
for the same topic on Google, it is likely that different people will receive different search results
because of their past searches and the websites they previously clicked on. Google’s
determination of one’s location even plays a role in the results one will see (Normark &
Oskarsson, 2018). Even if one is logged out of their Google account, the search engine’s signals
would still find a way to categorize the individual and determine the type of computer used and
the browser that one is on. According to Pariser, those categorizations will help determine and
shape what search results will be presented. Google, however, claims its “automated systems”
use language and location, including, of course, the language posed in the Google search and the
expertise of the sources and the content of pages, to help offer “relevant and reliable” sources.
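The kind of logged-out categorization described above can be sketched as well. The fragment below is purely illustrative Python: the signal names, category buckets, and popularity boosts are invented and do not represent Google’s systems.

```python
# Hypothetical sketch of personalization from passive signals only: language,
# rough location, and device type are combined into a bucket, and results that
# performed well for that bucket in the past are ranked higher. All values
# below are invented for illustration.

def categorize(request: dict) -> str:
    """Bucket an anonymous request using passively available signals."""
    return f'{request.get("language", "en")}|{request.get("region", "unknown")}|{request.get("device", "desktop")}'

def rank_results(results: list[dict], category: str) -> list[dict]:
    """Boost results previously popular with users in the same bucket."""
    popularity = {("en|US-KY|mobile", "local_news"): 2.0}   # hypothetical history
    return sorted(
        results,
        key=lambda r: r["base_relevance"] + popularity.get((category, r["kind"]), 0.0),
        reverse=True,
    )

category = categorize({"language": "en", "region": "US-KY", "device": "mobile"})
ranked = rank_results(
    [{"kind": "local_news", "base_relevance": 1.0},
     {"kind": "national_news", "base_relevance": 1.5}],
    category,
)
print([r["kind"] for r in ranked])   # two users, same query, different order
```

Even without an account, two users issuing the same query can therefore receive different orderings, which is the point the passage above makes.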
Eric Schmidt, the former CEO of Google, said about his company’s algorithm, “It will be very hard for people to watch or consume something that has not in some sense been tailored for them” (Jenkins, 2010). Google claims its mission is to “organize the world’s information and
make it universally accessible and useful” and yet, the company claims, “That’s why Search
makes it easy to discover a broad range of information from a wide variety of sources”
(Thompson, 2019). It can be argued that the information one receives on Google cannot
technically be considered “universal,” if the results vary from user to user. Even though
websites with tailored algorithms may have the well-intended purpose of showing users what
they “would want to see” and what aligns with their interests, these algorithms are detrimental,
because internet users may be consuming information that they do not necessarily need to see—
or, even worse, the algorithms may hide important information that one needs to see in order to be a well-informed citizen. “Instead of a balanced information diet, you can end up surrounded by information junk food,” Pariser (2011) said in his TED Talk.
Because our society has become so reliant on technology and social media to become
“connected” with the rest of the world, it is no surprise that social media dictates how we
perceive the world, to the point where its power harms our moral agency and constrains any agency
we may have—as the filter bubbles predetermine what we value. In other words, we become
ignorant of the power social media and technology hold over us.
Over time, customized news feeds that only reflect personalized interests and beliefs
divide the nation, especially politically. When social media users are constantly reassured of
their beliefs within their filter bubble, they adopt the mentality that their views are the only views
and that their views and beliefs are the only correct way of thinking, creating an “us versus
them” mentality, even outside of the digital world. Online filter bubbles then tend to mimic
reality and translate into everyday lives, especially because social media have now become some
of the main ways individuals communicate with each other. This digital avoidance of online
posts that one does not agree with translates into a natural reflex to avoid anything or anyone
who disagrees with pre-conceived beliefs. “Your application for credit could be declined not on
the basis of your own finances or credit history, but on the basis of aggregate data—what other
people whose likes and dislikes are similar to yours have done” (Andrews, 2012).
Filter bubbles can leave users “impervious to facts,” which “can erode our ability to relate to one another and recognize shared
democratic interests” (Plaisance, 2014). CNN’s Chris Cillizza noticed this phenomenon on
Twitter, where extremists’ Tweets were receiving heightened engagement, which may have been
“skewing political parties away from reality” (Leetaru, 2019). A study conducted by Anna
Normark and Rebecca Oskarsson (2018) found that the most profound effect of filter bubbles is
the fostering of opinions and confirmation bias that can undermine democracy.
When dialogue between differing ideologies is generated on social media, such as in the comment section of a neutral journalistic source covering a controversial, politically debated topic, the comments are typically filled with personal attacks against those with clashing
beliefs. The ability to be anonymous on social media also creates a hostile environment, as social
media users can express their thoughts without a filter or identity. The importance of having a
marketplace of ideas for the foundation of our democracy and the need for public discussion is
apparent more than ever, as online discussions have turned into hateful comments with users
guarded against considering other points of view because of conditioning from confirmation bias
and filter bubbles. This phenomenon is seen on social media sites like Facebook, mainly because
individualized expression is encouraged. A so-called healthier social media design, like Tumblr’s, aims to let users express their own opinion without creating a hostile environment. David Karp, the developer of Tumblr,
believed the “reader-comment section” and the “reply section” on social media “can bring out
the worst in people” (Plaisance, 2014). Karp’s solution allowed users to comment on a post only
through a re-blog. In other words, the Tumblr user would share a post to “comment” on it, but
their comment would only appear on their page. While this helps limit the divisiveness within
social media, as seen through comments on Facebook, this method still seems to contribute to the
erosion of the marketplace of ideas and further insulates users into filter bubbles.
It is also possible that social media users will comment on posts and articles that
would not typically fall within their filter bubble, as anyone can comment on a public profile —
to possibly defend their viewpoint or attack others—and anyone can search a publicly posted
news article. With confirmation bias and selective exposure, however, it may be unlikely that
such a user would even search for an article that would challenge their beliefs or fall outside of their filter bubble.
Aligning with confirmation bias and selective exposure—"the tendency for people both
consciously and unconsciously to seek out material that supports their existing attitudes and
opinions and to actively avoid material that challenges their views” (Chandler & Munday,
2011)— social media not only creates ideological polarization, but it also cultivates fake news.
As the curated, personalized feed decreases the chances of a user coming across information
outside of their ideological bubble and current beliefs, one might feel more inclined to share false
information online simply because such incorrect information falls within their beliefs. They
trust the source enough not to check the facts. The effect of polarization has been more apparent
within politics in recent years, and it relates to the phenomenon of the consumption of fake news
(Spohr, 2017). However, there is a difference between a user sharing fake news because one
believes the information contained in a post or news article and one sharing fake news with full
awareness of the falsity of the content. Yet, whether or not the sharer’s purpose is to spread fake news intentionally, its content could still be seen by those in a filter bubble with congruent ideology.
Fake news is defined as “phony news stories maliciously spread
by outlets that mimic legitimate news sources” (Zimmer et al., 2019). A result of fake news is the
spread of misinformation (the unintentional spread of fake news) and disinformation (the
conscious, malicious spread of fake news). One would spread fake news, or partake in deception,
to benefit oneself or even a political party. Similarly, another widely used term,
“misperceptions,” is defined as “beliefs that are inconsistent with the best available evidence, including both the acceptance of false claims and the rejection of true claims” (Garrett & Bond, 2021).
Closely looking at the 2016 Presidential Election, a study collected by YouGov examined
how partisan media use, like Fox News, a primarily conservative news source, and MSNBC, a
primarily liberal news source, is related to the consumers’ trust in a media source and the
misperceptions partisan media can cause. YouGov’s study shows that trust and the use of
partisan media lead to misperceptions that favor the consumer’s “in-party” or ideology.
Consequently, the extent and frequency to which partisan media is trusted or is consumed relate
to the number of misperceptions one believes. One of the purposes of spreading false claims
within partisan media is to either favor their political party or villainize other political parties.
The relationship between how much the user trusts and uses partisan media runs parallel to their
active avoidance of out-party media (media outside of the consumer’s political party), which continues to reinforce their existing views.
A study conducted by Guess, Nyhan, and Reifler found that selective exposure to social
media through filter bubbles, specifically on Facebook, was the primary source of fake news
consumption (Guess et al., 2018). This was, in part, because social media users were not fact-
checking the news they were consuming. However, it was also noted that through this study, this
group of social media users, who were trapped in a filter bubble with other like-minded
individuals, were more likely to support former President Donald Trump’s populist political messaging.
Two opposing theories focus on the cultivation of fake news: 1.) social media algorithms place users in echo chambers and filter bubbles of personalized beliefs and deepen the divide between opposing opinions; 2.) the cause of fake news stems from psychological tendencies and behavioral economics that have always been deeply rooted within society and active in our daily lives. Such
psychological behavior, like the act of selective exposure and confirmation bias, encourages the
user to seek out information that already supports how they already feel about a topic. The
difference is that the psychological theory is not limited to online use. This phenomenon also
occurs offline in choices of which print publication to read and which news station to watch (Spohr, 2017). It can even reach the extent of only “friending” or following social media
users with similar political views or, likewise, “defriending” or unfollowing users who post
information that challenges the user’s beliefs. Hence, the clumping of users with like-minded
beliefs can be done individually, consciously, and psychologically. An algorithm cannot simply
dictate who and what a user follows, although an algorithm can decrease the frequency of
opposing beliefs on one’s social media news feed. However, the psychological behaviors and
tendencies of selective exposure and confirmation bias are amplified and exploited by
algorithms.
Chapter 5
A study by Peterson-Salahuddin and Nicholas Diakopoulos (2020) found that journalists define social media algorithms, and especially the distributive, limited one created by Zuckerberg in 2018, as “filters that decide whether or not their audiences see content” based on a variety of factors, including engagement.
Editors were once called the “gatekeepers” of society, as they were the ones responsible
for what information and news would be released and distributed to society. According to
Pariser’s TED Talk (2011), social media has created a new type of “gatekeepers”: algorithmic
ones that control the information one sees online and on social media. Social media users also
play a role in the “gatekeeping” process through engagement and filter bubbles to decide which
news and information to disseminate to their friends and family (Ferrucci & Tandoc, 2017). Just
as editors were once labeled as the only “gatekeepers” in society, Pariser said the “torch” has
now been passed to “human gatekeepers and to algorithmic ones.” The detrimental effect is that
“Algorithms don’t have the embedded ethics that editors did” (Pariser, 2011).
“So, if algorithms are going to curate the world for us, if they’re going to decide what we
get to see and what we don’t get to see, then we need to make sure that they are not just keyed to
relevance,” Pariser said. “We need to make sure that they show us things that are important.”
Audience engagement now also shapes news production (Ferrucci & Tandoc, 2017). News editors want to ensure that their content is seen and
pushed through the filtered algorithms to reach viewers, and as a result, some content is now
becoming optimized online. The optimization of content by editors can, once again, be labeled as a form of gatekeeping: deciding what the editor thinks is “newsworthy” enough to display on social media and intriguing enough to be
shared by users. Once the news article is posted, it is up to the algorithms and users to decide
how much exposure the news content will receive. Social media users can engage and amplify
online stories (Peterson-Salahuddin & Diakopoulos, 2020), and then the engagement will be
tracked through algorithms. For some news outlets, the level of engagement a news story receives has become a measure of its newsworthiness (Harcup & O’Neill, 2017) instead of the traditional newsworthy traits, such as timeliness and geographic proximity.
The more engagement a journalistic work receives, the more a journalist is willing to
report a “follow-up” on the story (Ferrucci & Tandoc, 2017). This audience influence can be
dangerous because it can persuade journalists to ignore other newsworthy stories. Just because
social media users do not seem interested in certain content based on engagement does not
necessarily mean that such information is not important enough to be disseminated and reported.
The news on social media is now, essentially, being curated by the audience, who mostly do not
have any journalistic training. In the series of interviews with 18 journalists by Peterson-
Salahuddin and Diakopoulos, they found that to make sure their content is getting
distributed by friends and family on Facebook, journalists optimize their work, which could
sometimes go against the principles and ethics of journalists created by the Society of Professional Journalists. The SPJ Code of Ethics rests on four principles:
1. Seek Truth and Report It—which includes the support of the open exchange of views
2. Minimize Harm
3. Act Independently—which includes resisting internal and external pressure to “influence coverage”
4. Be Accountable and Transparent—which includes a call to explain ethical choices and “processes to audiences,” and to “encourage a civil dialogue with the public about journalistic practices, coverage and news content”
“Members of the Society of Professional Journalists believe that public enlightenment is the
forerunner of justice and the foundation of democracy. Ethical journalism strives to ensure the
free exchange of accurate, fair, and thorough information. An ethical journalist acts with
integrity.”
However, if news editors and journalists base which stories they report on the level of
engagement, that goes against the “Act Independently” principle, and the notion of a “free press” is undermined.
One of the most important aspects of journalism is its vital role in The United States of
America. Its function as “The Fourth Estate” is an essential component of democracy. According to the Society of Professional Journalists’ principles, the press exists to provide the public with “information that is accurate, fair, and thorough.” Journalism is not defined by technology but rather by the “function news plays in the lives of people,” yet it can be deeply affected and altered by technology—in a way that can hinder “accurate, fair and thorough” reporting.
As social media has become more popular, media platforms such as Twitter and
Facebook have become the primary outlets of news consumption—especially for political news
consumption and especially when political figures routinely use such media to spread their
messages. More than eight in 10 Americans (86%) said they consume news from a smartphone,
computer, or tablet “often” or “sometimes,” according to a Pew Research Center (Shearer, 2021)
study conducted in 2020. Fifty-two percent of Americans said they would rather receive their
news through an online platform—with 11% preferring social media. The study found:
“About two-thirds of U.S. adults say they get news at least sometimes from news
websites or apps (68%) or search engines, like Google (65%). About half (53%) say they
get news from social media, and a much smaller portion say they get news at least sometimes from podcasts.”
Consequently, the press has also moved, although not entirely, to social media to fill a
user’s feed with the latest updates to keep up with the evolving media technology. Instead of
expecting news articles to be trapped within niche filter bubbles, one of the hopes was that
reporters and news outlets’ “Tweets” and posts would encourage commentary and discussions
from readers (Leetaru, 2019)—a connection to the world like the effect Pariser hoped social
media would have. But as commentary dwindled and “retweets” –the act of sharing information
on Twitter—became the main form of social media engagement on the platform, users only
further trapped themselves in a filter bubble. By only “retweeting” specific news articles from
favorited, perhaps biased, news sources, Twitter digested the data from such engagement and
would only suggest similar content to the social media user. Additionally, because of the
dwindling discourse on Twitter (it is noted as a “shrinking platform”), social media users are
barely absorbing any new ideas different from their own. It is “less and less reflective of actual
society,” according to Forbes reporter Kalev Leetaru. The shared ideas are becoming more
“insular.”
The news media have also faced other criticism founded on radical and extreme
ideologies. Many individual journalists and even entire news organizations have been accused of
biased reporting, and even the reporting of fake news to slant their angle toward a political
perspective to “covertly push an agenda” (Plaisance, 2014). It seems as if the more extreme a
user is, or the deeper they are stuck in filter bubbles, the more likely they are to distrust the news media. Trust in journalism within the United States is crucial to democracy, according to a Pew Research Center study focusing on Americans’ trust in the news media.
Associate director of journalism at Pew Research Center, Katerina Eva Matsa, and director of
internet technology, Lee Rainie, claim that news media trust is as low as ever due to four factors
(Rainie & Matsa, 2022). The first factor relates to how Americans primarily receive their news,
which is online. The second factor is that the news industry has placed increased importance on
its digital advertisement revenue. The third factor reveals that Republicans and Democrats trust
and gather information from “very different sources.” The fourth factor is how fake news
confuses Americans about which sources to trust. A survey found that 67% of U.S. adults said “made-up news and information causes a great deal of confusion.” “Made-up news and information” and the distrust in news media also undermine trust more broadly: 68% of Americans said made-up news affects their confidence in the government, 54% said it affects their confidence in each other, and 51% said it affects their confidence in “political leaders’ ability to get work done.”
According to Rainie, social media and fake news have only increased the complexity of
trust between news consumers and news producers. When asking news consumers whether they
trust the media, “a lot of people answered ‘no’ to that question.” However, those same
consumers would later admit that they only trusted some sources. “And so, in a way, their trust
has become disaggregated and divided,” Rainie said. Because trust in the news media is very
selective, there is a greater incentive to consume information only from the sources one already trusts.
Frances Haugen, a former data scientist at Facebook and a designer of algorithms, also
known as the “Facebook Whistleblower,” shed light on detrimental algorithms and other harm
that the social network has caused. Haugen shared confidential documents with The Wall Street Journal, whose reporters concluded:
“Facebook knows, in acute detail, that its platforms are riddled with flaws that cause
harm, often in ways only the company fully understands. That is the central finding of a
Wall Street Journal series, based on a review of internal Facebook documents, including research reports, online employee discussions and drafts of presentations to senior management.”
When Haugen testified before a Senate subcommittee, she claimed that the social network
was undermining democracy simply because of its algorithm that spreads misinformation for
profit. It was labeled a “historic crisis.” Haugen testified that Facebook knowingly allowed its
platform to condone “more division, more harm, more lies, more threats and more combat” that
has also resulted in “actual violence” that “kills people” (Allyn, 2021).
The Facebook Files supported Haugen’s claims and proved how the social media network
helped divisive content spread and limited content in harmful ways. Facebook remains one of the most used applications, with 416 million downloads in 2021 alone (Curry, 2022). In Media Ethics: Key
Principles for Responsible Practices, Plaisance pointed out that Facebook does have such
significant popularity because we can easily “communicate instantly with selected groups of
people” (Plaisance, 2014). Yet, Plaisance noted, we seldom think about how Facebook has subtly influenced the way we communicate and interact. Facebook’s conduct can also be measured against the “Ten Commandments of Computer Ethics” (Allen, 2011). The ninth commandment states, “Thou shalt think about the social consequences
of the program you are writing or the system you are designing.” The tenth commandment says,
“Thou shalt always use a computer in ways that ensure consideration and respect for your fellow
humans.” According to The Wall Street Journal investigation, the findings highlighted that
Facebook knew of the social consequences of their platform and that their platform causes harm
without addressing or fixing the issues. The Wall Street Journal’s Facebook Files (Dow Jones &
Company, 2021), reported mainly by Jeff Horwitz, consist of information broken down into
seventeen parts that “offer perhaps the clearest picture thus far of how broadly Facebook’s
problems are known inside the company, up to the chief executive himself.” The Facebook Files
revealed that the social network’s goal to connect people has not been achieved. The documents
show that researchers of Facebook have recognized its platform’s harmful effects on teen mental
health, political discourse, and human trafficking, for example, yet turned a blind eye to protect
its business.
The third article in the series, by Keach Hagey and Jeff Horwitz, “Facebook Tried to
Make Its Platform a Healthier Place. It Got Angrier Instead” (Hagey & Horwitz, 2021), is a specific example of Facebook’s awareness of its flaws and its failure to address them. It discusses Facebook’s new 2018 algorithm, which had the purpose of creating “meaningful social
interactions,” according to an internal report. Zuckerberg claimed his goal for the 2018 algorithm
was for Facebook users to strengthen their connections with friends and family instead of
“professionally produced content, which research suggested was harmful to their mental health.”
Except Facebook’s goal was not achieved. Zuckerberg said that this algorithm change
was almost like a “sacrifice,” as he expected users’ time on the social networking site to
decrease, but that their time on the platform would be more meaningful. The decision to change
the algorithm was not as sacrificial as Zuckerberg claimed it to be, all because of his fear that
“original broadcast posts,” which The Wall Street Journal described as “the paragraph and photo
a person may post when a dog dies” as an example, added to the decline of Facebook’s
engagement. The documents revealed that Zuckerberg’s real goal was to reverse the decline in engagement. In doing so, Facebook would amplify posts that garnered more comments and emotions—even if such comments consisted of arguments. Even if an interaction was filled with hatred, it still counted toward a post’s score in the new “meaningful social interactions” (MSI) point system. According to The Wall Street Journal, a “like” was worth one point, a reaction or
sharing without a reply to an invite was worth five points, and a “significant comment,” message,
sharing, or an “RSVP” was worth 30 points. This would help determine what future, similar
content a user would see on their news feed, and the content had additional “points” if
interactions were between shared groups and friends. Even the type of reactions to a post varies
within the point system. An “anger reaction” is worth five points, for example, compared to the
normal “like” reaction. This point system helps spread controversial subjects, as those are the
subjects that someone would most likely leave an “anger” reaction on. Each time an “anger
reaction” is used, five points are given to the post—boosting its MSI score, according to the documents The Wall Street Journal reviewed. “Comments get longer as users defend their positions and use reactions buttons other than the like button.” Facebook thus encourages divisive arguments over controversial topics, as “The argument spurs longer comment posts,” and those longer interactions further raise a post’s score.
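The point values reported in the Facebook Files amount to a simple weighted sum. The Python sketch below applies only the weights The Wall Street Journal reported (a like is one point; a reaction or a reshare without a reply to an invite is five; a significant comment, message, reshare, or RSVP is 30); the function name, the friend-and-group bonus multiplier, and the example posts are hypothetical.

```python
# Minimal sketch of an MSI-style score built from the point values reported
# by The Wall Street Journal. The 1.5x friend/group bonus and the sample
# interaction counts are invented for illustration.

MSI_WEIGHTS = {
    "like": 1,
    "reaction": 5,              # includes the "anger" reaction
    "reshare_no_reply": 5,
    "significant_comment": 30,
    "message": 30,
    "reshare": 30,
    "rsvp": 30,
}

def msi_score(interactions: dict[str, int], between_friends: bool = False) -> float:
    """Weighted sum of a post's interactions, with a bonus for friend/group ties."""
    score = sum(MSI_WEIGHTS.get(kind, 0) * count for kind, count in interactions.items())
    return score * (1.5 if between_friends else 1.0)   # bonus multiplier is invented

calm_post = {"like": 200, "significant_comment": 5}
angry_post = {"like": 20, "reaction": 40, "significant_comment": 60}

print(msi_score(calm_post))    # 350
print(msi_score(angry_post))   # 2020 -- the argumentative post dominates the feed
```

Under such a scheme, a post that provokes forty angry reactions and sixty argumentative comments easily outscores a post that merely collects hundreds of likes, which is the dynamic the Journal describes.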
The chief executive of BuzzFeed, Jonah Peretti, blamed Facebook’s MSI effort for amplifying one
of Buzzfeed’s posts, “21 Things That Almost All White People Are Guilty of Saying.” This post
received serious traction on Facebook because of the controversial topic—with 13,000 shares and
16,000 comments filled with divisive arguments—compared to other posts with more light-hearted
topics, such as self-care and animals, that had “trouble breaking through” the algorithms. It was
found that publishers and political parties were then optimizing and sensationalizing their posts,
fully aware that in doing so, the algorithm would also amplify their content on the social network.
Aware of this moral concern, Facebook data scientists cited Peretti’s complaints in a memo, saying, “Our approach has had unhealthy side effects on important slices of public content, such as politics and news.”
The 2018 algorithm that focused on reshared materials from family and friends instead of
“professionally produced content” increased the number of “misinformation, toxicity and violent
content” among users’ newsfeeds. Political parties took advantage of this, as those in Europe
reportedly changed their policies to have more presence on the network. Without naming the
specific political parties, an internal Facebook report stated that parties, even those feeding off
the algorithm for their own benefit, “worry about the long-term effects on democracy.” In an
interview, Lars Backstrom, Facebook’s vice president of engineering, acknowledged that any engagement-based ranking can be exploited. A Facebook team led by Anna Stepanov tried to mitigate exploited content and stop the algorithm from sharing fake news, which
simultaneously helped divide the nation. Zuckerberg, however, “resisted some of the proposed
fixes” in fear that the decrease of sensationalized content would deter users from the network, as
people are naturally drawn to such content. In an internal memo, it was revealed that Stepanov
gave Zuckerberg multiple options on how to address the spread of fake news and divisive
content—since the boost increases the likelihood of users sharing the content. But Stepanov said
that Zuckerberg would not even attempt the approach because “We wouldn’t launch if there were a material tradeoff with MSI impact.”
Almost a year and a half after Zuckerberg’s interaction with Stepanov that pointed his
attention to the divisive issues Facebook creates, the social network announced its plans of
“gradually expanding some tests to put less emphasis on signals such as how likely someone is to
comment or share political content.” This decision was not because of Stepanov, but it was
instead addressed because of the January 6th insurrection at the Capitol in Washington D.C.
Facebook was criticized for allowing misinformation to spread on its platform by protestors who
could not accept the results of the 2020 presidential election, which encouraged users to storm
the Capitol. Of course, other factors were inciting the supporters of former President Donald
Trump. Still, if Facebook had addressed its algorithm flaw earlier—something they were aware
of—the supporters would not have been as capable of sharing misinformation or plans that incited the violence of that day.
If Facebook were to take further measures to confirm that its algorithm was not creating and
encouraging a place for hatred, sensationalized and divisive content, extremist views, and the
spread of fake news, it would affect not only their company but also their advertisers—their most
significant source of revenue—and their publishers. Peretti of BuzzFeed argued that the
algorithm creates divisiveness. “MSI rankings [aren’t] actually rewarding content that drives
meaningful social interactions,” Peretti said in an email to Facebook. Peretti said that for his
content on BuzzFeed to gain popularity through reshares, his staff felt “pressure to make bad
content or underperform.”
In the second article of the Facebook Files, “Facebook Knows Instagram is Toxic for Teen
Girls, Company Documents Show,” by Georgia Wells, Jeff Horwitz, and Deepa Seetharaman, it
was revealed that the curated algorithms on Instagram, an application owned by Facebook, are
harmful to teens’ mental health. Haugen revealed that Zuckerberg was aware that his application
had been causing mental health harms in teen girls since 2019. In March 2020, internal research showed
that “The tendency to share only the best moments, a pressure to look perfect and an addictive
product can send teens spiraling toward eating disorders, an unhealthy sense of their own bodies
and depression” (Wells et al., 2021). To intensify the issue, a user’s “Explore Page” on the
application that contains photos and videos—content can be easily edited and usually depicts
unrealistic beauty standards—is personalized by algorithms, and it “can send users deep into
content that can be harmful.” For example, the article highlighted 19-year-old Lindsay Dubin,
who was interested in exercising. After a few searches on Instagram that fell within that
category, her Explore Page was filled with photos of “how to lose weight” and the “ideal body type.”
While Facebook’s and Instagram’s algorithms rely heavily on engagement such as likes, comments, and shares, that does not necessarily have to be the case for every application.
The application TikTok—while still relatively new, as it merged with the application Musical.ly
in 2018 (G. Smith, 2021)—has a “secret algorithm” created by the China-based parent company,
ByteDance, that is like no other. “The algorithm on TikTok can get much more powerful, and it
can be able to learn your vulnerabilities much faster,” Guillaume Chaslot, data scientist,
algorithm expert, and former Google engineer, told The Wall Street Journal (2021). TikTok, a
platform with an estimated 1 billion monthly users (Associated Press, 2022), further stands out
from other applications. It is used primarily for entertainment and not connecting with personal
friends (B. Smith, 2021). While scrolling through TikTok’s never-ending “For You Page”—
almost synonymous with a “homepage” with continuous new content— users may encounter
others referring to a “side” of the application that they are “on.” They have landed on this “side”
through the application’s algorithm, and “side” refers to seeing multiple videos of the same
subject. The “For You Page” contains the videos that TikTok recommends based on how one interacts with the application (Haskins, 2019)—although it does not take physical actions (such as liking, commenting, and sharing) to “interact” with the videos and notify the algorithm of one’s interests.
The Wall Street Journal conducted a study using 100 fake accounts, otherwise known as
“bots,” to watch “hundreds of thousands of videos” on TikTok to test how its algorithm works.
The Wall Street Journal found that, while shares, likes, and follows help determine the content of
one’s “For You Page,” so does “how long you linger over a piece of content.” In other words, the
amount of time one spends on a particular video, whether one pauses on it, hesitates to view the
video, takes the time to read the comments, or re-watches the video, influences what TikTok will
present on one’s “For You Page” in the future—even if one did not physically interact with the
video in the more traditional ways of liking, commenting, and sharing. Similarly, the same
method applies to filtering out what users dislike: quickly swiping past videos signals disinterest to the algorithm.
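The passive signals described here (watch time, re-watching, opening the comments, or swiping away) can be modeled as a running interest score. The Python below is a hypothetical sketch only; the topic labels and every numeric weight are invented and do not come from TikTok.

```python
from collections import defaultdict

# Hypothetical watch-time-based interest inference: lingering, re-watching,
# or reading comments raises a topic's weight; a quick swipe lowers it.
# No likes, comments, or shares are required.

topic_weight = defaultdict(float)

def observe(topic: str, watch_seconds: float, video_seconds: float,
            rewatched: bool = False, opened_comments: bool = False) -> None:
    """Update inferred interest from how a single video was consumed."""
    completion = min(watch_seconds / video_seconds, 1.0)
    delta = completion - 0.5          # watching less than half counts against
    if rewatched:
        delta += 1.0
    if opened_comments:
        delta += 0.5
    topic_weight[topic] += delta

# A user who swipes past dance clips but lingers on #sad videos:
observe("dance", watch_seconds=2, video_seconds=30)
observe("sadness", watch_seconds=35, video_seconds=35, rewatched=True)
observe("sadness", watch_seconds=20, video_seconds=25, opened_comments=True)

print(dict(topic_weight))   # "sadness" now dominates the inferred interests
```

Once one topic’s weight dominates, recommendations drawn from it crowd out everything else, which is the narrowing the Journal’s bots made visible.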
Through their fake accounts, the Wall Street Journal found that when an account is new,
TikTok immediately tries to gauge the user’s interests by showing them a plethora of different,
popular content. The types of videos can be anything from religious content, dancing videos, and
videos about heartbreak. The one fake account that The Wall Street Journal highlighted while
presenting their findings was called @kentucky_96. The Journal set the bot’s age as 24 years old, and TikTok also received an IP address from the state of Kentucky—and those were the only two factors the algorithm was given. Each bot was assigned interests, and
@kentucky_96’s interest was “sadness and depression.” However, the only way this particular
bot would express that interest was through re-watching, pausing, or clicking on hashtags related to sadness and depression. None of the bots liked, commented on, or shared the videos they were shown.
Part of TikTok’s addictive algorithm is its capability to draw users in by showing them a
mix of content with “millions of views.” But as the algorithm gets a sense of a user’s likes and
dislikes, such a variation begins to dwindle. The interests of The Wall Street Journal’s bots were
determined in less than two hours, and the fastest determination was less than 40 minutes. This
determination places users in “niche content areas,” and they contain highly specific content for
enthusiasts of whatever interest, all determined by the algorithm. Chaslot, an “advocate for
30
algorithm transparency,” said, “On YouTube, more than 70% of the views come from the
recommendation engine. So, it’s already huge. But on TikTok, it’s even worse. It’s probably like
90-95% of the content that is seen that comes from the recommendation engine.”
For @kentucky_96, while muddling past videos of people from Kentucky—as the
algorithm picked up on the bot’s programmed location—the bot eventually found itself in its
expected niche category, or “side” of TikTok. TikTok presented the bot with a popular video
fitting right within its programmed interest just 15 videos into its first use of the application, or less than three minutes in. The video had the hashtags #sad and #heartbreak, and the bot watched the 35-second video twice—signaling to the algorithm that the content was of interest to the user.
Another video with the hashtag #sad was shown 23 videos later. After scrolling past 19 more
videos, @kentucky_96 re-watched a video about “heartbreak and hurt feelings” and kept
scrolling past anything that did not fall within its programmed interests. So as not to lead the algorithm to believe the bot was interested only in heartbreak and relationships, @kentucky_96 paused on videos about mental health and “lingered” on videos with the hashtag #depression. When the
algorithm finally detected the bot’s niche interest within 224 videos and 36 minutes of total
watch time, 93% of the videos shown to @kentucky_96 were about sadness and depression.
However, a TikTok spokeswoman reviewed this study and claimed that the results were
inconclusive and nonrepresentative because humans “have diverse interests, unlike bots.”
Chaslot disagrees and said this model is like YouTube, where the detection of “depressing
content,” for example, is used to create engagement that will further suggest more depressing content.
“The algorithm is pushing people towards more and more extreme content, so it can push
them toward more and more watch time,” Chaslot said. “The algorithm is able to find the piece
of content that you’re vulnerable to, that will make you click, that will make you watch, but it
doesn’t mean you really like it, and that it’s the content that you enjoy the most. It’s just the
content that’s the most likely to make you stay on the platform.”
A document labeled “TikTok Algo 101,” produced by TikTok’s engineering team used to
explain how the algorithm works to non-technical employees in Beijing (B. Smith, 2021),
revealed a simplistic version of the equation the application uses to “score” videos and refine its recommendations. According to the document, “The recommender system gives scores to all the videos based on
this equation and returns to users videos with the highest scores.” TikTok’s algorithm is so
advanced that “like bait”—videos that specifically ask for likes—is identified, and its content is
not included within the user’s curated, niche “For You Page.”
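The scoring-and-ranking step quoted from “TikTok Algo 101” can be expressed as a small function: each candidate video receives a score, videos identified as “like bait” are excluded, and the highest-scoring videos are returned. The Python below is a minimal sketch under the assumption that the score is a weighted combination of predicted engagement; the weights, prediction values, and field names are invented, since the thesis only quotes that videos are scored and the top scorers returned.

```python
# Hypothetical score-and-rank sketch: combine predicted engagement signals
# with tunable weights, drop "like bait," and return the top-k videos.
# All numbers and field names are invented for illustration.

def score(video: dict, weights: dict) -> float:
    """Weighted sum of predicted engagement signals for one candidate video."""
    return (
        video["p_like"] * weights["like"]
        + video["p_comment"] * weights["comment"]
        + video["e_playtime"] * weights["playtime"]
        + video["p_play"] * weights["play"]
    )

def recommend(candidates: list[dict], weights: dict, k: int = 3) -> list[dict]:
    """Drop 'like bait', score the rest, and return the k highest-scoring videos."""
    eligible = [v for v in candidates if not v.get("like_bait", False)]
    return sorted(eligible, key=lambda v: score(v, weights), reverse=True)[:k]

weights = {"like": 1.0, "comment": 2.0, "playtime": 0.1, "play": 0.5}
candidates = [
    {"id": "a", "p_like": 0.30, "p_comment": 0.05, "e_playtime": 20, "p_play": 0.90},
    {"id": "b", "p_like": 0.50, "p_comment": 0.20, "e_playtime": 5,  "p_play": 0.70, "like_bait": True},
    {"id": "c", "p_like": 0.10, "p_comment": 0.02, "e_playtime": 40, "p_play": 0.95},
]
print([v["id"] for v in recommend(candidates, weights, k=2)])   # "b" is excluded as like bait
```

Returning only the highest-scoring videos is what keeps the “For You Page” inside a user’s detected niche.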
There are many consequences to a robust algorithm like TikTok’s, which can “make you
wallow in your darkest box without ever needing to eavesdrop on you or collect any personal
information about you.” One of the consequences is that as a user gets deeper into a rabbit hole
of niche content, the videos shown tend to have fewer views, as they are catered to “enthusiasts”
of a particular subject. According to The Wall Street Journal, the lower the view count a video
has, the less vetted the videos are by moderators to determine whether they violate TikTok’s
Terms of Service, compared to “viral” videos. This means that videos within niche areas could
contain harmful content without ever being detected, and as a result, “fake news” can spread
amongst these niche areas. The Wall Street Journal’s bots that had a general interest in politics
were shown videos about election conspiracies and the far-right political movement QAnon.
The Wall Street Journal’s study also showed that algorithms could subconsciously hurt a
user’s mental health. If one is feeling depressed, for example, like what @kentucky_96 was
interested in, they could end up in a “rabbit hole” of depressing content without ever liking a
video on TikTok. As one will sometimes watch content based on what they are subconsciously
feeling for relatability purposes, like sadness or depression, being flooded with videos about suicide or depression does not positively affect mental health. The morality of this particular algorithm is thrown out to “keep you there as long as possible” (B. Smith, 2021) for the sake of TikTok’s ultimate goals: retention (whether a user comes back to the application) and time spent on the application.
According to the New York Times, analysts “believe algorithmic recommendations pose a
social threat,” and the “TikTok Algo 101” document further confirmed their beliefs. The
document revealed that algorithmic recommendations that latch on to users’ interests are used to
increase addiction through micro-targeting. In early March of 2022, U.S. state attorneys general
from California, Florida, Kentucky, Massachusetts, Nebraska, New Jersey, Tennessee, and
Vermont launched an investigation into TikTok’s effects on mental health (Associated Press,
2022). Their main concerns surround the algorithm’s promotion of eating disorders, self-harm,
and even suicide, particularly for children and teenagers. The company responded, pointing out
the safety and privacy protections it offers for teens and other users under 18—but these features do
not seem to stop their algorithm from detecting interests. In September of 2021, which is Suicide
Prevention Awareness Month, videos flooded TikTok with the hashtag #suicidepreventionmonth
(Rosenblatt, 2021). Around the same time, TikTok announced they were implementing mental
health resources, such as a crisis hotline and information on how to handle a crisis, especially
when someone clicks on #suicide within the application. According to NBC News, the hashtag
#MentalHealthMatters has been viewed more than 13.5 billion times on TikTok.
Dr. Angela Guarda, the director of the eating disorders program at Johns Hopkins
Hospital and an associate professor of psychiatry and behavioral sciences, suggested that the
algorithms on TikTok cater a recovery-oriented feed for “at-risk” users instead of feeding
into what may harm their mental health (Wells et al., 2021). Guarda said that if the algorithm
keeps showing “thinspiration videos” to someone who has an eating disorder or is at risk of
developing an eating disorder, the algorithm could “drive someone further into their illness.”
Chapter 8
Studies have found that other social media influences besides algorithmic filter bubbles
also have negative effects and shape behavior. Facebook’s spokesman Andy Stone said that
Facebook’s algorithm, especially its change in 2018, is not the cause of the “world’s divisions.”
Instead, Stone claims that partisan issues within the United States have been on the rise, growing
for “many decades, long before platforms like Facebook even existed” (Dow Jones & Company,
2021). There are concerns that users seek only political information that supports their previously held opinions and avoid challenging information, a practice known as news avoidance, both deliberately and through filter bubbles. Still, Peter M. Dahlgren's study challenges the filter bubble theory (Dahlgren, 2021). Dahlgren argues that the cause of limited political knowledge from social media is not the curated algorithm but human psychology, namely confirmation bias and selective exposure. However, no matter how many information sources there are due to the rise of the Internet, two significant concerns arise: 1) "Users tend to seek information that confirms their existing beliefs, attitudes or interests," and 2) "Internet services, such as social networking sites and search engines, try to use algorithms to serve up increasingly more supporting information to attract users." Combined, these two concerns create a new problem: Internet and social media users will eventually see no information that challenges their current beliefs.
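To see how these two concerns compound, consider a toy simulation: a user who is somewhat more likely to click confirming content than challenging content, and a ranker that boosts whatever was clicked before. The probabilities, boost factor, and variable names below are hypothetical choices made purely for illustration; they are not drawn from any platform's actual system.

    import random

    random.seed(1)  # fixed seed so the illustration is reproducible

    def simulate(rounds: int = 8, feed_size: int = 10,
                 p_click_confirming: float = 0.7, p_click_challenging: float = 0.3,
                 boost: float = 1.5) -> None:
        # The ranker starts with no preference between the two content types.
        weights = {"confirming": 1.0, "challenging": 1.0}
        for r in range(1, rounds + 1):
            total = weights["confirming"] + weights["challenging"]
            # Concern 2: the feed's composition follows the ranker's learned weights.
            n_confirming = round(feed_size * weights["confirming"] / total)
            # Concern 1: the user clicks confirming items more often, and each
            # click feeds back into the ranker's weight for that content type.
            for _ in range(n_confirming):
                if random.random() < p_click_confirming:
                    weights["confirming"] *= boost
            for _ in range(feed_size - n_confirming):
                if random.random() < p_click_challenging:
                    weights["challenging"] *= boost
            share = weights["confirming"] / (weights["confirming"] + weights["challenging"])
            print(f"round {r}: confirming content makes up about {share:.0%} of the next feed")

    simulate()

Within a few rounds of this toy loop, the challenging share collapses toward zero even though neither the user's click bias nor the ranker's boost is extreme on its own, which is the "no attitude-challenging content" state described above.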
Dahlgren believes that the theory of limited news consumption as a result of filter bubbles aligns with studies of selective exposure in human psychology, which examine the information users choose to consume. Dahlgren argues that the root of political polarization lies in these psychological tendencies rather than in algorithms. Dahlgren identifies nine counterarguments against the filter bubble theory to support his position, including:
1. “People often seek supporting information, but seldom avoid challenging information
2. A digital choice does not necessarily reveal an individual’s true preference
3. Although people may prefer to interact with individuals who hold the same beliefs, they
also interact with those who do not
4. Different media forms satisfy different areas of information”
Dahlgren's encapsulating argument is that "it is not clear what a filter bubble is."
With popular outlets such as Twitter and Facebook, the question of whether exposure to news through social media affects the extent of political knowledge has become a topic of recent scholarly interest. The study "Why don't we learn from social media? Studying effects of and mechanisms behind social media news use on general surveillance political knowledge," published in Political Communication by Patrick van Erkel and Peter Van Aelst, used an online survey to investigate the extent of users' knowledge about current events, or their so-called general surveillance knowledge, gained from social media (van Erkel & Van Aelst, 2021). While the study was conducted in Belgium, the networks studied, Twitter and Facebook, are the same platforms used in the United States, so the results offer insight into how U.S. users consume news as well. The study found that citizens do not gain additional political knowledge from consuming news on social media, unlike absorbing news from traditional media such as newspapers and
broadcast journalism. Also, users may encounter news on social media accidentally, when their primary purpose for logging on was not to consume news, which can lead to information overload, another consequence of social media that detrimentally affects users. Information overload occurs "when people are confronted with a massive amount of information created on social media which exceeds the capacity they can handle" (Eliana et al., 2020). The effects of information overload include social media exhaustion and diminished performance (Eliana et al., 2020).
Van Erkel and Van Aelst's study defines two types of political knowledge: static and surveillance. Static political knowledge is the range of factual information about politics stored in long-term memory. Surveillance knowledge is the extent to which someone is informed about day-to-day politics and short-term developments. When consumers absorb their news directly from traditional platforms, such as television, radio, newspapers, and even online news websites (which differ from the social media format), the public gains more knowledge about topics that professional journalists have extensively covered. But on social media networks, the news shared is not produced exclusively by professionals. Instead, social media includes information shared by known others, so-called user-generated content. Even though most of the news shared on social media comes from traditional media, some is provided by amateur users or alternative media that lack credibility and factuality. With filter bubbles, especially on Facebook, a user's previous activity determines their timeline, limiting the kind of news they see. Van Erkel and Van Aelst reaffirmed that news on social media is personalized and filtered.
The study asked 993 respondents six multiple-choice questions about national and foreign news to measure general surveillance political knowledge about topics that both had coverage in traditional media and were shared on Facebook. Respondents were then asked how frequently they used media platforms to consume news from twenty-five specific news sources. Respondents who used Facebook for news more than one or two days a week and used four or fewer of the twenty-five sources were categorized as "Facebook reliant." Respondents who used Facebook one or two days a week or less but used five or more traditional news sources were defined as having a "traditional news diet." A "low news diet" describes those who used Facebook one or two days a week or less and used fewer than five other news sources. Respondents also rated how overwhelmed they felt when browsing their social media feed, as a measure of information overload.
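For readers who find the categorization easier to follow as explicit rules, here is a minimal sketch in Python that restates the three news-diet groups described above. The function name, parameters, and thresholds are illustrative restatements of this paragraph, not code or variable names taken from van Erkel and Van Aelst's study.

    def news_diet(facebook_news_days_per_week: int, other_sources_used: int) -> str:
        """Restates the three respondent groups described by van Erkel and Van Aelst (2021)."""
        heavy_facebook_use = facebook_news_days_per_week > 2  # more than one or two days a week
        if heavy_facebook_use and other_sources_used <= 4:
            return "Facebook reliant"
        if not heavy_facebook_use and other_sources_used >= 5:
            return "traditional news diet"
        if not heavy_facebook_use and other_sources_used < 5:
            return "low news diet"
        # Combinations the paragraph does not describe (e.g., heavy Facebook use
        # plus many other sources) are left uncategorized here.
        return "uncategorized"

    # Example: a respondent who checks Facebook for news daily but follows
    # only two of the twenty-five listed sources would be "Facebook reliant".
    print(news_diet(facebook_news_days_per_week=7, other_sources_used=2))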
The study found that citizens do not gain more political knowledge by following political
news on social media. However, the study found no evidence that filter bubbles contribute to
this. Instead, it concluded that information overload is the leading cause of decreased political
knowledge, since the respondents did not indicate that their news feed is homogeneous.
However, this information overload theory mainly pertains to those who consume Facebook
news in addition to other traditional sources. Users who also view numerous headlines on their
Facebook feed feel as if they are informed. Still, this feeling is called a false heuristic inference,
“where one may have a feeling of following the news without actually doing so.” Consequently,
those who experience this phenomenon might not feel the need to search for additional political information.
There are some shortcomings of this study. It is undetermined whether Facebook has
characteristics that hinder learning or if its users lack the motivation to research other content.
The study also relied on respondents' self-reports of their media consumption and their own judgment of whether their feed is homogeneous. As a result, the study lacked complete accuracy, since it is normal for users to overestimate how much media they consume. In addition, users typically lack awareness of whether they are trapped in filter bubbles, since the personalization that creates them is largely invisible to them.
However, some positive effects correlate with receiving news on social media. For those
who would not normally consume news in traditional ways, social media could help decrease the
“knowledge gap” (van Erkel & Van Aelst, 2021). When friends and family share news on social
media, users might pay more attention to it—since information coming from a trusted source is
seen as “more credible and more likely to be recalled,” even though it might not be the most
factual. From a long-term standpoint, recent studies on the effects of following the news through social media on (surveillance) political knowledge do not favor the positive side: either the studies show that social media does not further political learning, or they find a negative relationship.
Based on the findings of van Erkel and Van Aelst's study, and in light of the other scholarly articles reviewed here, one connection remains the same across multiple studies: social media, especially Facebook, hinders the amount of factual political information and knowledge a consumer has, even though information overload seems to be the leading cause of decreased political knowledge. This research contributes to the argument that information overload from social media drives the decline in political knowledge while refuting the idea that filter bubbles and confirmation bias play a role. It is useful to note that other elements of social media can also cause harm.
Conclusion
The personalized, niche content that social media algorithms curate for users can be appreciated for its intended features, as it caters to one's likes and avoids one's dislikes. But it can also be detested for the same features, as it limits the challenging ideas that are fundamental to a functioning society that thrives on different points of view. Algorithms and their effects, such as filter bubbles that further enhance confirmation bias, have negative consequences: they diminish the concept of the marketplace of ideas, divide society by fostering extremist and polarizing views, and harm social media users at the individual level, such as by feeding into their subconscious thoughts, including feelings of sadness and depression, and further worsening their mental health.
Filter bubbles also stem in part from psychological behaviors, like confirmation bias, which have developed since the creation of
human thought. Echo chambers, for example, are the natural effect of such psychological
behaviors. They describe the natural tendency to place oneself in a "bubble" of information that only confirms one's beliefs. Alongside the technological coding, the workings of echo chambers and confirmation bias fuel filter bubbles. Social media algorithms typically base one's
“news feed” on users’ engagement, consisting of likes, comments, and shares. But as an internal
TikTok document revealed, powerful algorithms like TikTok's can personalize the content one sees just by how long a user "lingers" over a particular video. Filter bubbles on certain applications, like TikTok, could be unavoidable unless a user actively tries to "throw" the algorithm off. However, that can be difficult since algorithms are built on a user's natural behavior. Even if limited political knowledge and polarization do not stem directly from filter bubbles and algorithms but rather from natural psychological behavior, it is fair to say that our natural psychological tendencies and behavior are enhanced and further exploited by the algorithms and filter bubbles. Social media users are not complete slaves to their cognitive
biases and can develop other viewpoints. However, extremist, dangerous views can take hold and build on top of preexisting biases under the concealment of a filter bubble. One of the consequences of social media explored here was decreased political knowledge. Van Erkel and Van Aelst concluded that information overload and the passive consumption of content on social media, rather than algorithms, caused decreased learning. Discerning the root cause of these negative effects, whether it be information overload or filter bubbles, remains an open question.
Two major social network platforms, Facebook and TikTok, both had internal documents
containing secrets of their algorithms that contributed to harm. While TikTok refuted all claims,
Facebook, as seen in The Wall Street Journal's Facebook Files, was aware of its algorithms' harms, such as increased polarization and mental health deterioration. Yet CEO Mark Zuckerberg
chose to ignore these problems for the sake of revenue and profit.
Despite the lack of complete empirical evidence supporting the filter bubble theory, owing to the complexity of the social media ecosystem, and regardless of whether filter bubbles are indeed a product of algorithms, the curators of these algorithms need to consider the ethical implications of their creations. The
negative effects of social media algorithms will continue to happen unless something more is
done to combat these issues, as the creators of the algorithms seem unreliable and unable to find
solutions on their own. Looking toward the future, perhaps legislation could put an end to such algorithms, or social media users will need to approach these platforms with greater awareness of their effects and the risks they carry.

References
Allen, S. (2011, September 1). The ten commandments of computer ethics. CPSR. Retrieved
February 4, 2022, from http://cpsr.org/issues/ethics/cei/
Allyn, B. (2021, October 5). Here are 4 key points from the Facebook whistleblower's testimony
on Capitol Hill. NPR. Retrieved October 8, 2021, from
https://www.npr.org/2021/10/05/1043377310/facebook-whistleblower-frances-haugen-
congress
Andrews, L. (2012, February 4). Facebook is using you. The New York Times. Retrieved
January 7, 2022, from https://www.nytimes.com/2012/02/05/opinion/sunday/facebook-is-
using-you.html
Associated Press. (2022, March 3). States start probe of TikTok's impact on young users' mental
health. CBS News. Retrieved March 19, 2022, from
https://www.cbsnews.com/news/tiktok-states-probe-impact-young-users-mental-health/
Berman, R., & Katona, Z. (2020). Curation algorithms and filter bubbles in social
networks. Marketing Science, 39(2), 296–316. https://doi.org/10.1287/mksc.2019.1208
Boston University. (2018, December 18). Filter bubbles, polarization and fake news-how social
media behaviors influence people's political decisions. College of Communications, Center
for Mobile Communication Studies. Retrieved February 2, 2022, from
https://sites.bu.edu/cmcs/2018/12/18/filter-bubbles-polarization-and-fake-news-how-
social-media-behaviors-influence-peoples-political-decisions/
Bozdag, E., & van den Hoven, J. (2015). Breaking the filter bubble: Democracy and
design. Ethics and Information Technology, 17(4), 249–265.
https://doi.org/10.1007/s10676-015-9380-y
Brennan, D. (2021, October 25). What is mob mentality? WebMD. Retrieved November 27,
2021, from https://www.webmd.com/mental-health/what-is-a-mob-mentality
Chandler, D., & Munday, R. (2011). A dictionary of media and communication (1st ed.). Oxford
University Press.
Curry, D. (2022, January 11). Most popular apps (2022). Business of Apps. Retrieved March 19,
2022, from https://www.businessofapps.com/data/most-popular-apps
Dahlgren, P. M. (2021). A critical review of filter bubbles and a comparison with selective
exposure. Nordicom Review, 42(1), 15–33. https://doi.org/10.2478/nor-2021-0002
Dean, W. (2017, July 18). What is the purpose of journalism? American Press Institute.
Retrieved December 7, 2021, from https://www.americanpressinstitute.org/journalism-
essentials/what-is-journalism/purpose-journalism/
Dow Jones & Company. (2021, October 1). The Facebook files. The Wall Street Journal.
Retrieved February 7, 2022, from https://www.wsj.com/articles/the-facebook-files-
11631713039
Eliana, A., Ajija, S. R., Sridadi, A. R., Setyawati, A., & Emur, A. P. (2020). Information
overload and communication overload on social media exhaustion and job performance.
Sys Rev Pharm. Retrieved March 3, 2022, from
https://www.sysrevpharm.org/articles/information-overload-and-communication-overload-
on-social-media-exhaustion-and-job-performance.pdf
Etter, M., & Albu, O. B. (2020). Activists in the dark: Social media algorithms and collective
action in two Social Movement Organizations. Organization, 28(1), 68–91.
https://doi.org/10.1177/1350508420961532
Ferrucci, P., & Tandoc, E. C. (2017). Shift in influence: An argument for changes in studying
gatekeeping. Journal of Media Practice, 18(2-3), 103–119.
https://doi.org/10.1080/14682753.2017.1374675
Gilbert, B. (2018, April 23). Facebook just published a message for its users: No, you're not the
product. Business Insider. Retrieved February 7, 2022, from
https://www.businessinsider.com/facebook-advertising-users-as-products-2018-4
Guess, A., Nyhan, B., & Reifler, J. (2018). Selective exposure to misinformation: Evidence from
the consumption of fake news during the 2016 US presidential campaign. European
Research Council, 9(3), 4.
Hagey, K., & Horwitz, J. (2021, September 15). Facebook tried to make its platform a healthier
place. It got angrier instead. The Wall Street Journal. Retrieved February 9, 2022, from
https://www.wsj.com/articles/facebook-algorithm-change-zuckerberg-
11631654215?mod=article_inline
Harcup, T., & O’Neill, D. (2016). What is news? Journalism Studies, 18(12), 1470–1488.
https://doi.org/10.1080/1461670x.2016.1150193
Haskins, C. (2019, August 15). How does TikTok's 'for you' page work? Users have some wild
theories. Vice. Retrieved March 3, 2022, from
https://www.vice.com/en/article/xwezwj/how-does-tiktoks-for-you-page-work-users-have-
some-wild-theories
Hutchens, M. J., Hmielowski, J. D., Beam, M. A., & Romanova, E. (2021). Trust over use:
Examining the roles of media use and Media Trust on Misperceptions in the 2016 US
presidential election. Mass Communication and Society, 24(5), 701–724.
https://doi.org/10.1080/15205436.2021.1904262
Jenkins, H. W. (2010, August 14). Google and the search for the future. The Wall Street Journal.
Retrieved March 4, 2022, from
https://www.wsj.com/articles/SB10001424052748704901104575423294099527212
Kitchens, B., Johnson, S. L., & Gray, P. (2020). Understanding echo chambers and filter
bubbles: The impact of social media on diversification and partisan shifts in news
consumption. MIS Quarterly, 44(4), 1619–1649.
https://doi.org/10.25300/misq/2020/16371
Leetaru, K. (2019, July 20). The social media filter bubble's corrosive impact on democracy and
the Press. Forbes. Retrieved February 18, 2022, from
https://www.forbes.com/sites/kalevleetaru/2019/07/20/the-social-media-filter-bubbles-
corrosive-impact-on-democracy-and-the-press/?sh=5a2a1b85ad42
Lum, N. (2017, January 27). The surprising difference between "Filter bubble" and "Echo
chamber". Medium. Retrieved March 3, 2022, from https://medium.com/@nicklum/the-
surprising-difference-between-filter-bubble-and-echo-chamber-b909ef2542cc
Mosseri, A. (2018, January 11). News feed FYI: Bringing people closer together. Meta for
Business. Retrieved February 3, 2022, from
https://www.facebook.com/business/news/news-feed-fyi-bringing-people-closer-together
Normark, A., & Oskarsson, R. (2018). Individualizing without excluding: Ethical and technical
challenges: filter bubbles and their effects on society (dissertation). Uppsala University
Publications. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:uu:diva-351925
Nover, S. (2021, October 7). Why the facebook whistleblower doesn't want the company broken
up. Quartz. Retrieved March 4, 2022, from https://qz.com/2070290/why-the-facebook-
whistleblower-doesnt-want-the-company-broken-up/
O'Brien, C. (2022, January 25). How do social media algorithms work? Digital Marketing
Institute. Retrieved March 3, 2022, from https://digitalmarketinginstitute.com/blog/how-
do-social-media-algorithms-work
Pariser, E. (2011, March). Beware online "filter bubbles". TED. Retrieved October 17, 2021,
from https://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles
Pariser, E. (2011). The filter bubble: What the internet is hiding from you. Penguin Books
Limited.
Peterson-Salahuddin, C., & Diakopoulos, N. (2020). Negotiated autonomy: The role of social
media algorithms in editorial decision making. Media and Communication, 8(3), 27–38.
https://doi.org/10.17645/mac.v8i3.3001
Plaisance, P. L. (2014). Media ethics: Key principles for responsible practice. SAGE
Knowledge (2nd ed.). SAGE Publications Inc. Retrieved January 9, 2022, from
https://dx.doi.org/10.4135/9781544308517.
Rainie, L., & Matsa, K. E. (2022, January 5). Trust in America: Do Americans trust the
police? Pew Research Center. Retrieved February 2022, from
https://www.pewresearch.org/2022/01/05/trust-in-america-do-americans-trust-the-police/
Rosenblatt, K. (2021, September 24). TikTok has new mental health resources for its users. Some
experts say it's a good start. NBCNews.com. Retrieved February 25, 2022, from
https://www.nbcnews.com/pop-culture/pop-culture-news/tiktok-has-new-mental-health-
resources-its-users-some-experts-n1279944
Schultz, D., & Hudson, D. (2017, June). Marketplace of ideas. The First Amendment Encyclopedia. Retrieved February 17, 2022, from https://www.mtsu.edu/first-amendment/article/999/marketplace-of-ideas
Shearer, E. (2021, January 12). More than eight-in-ten Americans get news from digital devices.
Pew Research Center. Retrieved February 7, 2022, from https://www.pewresearch.org/fact-
tank/2021/01/12/more-than-eight-in-ten-americans-get-news-from-digital-devices/
Smith, B. (2021, December 5). How TikTok reads your mind. The New York Times. Retrieved
February 28, 2022, from https://www.nytimes.com/2021/12/05/business/media/tiktok-
algorithm.html
Smith, G. (2021, May 8). The history of TikTok: From Musical.ly to the number 1 app in the
world. Dexerto. Retrieved March 28, 2022, from
https://www.dexerto.com/entertainment/the-history-of-tiktok-1569106/
SPJ code of ethics. (2014, September 6). Society of Professional Journalists. Retrieved March 4,
2022, from https://www.spj.org/ethicscode.asp
Spohr, D. (2017). Fake news and ideological polarization. Business Information Review, 34(3),
150–160. https://doi.org/10.1177/0266382117722446
The Daily Dish. (2010, October 10). The filter bubble. The Atlantic. Retrieved January 6, 2022,
from https://www.theatlantic.com/daily-dish/archive/2010/10/the-filter-bubble/181427/
Thompson, A. (2019, February 13). Google's Mission Statement and vision statement (an
analysis). Panmore Institute. Retrieved March 5, 2022, from http://panmore.com/google-
vision-statement-mission-statement
Thwaite, A. (2017, December 26). Echo Chambers and filter bubbles - what's the difference
between the two? The Echo Chamber Club. Retrieved March 1, 2022, from
https://archive.echochamber.club/index.html%3Fp=1698.html
van Erkel, P. F., & Van Aelst, P. (2020). Why don't we learn from social media? Studying effects of and mechanisms behind social media news use on general surveillance political knowledge. Political Communication, 1–19.
https://doi.org/10.1080/10584609.2020.1784328
Wall Street Journal. (2021, July 21). How Tiktok's algorithm figures you out | WSJ. YouTube.
Retrieved March 29, 2022, from https://www.youtube.com/watch?v=nfczi2cI6Cs
Wells, G., Horwitz, J., & Seetharaman, D. (2021, September 14). Facebook knows Instagram is
toxic for teen girls, company documents show. The Wall Street Journal. Retrieved February
6, 2022, from https://www.wsj.com/articles/facebook-knows-instagram-is-toxic-for-teen-
girls-company-documents-show-11631620739?mod=article_inline
Zimmer, F., Scheibe, K., Stock, M., & Stock, W. G. (2019, June 30). Fake news in social media:
Bad algorithms or biased users? Journal of Information Science Theory and Practice.
Retrieved January 6, 2022, from https://doi.org/10.1633/JISTAP.2019.7.2.4
ACADEMIC VITA
Anjelica Nicole Singer
anjelicasinger@gmail.com
EDUCATION:
FIELD EXPERIENCE:
AWARDS:
1. Academic Excellence Scholarship
2. The Ostar-Hutchison Daily Collegian Scholarship
3. Paul Levine Journalism Scholarship
4. Robert R. Gentzel Scholarship
5. Winifred Cook Journalism Scholarship
6. J.W. Van Dyke Memorial Scholarship
7. School Communications Alumni Award
8. Helen Eckstein Study Abroad Scholarship
9. Nov. 2, 2021, Centre County Report newscast placed first in the nation at the BEA Festival of Media Arts competition
10. Nov. 2, 2021, Centre County Report newscast placed first at the 2022 Collegiate Keystone Media Awards
CLUBS: Empowering Women in Law (EWIL), PSN-TV, The Society of Professional Journalists