My Personal Reflections
What was meant to be one episode of a show you missed on a streaming app turns into a binge session. A new show pops up in your recommendations, perfectly aligned with your tastes. You click, intrigued, and hours later, you’re still watching. It feels like the algorithm knows you better than you know yourself. But was this a helpful nudge toward something you’d enjoy—or a calculated manipulation to keep you watching longer?
Algorithms have become the architects of our digital experiences, shaping what we see, buy, and believe. But as their influence grows, so does the need to ask: When does influence cross the line into manipulation? Drawing on the work of Yasha Levine, Cathy O’Neil, Tristan Harris, and Safiya Umoja Noble, this essay explores the distinction between persuasion and manipulation, why governing algorithms is so difficult, and what steps we can take to protect human agency.
Manipulation vs. Persuasion: What’s the Difference?
At first glance, persuasion and manipulation might seem similar—they both aim to influence behavior. But philosophers like Michael Klenk argue that the difference lies in how that influence operates [1]. Persuasion respects autonomy by engaging a person’s rational capacities; it presents information openly, allowing individuals to make informed choices. Manipulation, on the other hand, bypasses critical thinking by exploiting cognitive biases or emotional vulnerabilities.
For example:
Persuasion: A fitness app reminds you to exercise by showing how consistent workouts improve health.
Manipulation: The same app uses guilt-inducing notifications (“You’ve fallen behind everyone else!”) to pressure you into action without giving you space to reflect on your goals.
Tristan Harris warns that algorithms often lean toward manipulation because their primary goal is engagement—not empowerment [2]. Social media platforms, for instance, exploit psychological triggers like fear or curiosity gaps to keep users scrolling. This is what Harris calls “the race to the bottom of the brainstem,” where algorithms prioritize clicks over meaningful interaction.
Cathy O’Neil adds that manipulation becomes particularly dangerous when it’s hidden behind opaque systems [3]. In “Weapons of Math Destruction,” she explains how biased algorithms in hiring or lending decisions can shape people’s lives without their knowledge or consent. Safiya Umoja Noble extends this critique by showing how search engines reinforce harmful stereotypes [4], manipulating cultural narratives in ways that disproportionately harm marginalized groups.
Why It’s Hard to Draw a Line
The distinction between persuasion and manipulation seems clear in theory but becomes murky in practice. What feels like helpful guidance to one person might feel exploitative to another. It all depends on context and individual perception.
Consider this example:
A single parent might appreciate targeted grocery coupons that help them save money.
Someone else might feel uncomfortable knowing their shopping habits are being tracked so closely.
Yasha Levine’s “Surveillance Valley” reminds us that this tension isn’t new. The internet was designed as a tool for surveillance during the Cold War, and today’s algorithms inherit this legacy of control [5]. Platforms don’t just observe our behavior—they shape it, often in ways we can’t see or fully understand.
This subjectivity makes governing algorithms incredibly challenging:
Individual Variability: What one person sees as persuasion might feel like manipulation to another.
Opacity: Most users don’t know how algorithms work or why they’re shown certain content.
Scale: Even small manipulations can cause harm when applied across millions of users.
As philosopher Carissa Véliz argues, manipulation flourishes when systems lack transparency [6]. Without clear insight into how algorithms operate, users are left vulnerable to influence they cannot recognize or resist.
Solutions That Balance Autonomy and Innovation
Navigating the challenges of algorithmic influence requires thoughtful approaches that preserve human agency while embracing technological progress. Here are several strategies that could help strike this balance:
1. Adaptive Transparency
Transparency needs to go beyond generic disclosures like “This app collects your information.” Platforms can offer tailored, actionable explanations for specific actions, making algorithmic processes visible and comprehensible.
For example, a streaming service could explain why it recommends certain shows by saying, “You liked X because it shares themes with Y.”
This kind of transparency empowers users by helping them understand how their choices are being shaped.
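To make the idea concrete, here is a minimal Python sketch of what such a tailored explanation might look like under the hood. The data model, field names, and wording are illustrative assumptions, not any streaming service’s actual system.

```python
# Hypothetical sketch: attach a plain-language reason to each recommendation
# instead of a generic "we collect your data" disclosure.
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    title: str                      # the show being suggested
    source_title: str               # the show the user actually watched
    shared_themes: list[str] = field(default_factory=list)

def explain(rec: Recommendation) -> str:
    """Return a user-facing reason for one recommendation."""
    themes = ", ".join(rec.shared_themes)
    return (f"We suggested '{rec.title}' because you watched "
            f"'{rec.source_title}', and both feature {themes}.")

if __name__ == "__main__":
    rec = Recommendation("Show Y", "Show X",
                         ["slow-burn mysteries", "ensemble casts"])
    print(explain(rec))
```

The point is less the code than the contract it implies: every recommendation ships with a reason a person can read, question, and dispute.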
2. Manipulation Audits
Independent evaluations of algorithms could ensure ethical standards are upheld. These audits would examine:
Whether users understand how they’re being influenced.
Whether algorithms exploit cognitive biases like urgency or fear unfairly.
Whether systems perpetuate harmful biases such as racial or gender discrimination.
Regular audits would provide accountability and help mitigate manipulation risks.
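As a rough illustration of how such an audit might be operationalized, the sketch below encodes the three questions above as a checklist that independent reviewers score. The criterion names and pass/fail logic are assumptions made for this example, not an established audit standard.

```python
# Hypothetical manipulation-audit checklist: reviewers record a finding for
# each criterion, and the system passes only if every criterion is met.
AUDIT_CRITERIA = {
    "influence_is_disclosed": "Users can see how and why they are being influenced.",
    "no_bias_exploitation": "The system does not lean on urgency, fear, or guilt to drive action.",
    "no_discriminatory_harm": "Outcomes do not differ unfairly across racial or gender groups.",
}

def run_audit(findings: dict) -> dict:
    """findings maps criterion name -> True/False as judged by reviewers."""
    results = {name: findings.get(name, False) for name in AUDIT_CRITERIA}
    return {"results": results, "passed": all(results.values())}

if __name__ == "__main__":
    # Example: influence is disclosed, but the platform exploits urgency cues.
    print(run_audit({"influence_is_disclosed": True,
                     "no_bias_exploitation": False,
                     "no_discriminatory_harm": True}))
```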
3. Redefining Algorithmic Goals
The objectives driving algorithm design need to shift from engagement-based metrics—like maximizing screen time—toward more ethical goals that prioritize user well-being.
For instance, social media platforms could focus on fostering meaningful interactions, such as connecting with friends or engaging in constructive dialogue, rather than encouraging passive consumption of content.
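One hedged way to picture this shift is as a change in the objective a ranking system optimizes. In the sketch below, the feature names and weights are invented for illustration; no platform’s real metric is being described.

```python
# Contrast a status-quo objective (predicted watch time) with an illustrative
# "well-being" objective that rewards likely two-way interaction and
# penalizes passive autoplay filler.
def engagement_score(item: dict) -> float:
    return item["predicted_watch_minutes"]

def wellbeing_score(item: dict) -> float:
    return (2.0 * item["predicted_replies"]
            + 1.5 * item["is_from_friend"]
            - 0.5 * item["is_autoplay_filler"])

feed = [
    {"name": "viral clip", "predicted_watch_minutes": 12.0,
     "predicted_replies": 0.1, "is_from_friend": 0, "is_autoplay_filler": 1},
    {"name": "friend's post", "predicted_watch_minutes": 2.0,
     "predicted_replies": 1.2, "is_from_friend": 1, "is_autoplay_filler": 0},
]

print(max(feed, key=engagement_score)["name"])  # viral clip
print(max(feed, key=wellbeing_score)["name"])   # friend's post
```

The same feed, ranked under two different goals, surfaces very different content, which is exactly why the choice of objective is an ethical decision and not just a technical one.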
4. Bias Impact Assessments
Platforms could conduct regular assessments to evaluate how their algorithms affect marginalized communities, similar to environmental impact reports for businesses.
These assessments would help identify and address systemic biases embedded in algorithmic decision-making, ensuring that technology serves all users equitably.
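For a sense of what one ingredient of such an assessment could look like, the sketch below compares approval rates across groups for an algorithmic decision and flags large gaps, in the spirit of a demographic-parity check. The sample data and the 0.8 threshold (the common "four-fifths" rule of thumb) are illustrative assumptions.

```python
# Hypothetical bias-impact check: compare approval rates by group and flag
# the system for review if the lowest rate falls below 80% of the highest.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved: bool) -> rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact(rates: dict) -> float:
    """Ratio of the lowest group's approval rate to the highest."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    sample = ([("group_a", True)] * 80 + [("group_a", False)] * 20
              + [("group_b", True)] * 50 + [("group_b", False)] * 50)
    rates = approval_rates(sample)                      # {'group_a': 0.8, 'group_b': 0.5}
    ratio = disparate_impact(rates)                     # 0.625
    print(rates)
    print("flag for review" if ratio < 0.8 else "ok")   # flag for review
```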
These solutions are not easy to implement. But this is the work required to build a digital ecosystem that respects individual autonomy while fostering innovation and inclusivity.
Living in the Gray Zone
The line between persuasion and manipulation isn’t fixed. It is highly dependent on context, culture, and individual values. This makes it nearly impossible to create one-size-fits-all solutions for governing algorithmic systems. As Yasha Levine explains, the internet was originally designed as a tool for surveillance and control during the Cold War. While this legacy shapes how platforms operate today, it doesn’t mean we’re powerless within it.
Navigating this gray zone requires both individual awareness and systemic accountability:
As individuals, we can question why certain recommendations feel urgent or emotionally charged.
As a society, we can demand greater transparency and ethical oversight from tech companies.
Algorithms are powerful tools, but they should serve us, not steer us. That means holding platforms accountable for how they shape our choices and insisting on systems that prioritize transparency and respect for human agency.
So next time you encounter a personalized ad or curated playlist, don’t just accept it at face value. Ask yourself: What does this reveal about how I’m being understood—and how I’m being guided? The more we question, the harder it becomes for manipulation to thrive in the shadows. Reclaiming control begins with refusing to be a passive participant.
References
1. Klenk, M. (2024). Rethinking manipulation: The indifference view of manipulation. Open for Debate. https://blogs.cardiff.ac.uk/openfordebate/rethinking-manipulation-the-indifference-view-of-manipulation/
2. Harris, T. (2019). Optimizing for engagement: Understanding the use of persuasive technology on internet platforms [Testimony before the U.S. Senate Committee on Commerce, Science, and Transportation]. United States Senate. https://www.commerce.senate.gov/2019/6/optimizing-for-engagement-understanding-the-use-of-persuasive-technology-on-internet-platforms
3. O'Neil, C. (2016). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown Publishing Group.
4. Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. NYU Press.
5. Levine, Y. (2018). Surveillance valley: The secret military history of the internet. PublicAffairs.
6. Véliz, C. (2020). Privacy is power: Why and how you should take back control of your data. Bantam Press.