The Manipulation Problem: When AI Steers the Human
The Problem We Face
Executives today are caught in a trust crisis. Customers, employees, and regulators no longer tolerate manipulative digital practices such as subscription traps and confusing consent flows; increasingly, they simply avoid the brands that persistently use these tactics.
This is the result of design decisions that have, predictably, become real strategic risks. Deceptive practices have led to lawsuits, brand damage, and a smattering of regulatory crackdowns. More importantly, the companies behind them risk losing the confidence of the very people their products are meant to serve. In short, these tired practices are just bad business.
Why It’s Happening
The roots of manipulation lie in the same principles once celebrated for making technology usable. Over time, those principles were co-opted and weaponized.
Ease of Use → Removal of Friction. What began as designing smooth pathways became the practice of steering users toward choices that serve the business, not the customer.
Persuasive Design → Behavioral Coercion. Insights from psychology—nudges, defaults, reinforcement loops—have been sharpened into mechanisms of control.
Personalization → Exploiting Vulnerabilities. Data-driven targeting turned from relevance to exploitation, tailoring experiences to user weaknesses.
Engagement → Addiction. Metrics meant to measure value became incentives to maximize time-on-platform, regardless of human cost.
In other words, the choice to weaponize the principles of human-computer interaction has become a normalized business practice. The same science that should empower autonomy and agency is routinely inverted into a tool of manipulation.
How We Fix It
There is good news: the same fields that diagnose manipulation also point toward solutions. Manipulative interaction design is optimized for ever more engagement, which, as a growing body of research shows, is not optimal for human well-being. It is common to see consent workflows that are legally compliant but practically meaningless, and products designed for short-term growth at the cost of long-term trust. These mistakes are now amplified by AI systems that adapt at scale. Recommendation engines already exploit cognitive biases to keep users hooked. Large language models can generate synthetic persuasion that feels conversational but is designed to steer decisions and play on human emotion. Hyper-personalized nudges can narrow choices so subtly that the erosion of human autonomy is barely perceptible.
The task now is not to lament those failures, but to build the systems of accountability, design practice, and governance that prevent them from repeating in the AI era. In short, we need a mindset shift, and the social sciences show that this shift requires intentional structures and standards.
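To make the trade-off concrete, here is a minimal sketch, in Python, of the difference between ranking recommendations on engagement alone and ranking them with a well-being penalty. Every field name, weight, and proxy metric below is a hypothetical assumption, not a description of any real system.

```python
# Hypothetical sketch: scoring recommendations on engagement alone vs.
# engagement adjusted by a well-being penalty. All fields and weights
# are illustrative, not taken from any real product.

from dataclasses import dataclass

@dataclass
class Candidate:
    item_id: str
    predicted_watch_minutes: float   # proxy for engagement
    late_night_binge_score: float    # 0..1, proxy for compulsive-use risk

def engagement_only_score(c: Candidate) -> float:
    return c.predicted_watch_minutes

def wellbeing_adjusted_score(c: Candidate, penalty_weight: float = 30.0) -> float:
    # Subtract a penalty proportional to the compulsive-use proxy, so items
    # that drive engagement through bingeing rank lower.
    return c.predicted_watch_minutes - penalty_weight * c.late_night_binge_score

candidates = [
    Candidate("calm_documentary", predicted_watch_minutes=18, late_night_binge_score=0.1),
    Candidate("autoplay_rabbit_hole", predicted_watch_minutes=25, late_night_binge_score=0.9),
]

print(max(candidates, key=engagement_only_score).item_id)     # autoplay_rabbit_hole
print(max(candidates, key=wellbeing_adjusted_score).item_id)  # calm_documentary
```

The two objectives pick different winners from the same candidates, which is the whole point: what a system optimizes for is a design decision, not a technical inevitability.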
Embed AI Ethics into Governance
Governance cannot remain a siloed compliance function. Why? Because good governance frameworks are exactly what designers are craving: they serve as the guardrails within which ethical design decisions can be made. Organizations need cross-functional AI oversight boards (design, data science, psychology, legal, compliance) that review how AI systems influence behavior, and they should require algorithmic transparency reports that document optimization goals, training data assumptions, and trade-offs. How do we know this is best practice? Regulations like the EU AI Act, NIST’s AI Risk Management Framework, and emerging FTC guidance are already moving in this direction. If we have not learned this yet, we need to now: Big Tech will build whatever it can, regardless of whether it should. If we want balance in how AI is developed and deployed, we need non-technical perspectives at every level. The organizations that operationalize these practices now will not only stay ahead of compliance but also set a higher standard for trust.
Example: A governance board rejecting a recommender system design that maximizes engagement minutes if it also increases compulsive use among teens.
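To make that kind of decision repeatable rather than ad hoc, a board could encode its gating rule against the metrics documented in an algorithmic transparency report. The sketch below is a hypothetical illustration; the report fields, thresholds, and the `review` function are assumptions, not an established standard.

```python
# Hypothetical sketch of a governance gating rule: approve a proposed
# recommender change only if engagement gains do not come with a measurable
# increase in compulsive use among teens. Field names and thresholds are
# illustrative assumptions.

from dataclasses import dataclass

@dataclass
class TransparencyReport:
    optimization_goal: str
    engagement_lift_pct: float           # projected change in engagement minutes
    teen_compulsive_use_lift_pct: float  # projected change in a compulsive-use proxy for teens
    known_tradeoffs: list[str]

def review(report: TransparencyReport, max_teen_harm_lift_pct: float = 0.0) -> str:
    if report.teen_compulsive_use_lift_pct > max_teen_harm_lift_pct:
        return (f"REJECT: '{report.optimization_goal}' raises teen compulsive use "
                f"by {report.teen_compulsive_use_lift_pct:.1f}%")
    return f"APPROVE: '{report.optimization_goal}' (engagement +{report.engagement_lift_pct:.1f}%)"

report = TransparencyReport(
    optimization_goal="maximize watch minutes",
    engagement_lift_pct=4.2,
    teen_compulsive_use_lift_pct=1.8,
    known_tradeoffs=["longer late-night sessions for under-18 accounts"],
)
print(review(report))  # REJECT: 'maximize watch minutes' raises teen compulsive use by 1.8%
```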
Conduct AI Manipulation Audits
An audit should cover the entire AI stack: the UI, the workflows, and, most importantly, the algorithmic logic that shapes behavior. Where do recommendation engines create addictive loops? Where do personalization systems systematically disadvantage certain groups? Where might generative AI cross the line between simple assistance and covert persuasion? Map these points across sign-ups, consent mechanisms, cancellations, and recommendations, and treat them not as “growth tactics,” but as predictable failure modes that endanger trust.
Example: An audit that flags when an AI assistant recommends its own ecosystem products disproportionately, revealing conflicts of interest baked into optimization.
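One way to operationalize such an audit is a simple self-preferencing check over recommendation logs: compare how often the assistant recommends first-party products with their share of the eligible catalog. The sketch below is illustrative; the log format, the `audit` function, and the tolerance threshold are assumptions, not a standard method.

```python
# Hypothetical sketch of a self-preferencing audit: flag an AI assistant that
# recommends first-party ("own ecosystem") products far more often than their
# share of the eligible catalog would suggest. Data shape and threshold are
# illustrative assumptions.

def self_preferencing_rate(recommendation_log: list[dict]) -> float:
    own = sum(1 for r in recommendation_log if r["vendor"] == "first_party")
    return own / len(recommendation_log) if recommendation_log else 0.0

def audit(recommendation_log: list[dict], catalog_first_party_share: float,
          tolerance: float = 0.10) -> str:
    observed = self_preferencing_rate(recommendation_log)
    if observed > catalog_first_party_share + tolerance:
        return (f"FLAG: first-party items are {observed:.0%} of recommendations "
                f"vs. {catalog_first_party_share:.0%} of the catalog")
    return "OK: no disproportionate self-preferencing detected"

log = [{"item": "speaker_x", "vendor": "first_party"},
       {"item": "speaker_y", "vendor": "third_party"},
       {"item": "hub_z", "vendor": "first_party"},
       {"item": "bulb_q", "vendor": "first_party"}]
print(audit(log, catalog_first_party_share=0.25))  # FLAG: 75% vs. 25%
```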
Create Manipulation-Resistant Design Standards
The immediate task is to create practical design safeguards that reduce the risk of AI-driven manipulation. Done well, these standards require systems to document intended human outcomes, test explicitly for manipulative failure modes, and include well-being metrics alongside engagement or growth metrics. The goal is not to optimize behavior but to ensure AI systems cannot exploit vulnerabilities at scale.
Example: A product team implementing a “consent flow checklist” that flags when an AI-driven onboarding experience buries opt-out options or uses coercive defaults. This checklist becomes a standard gating requirement before deployment—just as accessibility and security QA tests are today.
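A minimal version of that checklist could even run as an automated pre-deployment gate, alongside accessibility and security checks. The sketch below assumes a declarative description of the onboarding consent flow; the field names and rules are hypothetical, meant only to show the gating pattern rather than a finished standard.

```python
# Hypothetical sketch of a consent-flow gating check: fail the release if the
# AI-driven onboarding buries the opt-out or uses coercive defaults. The flow
# description format and field names are illustrative assumptions.

def consent_flow_checklist(flow: dict) -> list[str]:
    violations = []
    if flow.get("opt_out_clicks_required", 0) > flow.get("opt_in_clicks_required", 0):
        violations.append("Opt-out requires more clicks than opt-in (buried opt-out).")
    if flow.get("data_sharing_default") is True:
        violations.append("Data sharing is enabled by default (coercive default).")
    if not flow.get("plain_language_summary", False):
        violations.append("No plain-language summary of what the user is consenting to.")
    return violations

onboarding_flow = {
    "opt_in_clicks_required": 1,
    "opt_out_clicks_required": 4,
    "data_sharing_default": True,
    "plain_language_summary": False,
}

issues = consent_flow_checklist(onboarding_flow)
if issues:
    # Block deployment the same way a failed accessibility or security test would.
    raise SystemExit("Consent gate failed:\n- " + "\n- ".join(issues))
```

Treated as a gating requirement, this kind of check turns an ethical intention into a repeatable release criterion rather than a one-time design review.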
The organizations that embrace this shift will do more than avoid risk. They will lead the next era of digital trust. In a marketplace where algorithms can manipulate at scale, trust in AI systems will not only be a regulatory requirement — it will be the decisive competitive advantage.
What’s Next
This piece connects to a broader question: if design has been weaponized before, what happens when AI begins to design, persuade, and manipulate at scale? I’ll explore that in my article on algorithmic authority, The Authority Problem: When Machines Make the Call.