The Warnings We Ignored: What HCI Taught Us About Manipulation—and Why It Matters for AI

The Essays: All essays and the Playbook will be published October 3, 2025

Designing for Trustworthy AI: The Warnings We Ignored : Manipulation : Authority : Information : Trust

Redesigning Design for AI: Design : Psychology : Collaboration

Capstone: The Leadership Problem — Designing for Humanity

The story of digital technology has often been framed as one of progress: more connection, more empowerment, more possibility. If you trace the history of Human-Computer Interaction (HCI), another story emerges—a series of clear, evidence-based warnings about how technology could erode trust, exploit human vulnerabilities, and destabilize society.

We ignored those warnings with social media. We’re on the verge of ignoring them again with AI.

I came to this firsthand. In 2006–2008, while I was in grad school studying Human-Computer Interaction and Human-Centered Design, Facebook was exploding across campuses and spilling into the mainstream. In our classrooms, the rise of social platforms was unfolding in real time as case study after case study. We could see, almost week by week, how these companies were operationalizing what the social sciences already knew about human behavior and cognition. Social affirmation, desirability, craving, and constant feedback loops weren’t accidents—they were deeply thought-out design decisions engineered into the product. 

What struck me then—and still strikes me now—is how quickly design principles meant to empower people were inverted into techniques for manipulation.

From Connection to Exploitation

HCI began as a field dedicated to usability and accessibility. Over time, those same principles became the blueprint for manipulation. Features designed to help users (personalization, frictionless flows, persuasive nudges) were systematically weaponized to harvest attention, collect customer data, and maximize profit.

In the 1960s, Joseph Weizenbaum issued a warning. His chatbot ELIZA revealed how quickly humans anthropomorphize even simple systems¹. He saw the risks: because people naturally attribute human-like qualities to conversational systems, they are vulnerable to manipulation through interface design², and machines could be used to exploit that trust while obscuring accountability and displacing human judgment. He argued that certain human decisions, those involving moral choice, interpersonal respect, or therapeutic discretion, should never be ceded to machines³. Technical systems, very much including today’s AI, can obscure human responsibility while legitimizing potentially harmful decisions⁴.

My biggest takeaway from Weizenbaum—which has become a personal guiding principle—is that design is a moral choice. How we choose to design interactions carries ethical weight and can potentially undermine human autonomy⁵. 

Fast-forward to today, where social media platforms show Weizenbaum’s concerns realized at scale:

  • Dark Patterns make cancellation harder than sign-up.

  • Algorithmic Curation drives outrage and polarization because engagement drives profit (see the sketch after this list).

  • Data Extraction happens invisibly, turning privacy into a transaction that customers aren’t consciously making.
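
To make the curation incentive concrete, here is a minimal, illustrative sketch of an engagement-ranked feed. It is not any platform’s actual code; the Post fields, the weights, and the engagement_score and rank_feed functions are assumptions chosen only to show the logic: a feed ordered purely by predicted engagement has no term for truth, well-being, or consent.

```python
# Illustrative sketch only: a toy engagement-ranking loop, not any platform's
# real algorithm. Fields, weights, and function names are assumptions.
from dataclasses import dataclass

@dataclass
class Post:
    id: str
    predicted_clicks: float      # model's estimate of click-through
    predicted_shares: float      # model's estimate of reshares
    predicted_dwell_secs: float  # model's estimate of time spent

def engagement_score(post: Post) -> float:
    # Revenue scales with attention, so the objective is pure engagement.
    # Nothing here asks whether the content is accurate or healthy;
    # outrage wins simply because it tends to score higher.
    return (2.0 * post.predicted_shares
            + 1.0 * post.predicted_clicks
            + 0.01 * post.predicted_dwell_secs)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The feed is ordered by expected engagement alone.
    return sorted(posts, key=engagement_score, reverse=True)
```

The point of the sketch is the objective function: whatever maximizes the score gets amplified, and outrage reliably does.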

Social Media as a Case Study in Ignored Warnings

By the late 1990s, HCI researchers had already documented the harms we now accept as the cost of digital life: anonymity fueling toxic behavior⁶, heavy internet use leading to loneliness⁷, and design biases eroding privacy⁸. 

When growth was on the line, the industry chose to ignore or obfuscate the research⁹. Social media companies adopted engagement-at-all-costs business models that weakened social ties¹⁰, polluted our information environment¹¹, and destabilized democratic processes through mis- and disinformation¹².

AI Raises the Stakes

If social media showed us what happens when warnings go unheeded, AI is showing us what happens when those same manipulative architectures are supercharged.

AI systems don’t just nudge—they hyper-nudge. Under the guise of personalization, they predict, adapt, and exploit vulnerabilities in real time. Wrapped in human-sounding voices and sycophantic behaviors, they present themselves as “useful” and “desirable,” while bending human cognition toward the systems’ goals.
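
To make “hyper-nudge” concrete, here is a minimal, hypothetical sketch of an adaptive nudge loop: an epsilon-greedy selection rule that learns, per user, which prompt keeps them engaged longest. The NUDGES list, the choose_nudge and record_outcome functions, the reward signal, and the parameters are all assumptions for illustration, not any real product’s code.

```python
# Illustrative sketch of a "hyper-nudge": a per-user epsilon-greedy loop
# that adapts its prompt to whatever keeps each user engaged the longest.
# All nudges, rewards, and parameters are hypothetical.
import random
from collections import defaultdict

NUDGES = [
    "You have unread notifications",
    "Your friends posted while you were away",
    "People are reacting to your comment right now",
]

# Running average engagement (e.g., seconds of session time) per user, per nudge.
value = defaultdict(lambda: [0.0] * len(NUDGES))
count = defaultdict(lambda: [0] * len(NUDGES))

def choose_nudge(user_id: str, epsilon: float = 0.1) -> int:
    # Mostly exploit the nudge that has worked best on this user so far;
    # occasionally explore, so the system keeps probing for what else works.
    if random.random() < epsilon:
        return random.randrange(len(NUDGES))
    return max(range(len(NUDGES)), key=lambda i: value[user_id][i])

def record_outcome(user_id: str, nudge_idx: int, engagement_secs: float) -> None:
    # Update the running average: the loop learns which message each
    # individual responds to, in real time, without their awareness.
    count[user_id][nudge_idx] += 1
    n = count[user_id][nudge_idx]
    value[user_id][nudge_idx] += (engagement_secs - value[user_id][nudge_idx]) / n
```

A static nudge sends the same prompt to everyone; a loop like this converges on whatever works on you specifically, which is what makes it a hyper-nudge.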

We already see algorithms making consequential decisions in hiring, healthcare, and criminal justice—domains where design choices require a humanity-first mindset. The design logic of the social media era has become something larger: a systemic erosion of autonomy, the intentional manipulation of thought, and the quiet normalization of behavioral influence at scale.

The Leadership Imperative

For executives and policymakers, history is unambiguous. How we choose to design AI systems is a humanity-centered choice. When we build, configure, or deploy AI systems, we are doing more than selling products; we are shaping human behavior and influencing societal norms.

The question isn’t whether AI will affect humanity. It already does. The question is whether leaders will learn from recent history, reclaim HCI principles for their original purpose, and choose to design technology that strengthens rather than exploits humanity.

This piece opens my new series, Designing for Humanity in the Age of AI: practical essays for executives and policymakers who need solutions that go beyond slogans.

The full series and Leadership Playbook will launch Friday, October 3, 2025.

References

[1] Berry, D. M. (2023). The limits of computation: Joseph Weizenbaum and the ELIZA chatbot. Weizenbaum Journal of the Digital Society, 3(3). https://ojs.weizenbaum-institut.de/index.php/wjds/article/view/106/96

[2] Lambert, J. (2023). Lessons and warnings from the original chatbot – ELIZA. Cprime Blog. https://www.cprime.com/resources/blog/lessons-and-warnings-from-the-original-chatbot-eliza

[3] Tarnoff, B. (2023, July 25). Weizenbaum’s nightmares: How the inventor of the first chatbot turned against AI. The Guardian. https://www.theguardian.com/technology/2023/jul/25/joseph-Weizenbaum-inventor-eliza-chatbot-turned-against-artificial-intelligence-ai

[4] Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. W. H. Freeman and Company.

[5] Weizenbaum, J. (1966). ELIZA—a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9(1), 36–45. https://doi.org/10.1145/365153.365168

[6] Greenfield, D. N. (1999). Psychological characteristics of compulsive Internet use: A preliminary analysis. CyberPsychology & Behavior, 2(5), 403–412. https://doi.org/10.1089/cpb.1999.2.403

[7] Moody, E. J. (2001). Internet use and its relationship to loneliness. CyberPsychology & Behavior, 4(3), 393–401. https://doi.org/10.1089/109493101300210303

[8] Ehn, P., & Löwgren, J. (1997). Design for quality-in-use: Human-computer interaction meets information systems development. In Handbook of human-computer interaction (2nd ed., pp. 299–313). North-Holland. https://doi.org/10.1016/B978-044481862-1.50078-9

[9] Di Salvo, P. (2022). Leaking black boxes: Whistleblowing and big tech invisibility. First Monday, 27(12). https://doi.org/10.5210/fm.v27i12.12670

[10] Berghel, H. (2020). New perspectives on (anti)social media. Computer, 53(3), 77–82. https://doi.org/10.1109/MC.2019.2958448

[11] Hemanth, S. V., Sinha, A., Sathua, A., Kumari, D., & Kumar, D. (2024). Social media and misleading information in a democracy: A mechanism design approach. International Journal of Engineering Technology and Management Sciences, 8(3). https://doi.org/10.46647/ijetms.2024.v08i03.022

[12] Kreiss, D., & McGregor, S. C. (2018). Technology firms shape political communication: The work of Microsoft, Facebook, Twitter, and Google with campaigns during the 2016 U.S. presidential cycle. Political Communication, 35(1), 21–37. https://doi.org/10.1080/10584609.2017.1364814