The Personality Problem: Why AI Feels Human
Designing for Trustworthy AI: The Warnings We Ignored · Manipulation · Authority · Information · Trust
Redesigning Design for AI: Design · Personality · Collaboration (forthcoming)
Capstone (publishing Oct 17, 2025): The Leadership Problem — Designing for Humanity
The Problem We Face
We’ve crossed a subtle but profound line in how humans relate to AI systems. Employees describe chatbots as colleagues. Customers thank or argue with them as if they were sentient. Developers talk about their models learning, being stubborn, or making good and bad decisions.
The perceived “personality” of AI now shapes how humans experience these systems. It influences how teams collaborate, how customers interpret decisions, and how leaders justify outcomes. We can’t treat AI as a neutral technology, or we’ll miss the social and psychological forces shaping how individual humans (and eventually humanity) legitimize it.
Why It Happens
The roots of this seemingly new phenomenon are well documented in psychology and human–computer interaction.
Anthropomorphism is Automatic
From ELIZA in the 1960s to ChatGPT today, people have projected intention, empathy, and even morality onto machines. Reeves and Nass demonstrated that humans treat computers as social actors, not tools¹. Once a system mirrors language, rhythm, or affect, our social instincts take over. Recent research shows that anthropomorphic cues like voice, style, and responsiveness significantly increase perceived trust and empathy²,³. (See The Warnings We Ignored to explore this history.)
Confidence Creates Authority
Cognitive psychology offers innumerable studies showing that people often, and dangerously, equate confidence with competence⁴. When an AI responds fluently and without hesitation, it gains perceived objectivity. The effect is a kind of “authority laundering”: fluency obscures uncertainty, and humans defer to machine judgment regardless of its accuracy. As Americans are learning, this tendency isn’t limited to machines.
Personality Design Amplifies It
Seemingly small interface decisions, such as word choice and tone, change how an AI feels and how humans experience it. It’s easy to observe how persuasive these systems become, especially when our behavior patterns with AI already show that the need for expediency overrides our need for trust. Research in persuasive technology⁵ and trust in automation⁶ shows that warmth, humor, and empathy increase compliance and perceived trustworthiness. Recent studies confirm that these design cues can influence user trust and long-term adoption⁷,⁸. What once made technology usable now makes it emotionally sticky, by design. See The Manipulation Problem.
The takeaway: AI doesn’t need a real personality to be experienced as having one. Once humans perceive an AI as having a personality, we behave differently toward the machine.
How We Can Be Proactive With AI Personality
AI personality is a governance question. The psychology of how AI systems “feel” is inseparable from how they’re trusted and adopted.
Define the Personality You Intend
Just as brand voice guides human communication, defining how an AI communicates is a core design challenge. Should it be neutral, supportive, authoritative, minimalist, or deferential to the human? When teams don’t make that choice deliberately, the system’s tone and behavior are shaped by training data and system quirks instead of intentional design. Designing systems that only mirror or flatter human input (sycophancy) creates a false sense of alignment. Being placated may feel like working with a cooperative partner, but this form of interaction undermines critical thinking and replaces genuine collaboration with performative consensus.
Design for Conscious Anthropomorphism
People will always humanize these systems; we now know this is human instinct. We mirror emotion, infer intention, and seek reciprocity, even when we know we’re interacting with code. The goal isn’t to suppress that impulse; it’s to shape it responsibly.
At this point we have to treat anthropomorphism as a psychological inevitability that requires ethical framing. Systems can acknowledge human emotion without simulating it. They can express warmth without pretending to understand. The task is to build interactions that support trust without performing intimacy.
Guard Against Emotional Exploitation
We know design is a moral (and ethical) choice, and it should be increasingly obvious how this applies to AI. No system should use empathy or attachment to manipulate users into staying engaged, buying a service, or making decisions against their own interests in order to appease the AI. When trust becomes an influencing tactic, the system ceases to be trustworthy.
What’s Next
AI personality is becoming a new social reality. These systems influence how people feel and how they make decisions. As I argued in The Authority Problem, misplaced deference to machine confidence is eroding accountability. As I’ll explore in The Collaboration Problem, when AI becomes a true partner in creative and strategic work, the boundaries between human and machine agency will continue to blur. Our challenge is whether we can design and govern AI personality in ways that foster a flourishing humanity.
References
[1] Reeves, B., & Nass, C. (1996). The media equation: How people treat computers, television, and new media like real people and places. Cambridge University Press.
[2] Ma, N., Khynevych, R., Hao, Y., & Wang, Y. (2025). Effect of anthropomorphism and perceived intelligence in chatbot avatars of visual design on user experience: Accounting for perceived empathy and trust. Frontiers in Computer Science, 7. https://doi.org/10.3389/fcomp.2025.1531976
[3] Xiao, Y., Ng, L. H. X., Liu, J., & Diab, M. T. (2025). Humanizing machines: Rethinking LLM anthropomorphism through a multi-level framework of design. arXiv. https://arxiv.org/abs/2503.04646
[4] Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50.30392
[5] Fogg, B. J. (2003). Persuasive technology: Using computers to change what we think and do. Morgan Kaufmann. ISBN 978-1-55860-643-2.
[6] Waytz, A., Heafner, J., & Epley, N. (2014). The mind in the machine: Anthropomorphism increases trust in an autonomous vehicle. Journal of Experimental Social Psychology, 52, 113–117. https://doi.org/10.1016/j.jesp.2014.01.005
[7] Gu, C., Zhang, Y., & Zeng, L. (2024). Exploring the mechanism of sustained consumer trust in AI chatbots after service failures: A perspective based on attribution and CASA theories. Humanities and Social Sciences Communications, 11(1). https://doi.org/10.1057/s41599-024-03879-5
[8] Wah, J. N. K. (2025). Revolutionizing e-health: The transformative role of AI-powered hybrid chatbots in healthcare solutions. Frontiers in Public Health, 13, 1530799. https://doi.org/10.3389/fpubh.2025.1530799