The Trust Problem: Why Compliance Isn’t Enough for AI
The Problem Leaders Face
For organizations deploying AI, regulatory compliance should be table stakes. In reality, most operate in a patchwork environment. The EU AI Act and GDPR have set strong precedents abroad, while U.S. companies face much looser guardrails: fragmented, sector-specific rules and ever-evolving FTC guidance.
The result is uneven accountability and a reliance on voluntary compliance and frameworks. Yet even where regulation is strongest, compliance doesn't equal trust. Customers, employees, and partners want proof of legal adherence, sure, but what they really want is confidence that these systems can be trusted. Without that confidence, adoption slows, consumer skepticism deepens, and the billions invested in AI become a far more uncertain bet.
Why It Happens
Trust is more fragile and more complex than a regulatory checkbox.
Psychology shows us that trust is a cognitive process. People assess risk, weigh cues, and decide whether to be vulnerable. Daniel Kahneman's dual-process research shows that fast, heuristic-based judgments (System 1) infer trust from fluency, authority signals, and confident tone, while slower, reflective reasoning (System 2) updates only with effort¹. Because of loss aversion and negativity dominance, a single biased output or misleading explanation can erase years of accumulated trust overnight. The implication is to design for graceful failure. Because you've done user research with real humans, you'll know which uncertainties to surface, how to set expectations upfront, and how to provide plain-language rationale and a simple path to appeal.
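To make that concrete, here is a minimal sketch, in Python, of what a graceful-failure contract might look like at the interface level. The names (AssistedDecision, appeal_url) and the 0.7 confidence threshold are illustrative assumptions, not a prescribed API:

```python
from dataclasses import dataclass, field

@dataclass
class AssistedDecision:
    """One AI-assisted decision, packaged to fail gracefully."""
    outcome: str                  # what the system recommends
    confidence: float             # calibrated probability between 0.0 and 1.0
    rationale: str                # plain-language explanation of the key factors
    limitations: list[str] = field(default_factory=list)  # known blind spots, surfaced upfront
    appeal_url: str = "/appeals"  # hypothetical route to a human reviewer

    def present(self) -> str:
        """Render the decision with uncertainty surfaced, not hidden."""
        # 0.7 is an assumed threshold for routing to a human; tune per product.
        hedge = ("Low confidence: a person will review this."
                 if self.confidence < 0.7
                 else f"Confidence: {self.confidence:.0%}")
        caveats = "; ".join(self.limitations) or "none documented"
        return (f"{self.outcome}\n{hedge}\n"
                f"Why: {self.rationale}\n"
                f"Known limitations: {caveats}\n"
                f"Disagree? Appeal at {self.appeal_url}")

if __name__ == "__main__":
    decision = AssistedDecision(
        outcome="Loan application flagged for manual review",
        confidence=0.62,
        rationale="Income history is shorter than the patterns the model was trained on.",
        limitations=["Limited training data for self-employed applicants"],
    )
    print(decision.present())
```

The point of the sketch is that uncertainty, rationale, and the appeal path are part of the decision object itself, not an afterthought bolted onto the UI.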
Sociology reminds us that trust functions at the institutional level. Consumers don't evaluate a single product or experience in isolation; what matters is whether people see an organization (or an industry) as legitimate and aligned with social values. Hardin's research showed that trust often depends on "encapsulated interests," where people believe institutions act in ways consistent with their own interests². Part of the human condition is that you need to trust those around you: trust is what helps people work together when they can't reasonably check everything for themselves³. A foundational rule of organizational survival is that people must see the organization as capable, honest, and playing by the same rules as the rest of society⁴. So consumers don't just evaluate a single AI product; they assess whether the organization (or sector) embodies competence, transparency, and fairness. A failure at one company can cascade into distrust across an industry.
HCI and design research reinforce that trust is shaped at the point of interaction. Decades of work on trust in automation shows that opacity and over-hyping erode user confidence, while transparency, clarity, and the ability to contest outcomes build earned trust among the people who use those systems⁵. Nass and Reeves' deeply interesting (and highly recommended) work on The Media Equation demonstrated how readily people attribute human-like qualities to technology, projecting competence, authority, or even fairness onto systems. Conversational systems in particular create a sense of relationship and trust, making users more likely to accept outputs uncritically⁶. More alarming, meta-analysis has confirmed that anthropomorphic cues such as simulated voice and personality significantly increase user trust, often beyond what the system's actual reliability warrants⁷. Interfaces (read: designed systems) that obscure decision logic or exaggerate capability invite suspicion, and design choices that frame systems as competent, fair, or authoritative exploit human tendencies to over-trust them. In short, the machines are built to systematically exploit humans, at scale.
How We Fix It
Trust can’t be legislated—but it can be designed, governed, and earned. In a fragmented regulatory environment, leaders face a choice: 1) chase compliance across jurisdictions or 2) set their own higher, consistent standards.
From Compliance to Maturity
Start with regulatory adherence, then build organizational practices that make trust measurable and verifiable: independent audits, ethical review boards, transparent reporting. Don't treat fragmented laws as ceilings; treat the toughest standards (GDPR, the EU AI Act) as the organizational baseline everywhere. AI maturity is reached when trust is operationalized rather than deferred until legislation demands it.
Example: A U.S. tech firm applies EU AI Act requirements globally, even before enforcement, publishing annual algorithmic risk reports across all markets. This signals to partners and regulators that it’s prepared for any jurisdiction.
From Maturity to Differentiation
Treat trust as a strategic asset and a competitive advantage. Companies that proactively demonstrate fairness, explainability, and human oversight gain an edge with customers and partners wary of AI hype. Where regulation is weak, that proactive posture builds adoption while competitors face skepticism; where regulation is strong, exceeding compliance earns credibility. In both cases the message is the same: trust is a brand asset, not a burden.
Example: A consumer electronics company integrates “AI Nutrition Labels” into its devices, showing users in plain language how the onboard AI makes decisions. This transparency becomes a selling point, especially in markets without regulation requiring it.
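As a thought experiment, here is a minimal sketch, in Python, of the data that might sit behind such a label. The field names are illustrative assumptions, loosely inspired by published "model card" ideas rather than any regulatory schema:

```python
import json

# Hypothetical "AI Nutrition Label": a plain-language disclosure a device could
# ship alongside its on-board model. Field names are illustrative assumptions.
AI_NUTRITION_LABEL = {
    "feature": "Smart photo sorting",
    "what_it_does": "Groups your photos by faces and places, entirely on the device.",
    "data_used": ["Photos stored on this device"],
    "data_leaves_device": False,
    "known_limitations": [
        "Less accurate in low light",
        "May confuse people with similar features",
    ],
    "human_override": "You can correct or delete any grouping in Settings.",
    "last_audited": "2025-06",
}

if __name__ == "__main__":
    # Render the label roughly the way it might appear on a settings screen.
    print(json.dumps(AI_NUTRITION_LABEL, indent=2))
```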
From Differentiation to Industry Standard
Lead the field. Organizations that lead on trust set the bar for others and often shape the next wave of regulation. Instead of waiting for clarity, they create it, establishing themselves as credible voices in an uncertain landscape. Publishing open frameworks, sharing methodologies, and demonstrating accountability make it more likely that regulators adopt your approach as the norm. Organizations that understand scale grasp this.
Example: A multinational bank develops and open-sources its AI Fairness Toolkit, which becomes widely adopted by peers. Regulators reference it in guidance, cementing the bank’s leadership position.
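The bank and its toolkit are hypothetical, but the kind of check such a toolkit would ship is not. Here is a minimal sketch, in Python, of one common fairness measure: a disparate impact ratio over per-group approval rates. The data shape and the informal 80% threshold are illustrative assumptions:

```python
from collections import defaultdict

def approval_rates(decisions: list[dict]) -> dict[str, float]:
    """Per-group approval rate: approvals / applications for each group."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d["group"]] += 1
        approvals[d["group"]] += int(d["approved"])
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions: list[dict]) -> float:
    """Ratio of the lowest to the highest group approval rate.
    Values near 1.0 suggest parity; the informal '80% rule' flags ratios below 0.8."""
    rates = approval_rates(decisions)
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    sample = [
        {"group": "A", "approved": True},  {"group": "A", "approved": True},
        {"group": "A", "approved": False}, {"group": "B", "approved": True},
        {"group": "B", "approved": False}, {"group": "B", "approved": False},
    ]
    print(f"Approval rates: {approval_rates(sample)}")
    print(f"Disparate impact ratio: {disparate_impact_ratio(sample):.2f}")
```

A single ratio is a starting point, not a verdict; the value of open-sourcing such a toolkit is that peers and regulators can inspect, critique, and extend the measures themselves.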
What’s Next
Compliance might keep you out of court. Trust will keep you in business.
As I argued in The Warnings We Ignored, regulation alone couldn’t stop social media from destabilizing confidence in institutions. With AI, the stakes are higher. That’s why in my next piece, The Design Problem: When Human-Centered Design Becomes the Lever of Trust, I’ll explore how rethinking HCD can give organizations the tools to build systems that don’t just avoid harm—but actively strengthen trust.
References
[1] Gardner, L. A. (2012). Thinking, Fast and Slow by Daniel Kahneman. Journal of Risk and Insurance, 79(4), 1143–1145. https://doi.org/10.1111/J.1539-6975.2012.01494.X
[2] Hardin, R. (2002). Trust and Trustworthiness. Russell Sage Foundation. http://www.jstor.org/stable/10.7758/9781610442718
[3] Gambetta, D. (1988). Can we trust trust? In D. Gambetta (Ed.), Trust: Making and breaking cooperative relations (pp. 213–237). Blackwell.
[4] Suchman, M. C. (1995). Managing legitimacy: Strategic and institutional approaches. Academy of Management Review, 20(3), 571–610. https://doi.org/10.2307/258788
[5] Lee, J. D., & See, K. A. (2004). Trust in automation: Designing for appropriate reliance. Human Factors, 46(1), 50–80. https://doi.org/10.1518/hfes.46.1.50_30392
[6] Reeves, B., & Nass, C. I. (1996). The media equation: How people treat computers, television, and new media like real people and places. Cambridge University Press.
[7] Hancock, P. A., Billings, D. R., Schaefer, K. E., Chen, J. Y. C., de Visser, E. J., & Parasuraman, R. (2011). A meta-analysis of factors affecting trust in human-robot interaction. Human Factors, 53(5), 517–527. https://doi.org/10.1177/0018720811417254