The Design Problem: When Human-Centered Design Becomes the Lever of Trust


Designing for Trustworthy AI: The Warnings We Ignored : Manipulation : Authority : Information : Trust

Redesigning Design for AI: Design : Psychology (forthcoming) : Collaboration (forthcoming)

Capstone (publishing Oct 17, 2025): The Leadership Problem — Designing for Humanity

The Problem We Face

Human-Centered Design changed everything about how we build technology. It taught us to start with people and to see needs, emotions, and lived experience as the real material of innovation. For those of us who practice design, that idea anchored our work in the effort to make things more usable, useful, and desirable.

AI is already rewriting these rules, because systems aren't static tools anymore: they learn, adapt, and act with a kind of autonomy that feels less like software and more like a thoughtful collaborator. AI systems make judgments, often with hidden trade-offs, and quietly reshape how people think, decide, and relate to one another. In short, AI is dramatically accelerating what we already know about humans and technology from decades of research explored and codified in socio-technical theory. The real difference now is scale. We're no longer studying these dynamics from afar; we're living inside a co-evolutionary system that is reshaping us as we, hopefully, shape it.

As humans, we're still at the center, and we bring the data. I believe our task now is to design for something different: to preserve human autonomy and to ensure that we shape the machines at least as much as they shape us.

As I wrote in The Warnings We Ignored, the lenses that shape my view are the theories and practices behind human–computer interaction, information science, and human-centered design. Seeing through these lenses has made this shift impossible to miss. What started as a discipline about how humans use machines has turned into something far messier and far more interesting. How do we design for an era in which human judgment, machine behavior, and institutional trust evolve together inside the same system?

Any human-centered designer will tell you that the design process is only a starting point; the real unlock is when we focus on design purpose. The old standards of personas (jobs-to-be-done, etc.), usability tests, and aesthetically pleasing interfaces still have a place. Still, those methods were created for products meant to be stable and knowable, and AI is neither. These systems can adapt in ways that make harm look like success (see Grok's dictator-worship problem). AI systems can be seamless and still manipulative (usable but not useful), efficient and still corrosive to trust (useful but not desirable). The paradox we face is systems that function perfectly within their own logic while fracturing the human systems around them.

Why It Happens

The question I keep coming back to is: How do we design for an era when intelligence is distributed across people, machines, and institutions, when the human is no longer the sole driving actor shaping the outcomes of their interactions with technology?

Opacity and Complexity

Traditional design assumes that systems can be made legible. AI resists that. Deep models, unclear or obfuscated reasoning, and emergent behaviors (arising from what the model has learned or from regular updates to it) leave even designers unsure why a decision occurred. The design problem shifts from making the system easy to use to making the system's logic intelligible and contestable. This exposes a very real tension between usability and desirability. To say it another way, when we make things extremely simple to use, people begin to trust whatever the AI system tells them.

Adaptivity and Drift

AI systems continuously learn from their environments and from humans. The large frontier models we're building now co-evolve with our data, our behavior, and the incentives we program into them. As with any feedback loop, the feedback the AI receives can strengthen performance or completely distort it. A model that appears ethical and balanced in testing can behave very differently once exposed to the messy dynamics of real use. Again, see Grok AI.

Traditional design assumes general stability and a cycle of learn, build, test, release, measure, repeat. That shifts when systems continue learning after deployment; design becomes an ongoing act of calibration. Fixed artifacts are still here, but they are no longer the primary thing being designed. We're stewarding adaptive systems that require monitoring, adjustment, and, yes, human-in-the-loop intervention to keep them aligned with human intent.
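As a minimal sketch of what that stewardship can look like in practice (every name, field, and value here is hypothetical, not any particular product's API), an adaptive system can propose its own updates while a person, not a threshold, makes the call to apply them:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class ProposedUpdate:
    """A change the adaptive system wants to make to itself."""
    description: str
    expected_gain: float              # predicted improvement on the primary metric
    measured_risks: dict[str, float]  # e.g., change in a fairness or safety metric

def review_update(update: ProposedUpdate,
                  reviewer_approves: Callable[[str], bool]) -> bool:
    """Gate an adaptive update behind explicit human review before it is applied."""
    summary = (f"{update.description} | expected gain {update.expected_gain:+.1%} "
               f"| risks {update.measured_risks}")
    return reviewer_approves(summary)  # a person, not a threshold, makes the call

# Usage sketch: in practice the reviewer is a human workflow, not a lambda.
update = ProposedUpdate("Re-weight routing model toward rush-hour data", 0.04,
                        {"safety_incident_rate_change": 0.01})
approved = review_update(update, reviewer_approves=lambda summary: False)
```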

Collaboration and Autonomy

AI is not a passive tool: it suggests, persuades, and sometimes acts. The designer's job now includes defining boundaries of authority and paths to recovery after failure. This is not interface design; it is the design of a partnership.

Systemic Externalities

An AI that delights the individual user might still harm the collective. Classic HCD optimizes for the user experience; humanity-centered design must account for downstream and intergenerational effects—equity, environment, information integrity, and power distribution.

From a systems perspective, AI turns design into a governance act. Each interface, model, and data decision alters social infrastructure. Psychology tells us people trust what they can predict and question what they cannot. Sociology reminds us that legitimacy is earned when systems are perceived as fair, accountable, and aligned with collective values. HCD must now absorb both lessons.

How We Fix It: Three Shifts

To design responsibly in the age of adaptive intelligence, leaders must evolve HCD into Humanity-Centered Design: a discipline that designs not only for users but for the ecosystems that surround them. I want to be unambiguous: I welcome this shift because it pushes design back to where it belongs, which isn't at the surface of the screen but deep in the system where values, power, and consequences are literally encoded.

Shift From Human Use to Human–AI Partnership

Design Practice: Participatory AI Design

Move beyond designing interfaces for users and start designing relationships between humans and systems. Treat AI as a collaborator whose behavior is well-defined, deliberately trained, and trusted by the humans it works with. This requires new workflows that specify how authority, decision rights, and escalation paths (contestability) function in mixed human–machine teams.
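As one way to make "specifying decision rights" tangible (a sketch under assumed names, not a prescribed schema), a team might encode authority and escalation as an explicit, reviewable artifact rather than leaving them implicit:

```python
from dataclasses import dataclass

@dataclass
class DecisionRight:
    """Who may act on a class of AI recommendations, and how a decision is contested."""
    decision: str            # the class of decision the AI participates in
    ai_authority: str        # "suggest", "decide_with_review", or "decide"
    human_owner: str         # role accountable for the outcome
    escalation_path: str     # where a contested decision goes
    appeal_window_days: int  # how long affected people have to contest it

# Hypothetical policy for a mixed human-machine routing team
ROUTING_POLICY = [
    DecisionRight("reroute_around_school_zones", "suggest",
                  "dispatch_lead", "safety_review_board", 14),
    DecisionRight("minor_schedule_adjustment", "decide_with_review",
                  "operations_manager", "operations_manager", 7),
]
```

Because the policy is a plain artifact, the impacted groups described below can read it, question it, and change it before anything ships.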

Naturally, the stakeholder net widens, so participation must extend far beyond end users. Impacted groups, and there are many, each have a stake in how AI behaves. Participatory AI Design makes those voices part of the entire lifecycle. It joins human-centered design with governance through co-creation and genuine oversight by the people these systems affect.

Example: A city mobility program develops its route-optimization AI through workshops with drivers, residents, and urban ecologists. The system's objectives (e.g., efficiency, safety, emissions) are co-defined, and all stakeholders can review trade-offs before deployment.

Shift From Usability to Socio-Technical Sensemaking

Design Practice: Design for Interpretability and Context

Working with AI requires understanding not only the outputs but how those outputs came to be. The challenge for designers is to build explainability as a core experience. Human-Centered Explainable AI (HCXAI) ensures that explanations map to how humans reason: showing provenance and how outputs were derived, for example by linking to the source materials and references behind them.
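One minimal sketch of that idea (hypothetical field names, not a specific framework's API) is to model an AI answer so that it carries its provenance with it, which gives the interface something concrete to show:

```python
from dataclasses import dataclass, field

@dataclass
class Source:
    title: str
    url: str
    excerpt: str  # the passage that actually shaped the output

@dataclass
class ExplainedAnswer:
    answer: str
    confidence: float                 # 0.0-1.0, as reported by the system
    sources: list[Source] = field(default_factory=list)

    def render(self) -> str:
        """Format the answer together with its provenance for a human reader."""
        lines = [self.answer, f"(confidence: {self.confidence:.0%})", "Derived from:"]
        lines += [f"- {s.title}: \"{s.excerpt}\" ({s.url})" for s in self.sources]
        return "\n".join(lines)
```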

In design, sensemaking is a collaborative process of interpreting complex and often ambiguous information to form a shared understanding that guides future action. Sensemaking therefore requires seeing AI within its social system. Designers can expand their toolkits with methods like socio-technical mapping, second-order-effect analysis, and lived-experience research to anticipate downstream impact. These tools aren't new, yet I've rarely seen them used in the design of computational systems.

Example: A hospital builds an explainable diagnostic AI that shows not only confidence levels but which patient data most shaped each recommendation. Clinicians can annotate, override, or flag anomalies. With consistent use, this feedback loop becomes a virtuous cycle that refines both the system and the practice of care.

Shift From Static Products to Co-Adaptive Systems

Design Practice: Design for Reflexivity and Re-Alignment

AI systems evolve, and design has to evolve with them. Co-adaptive systems use feedback from people and from signals like performance data to stay aligned over time. Leveraging feedback loops and practicing reflexivity is what good designers already do: the instinct to pause, to question, to adjust, and, importantly, to notice when our own assumptions and experiences (read: biases and blind spots) are shaping the work. Thoughtfully designed feedback loops placed between intent and impact are the systems designer's secret weapon. Because of AI's scale, that self-awareness can't live only in a product team's weekly retrospective; it has to be designed into the system itself. Reflexivity becomes structural: monitoring the system's drift, revisiting goals, and course-correcting toward human alignment.
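A structural version of that reflexivity might look like the sketch below (the metric names, baselines, and tolerance are hypothetical): the system's live behavior is compared against the baseline it was approved on, and any drift beyond tolerance opens a human re-alignment review rather than an automated fix.

```python
# Approved baselines and a drift tolerance agreed on by the review team (hypothetical values)
BASELINE = {"safety_incident_rate": 0.020, "route_fairness_gap": 0.050}
DRIFT_TOLERANCE = 0.25  # flag any metric that drifts more than 25% from its baseline

def check_drift(live_metrics: dict[str, float]) -> list[str]:
    """Return the metrics that have drifted beyond tolerance since approval."""
    drifted = []
    for name, approved in BASELINE.items():
        current = live_metrics.get(name, approved)
        if approved and abs(current - approved) / approved > DRIFT_TOLERANCE:
            drifted.append(f"{name}: approved {approved}, now {current}")
    return drifted

# Usage sketch: flags go to people, who decide whether and how to re-align the system.
flags = check_drift({"safety_incident_rate": 0.031, "route_fairness_gap": 0.048})
if flags:
    print("Open a re-alignment review:", flags)
```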

I can't see another way to design AI systems than to focus intentionally on governance by design. Always clear audit trails. Always version control for model ethics. Always real paths to rollback when harm appears. As systems learn, humans stay in the loop to guide them.
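A minimal sketch of that posture (an assumed structure, not a complete governance system): every release is an auditable record tied to an accountable human and a documented review, and rollback is a normal, logged operation instead of an emergency improvisation.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ModelRelease:
    version: str
    approved_by: str       # human accountable for the release
    ethics_review_id: str  # links the release to its documented review
    released_at: datetime

class ModelRegistry:
    """Audit trail of releases, with rollback as a normal, logged operation."""

    def __init__(self) -> None:
        self.history: list[ModelRelease] = []
        self.audit_log: list[str] = []

    def release(self, rel: ModelRelease) -> None:
        self.history.append(rel)
        self.audit_log.append(f"{rel.released_at.isoformat()} RELEASE {rel.version} "
                              f"approved_by={rel.approved_by} review={rel.ethics_review_id}")

    def rollback(self, reason: str) -> ModelRelease:
        """Revert to the previous approved release and record why."""
        if len(self.history) < 2:
            raise RuntimeError("No earlier release to roll back to")
        retired = self.history.pop()
        now = datetime.now(timezone.utc).isoformat()
        self.audit_log.append(f"{now} ROLLBACK from {retired.version}: {reason}")
        return self.history[-1]
```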

Example: A logistics company builds a small cross-functional review team to stress-test its routing AI before rollout. When a reviewer flags that the system is optimizing for speed at the expense of safety, the team adjusts both the model and the performance incentives behind it. Feedback loops become part of normal operations instead of something triggered only when things go wrong.

Design as Governance

Evolving HCD into Humanity-Centered Design is a strategic shift. Design now functions both as a diagnostic lens and as a governance instrument for trustworthy AI.

  • Build design ethics and foresight teams that explore unintended consequences before deployment.

  • Embed continuous feedback loops that capture human experience, adaptation strategies, and emergent harms.

  • Establish co-governance mechanisms that link design, policy, and oversight.

  • Foster a culture of curiosity where revising, pausing, or reversing AI systems is recognized as thoughtful and responsible.

These practices re-orient design from a creativity function into a humanity-aligned operating system.

As an aside, the above is a brief view into extensive work my firm has done connecting AI risks to AI governance controls. See our AI Risk & Governance Crosswalk paper.

What’s Next

Human-Centered Design isn't obsolete, but it may have outgrown its original frame. As we design for humanity, the designer's canvas has expanded to include systems, institutions, and generational timescales. The future of trustworthy AI depends on whether leaders can reimagine design as a continuous, participatory act of governance.

In The Collaboration Problem: Building Human–AI Partnerships that Elevate Humanity, I'll explore what happens when AI goes beyond being a designed system and becomes a collaborator in the act of design itself. This new partnership will test the assumptions we hold about creativity, authority, and human judgment.
