Designing for Humanity in the Age of AI
A leadership series on what the social sciences predicted about technology and human behavior, and how to lead differently in the age of AI.
We were warned. Decades of research in psychology, sociology, HCI, and information science showed how technology would shape human behavior, attention, and trust. We built the systems anyway.
This series revisits what the social sciences tried to teach us and translates those lessons into action: how leaders can design, govern, and deploy AI systems that are not just compliant, but worthy of human confidence.
Available now
The first half of a new series, Designing for Humanity in the Age of AI, is now available.
This opening arc, Designing Trustworthy AI, unpacks the immediate risks leaders face as AI systems scale into business and society: manipulation, misplaced authority, polluted information, and fragile trust.
Series 1: Designing Trustworthy AI
This opening series unpacks the immediate risks leaders face as AI systems scale into business and society: manipulation, misplaced authority, polluted information, and fragile trust. It shows how the social sciences anticipated many of today's challenges and offers practical ways to restore accountability, transparency, and confidence in AI-driven organizations.
Includes: The Warnings We Ignored, The Manipulation Problem, The Authority Problem, The Information Problem, and The Trust Problem.
Series 2: Redesigning HCD for AI
This series explores how design itself must evolve as AI becomes adaptive, opaque, and deeply social. Traditional Human-Centered Design optimized for usability; now leaders must design for human agency and alignment. These essays show how psychology, collaboration, and systems thinking can reorient design toward humanity in an age defined by machine partnership.
Includes: The Design Problem, The Personality Problem, and The Collaboration Problem.