The Information Problem: When AI Fractures Reality
The Problem We Face
Executives and policymakers are now operating in polluted information environments where credibility itself has become fragile. Polarization, echo chambers, and misinformation no longer just destabilize politics—they undermine markets, fragment communities, and erode institutional trust.
For organizations, this shows up as reputational risk, disinformation and misinformation campaigns, and fractured stakeholder ecosystems. For governments, it translates into weakened legitimacy and governance. The shared challenge: leaders must navigate systems in which information itself has become contested terrain.
Why It Happens
This is not a random, unforeseen byproduct of the internet. It's the predictable outcome of how we designed information systems—and of how AI now amplifies those design choices.
Search & Personalized Recommendation → Engineered Realities.
Information Science demonstrates that recommendation and search systems don't merely surface content; they actively shape knowledge environments. By optimizing for engagement, they amplify novelty, outrage, and confirmation bias.
Personalization → Fractured Facts.
AI-curated feeds deliver different content to different people, in effect different versions of the truth. Shared reality fractures into algorithmically curated, personalized micro-worlds, and social cohesion becomes increasingly difficult to sustain.
Scale & Speed → Fragile Legitimacy.
Sociology explains the downstream effects: once groups are fed different versions of the truth, social cohesion weakens. Trust in common facts collapses, making dialogue and coordination nearly impossible. Generative AI accelerates the production of misinformation, disinformation, and synthetic media, making it harder for institutions and individuals to distinguish credible knowledge from manufactured noise.
The field of Human-Computer Interaction has long warned that curation is never neutral. Information Science has shown that ranking and recommendation are societal choices. Sociology tells us that once groups live in different epistemic realities, coordination collapses. The result is that we inhabit environments engineered not for accuracy or balance but for attention extraction, fracturing truth into competing realities where outrage outperforms understanding.
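To make the mechanism concrete, here is a minimal, illustrative sketch of a ranker that optimizes purely for predicted engagement. The item fields and scoring are hypothetical, not drawn from any real platform; the point is that when credibility never enters the objective, whatever correlates with clicks, including outrage, gets amplified.

```python
# Illustrative sketch only: a toy ranker scored purely by predicted
# engagement. Fields and values are hypothetical, not any real platform's.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_clicks: float   # model-estimated engagement
    outrage_score: float      # affective intensity, 0..1
    credibility: float        # source credibility, 0..1

def engagement_only_rank(items: list[Item]) -> list[Item]:
    # Outrage tends to correlate with clicks, so a pure engagement
    # objective implicitly rewards it; credibility never enters the score.
    return sorted(items, key=lambda i: i.predicted_clicks, reverse=True)
```

The design moves below are ways of broadening exactly this objective.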
How We Fix It
We’ve reached the point where information ecosystems must be treated as strategic and governance terrain. We’ve seen disinformation campaigns sink (or boost) market valuations within hours. Generative AI is accelerating the volume and believability of synthetic content, making polluted information environments an everyday operating risk.
For executives, this means information integrity is now a core business function, as essential as cybersecurity or supply chain resilience. For policymakers, it means governance must extend into the shaping of epistemic environments — the conditions in which citizens form beliefs and act on them.
We’ve learned from previous general-purpose technologies that an intentional strategy pays off. Three design moves follow.
Design for Balance, Not Just Engagement
Push teams to broaden optimization goals beyond clicks or watch time. What gets measured gets managed—and designed. Create and consistently track metrics for diversity, credibility, and source quality. This won’t eliminate bias, but it establishes a design principle for our work: act as a countering force to the systematic tilt toward outrage and extremity. Some AI systems are already experimenting with serendipity scores or exposure diversity; through experimentation, these kinds of metrics are proving valuable enough to become commonplace standards.
Example: A news platform adjusts its recommender system to ensure no single user’s feed is drawn from fewer than five distinct outlets, reducing echo-chamber reinforcement.
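A minimal sketch of how that five-outlet floor and a simple exposure-diversity check might look in practice. The feed representation, field names, and thresholds are assumptions for illustration, not any platform's actual implementation.

```python
# Minimal sketch, assuming each feed item carries an "outlet" label.
# The five-outlet floor mirrors the hypothetical news-platform example above.
from collections import Counter

def dominant_outlet_share(feed: list[dict]) -> float:
    """Share of the feed coming from its single most-represented outlet."""
    if not feed:
        return 0.0
    counts = Counter(item["outlet"] for item in feed)
    return max(counts.values()) / len(feed)

def meets_diversity_floor(feed: list[dict], min_outlets: int = 5) -> bool:
    """True if the feed draws on at least `min_outlets` distinct outlets."""
    return len({item["outlet"] for item in feed}) >= min_outlets
```

A recommender could run checks like these as a post-ranking constraint, swapping in items from under-represented outlets until the floor is met.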
Curate with Accountability
Curation is never neutral—so prioritize and own it. Build transparent policies about how content is ranked, recommended, and moderated, and make them visible to users, employees, and regulators. Require AI explainability that shows not just why a result appeared, but how recommendation criteria were set. Accountability builds trust—even when people disagree with individual outcomes.
Example: A streaming service publishes a quarterly “recommendation transparency report,” disclosing how its algorithms balance engagement, diversity, and content standards.
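One hedged sketch of what a machine-readable companion to such a report could contain, assuming a hypothetical schema in which the service discloses the relative weights of its ranking objectives alongside moderation counts. All field names and figures are placeholders.

```python
# Illustrative sketch of a machine-readable "recommendation transparency
# report". The schema and every number below are placeholders, not a real
# service's disclosure format.
import json

report = {
    "period": {"start": "2025-07-01", "end": "2025-09-30"},
    "ranking_objectives": {
        "engagement": 0.55,          # relative weight in the blended score
        "exposure_diversity": 0.25,
        "source_credibility": 0.20,
    },
    "moderation": {
        "items_reviewed": 12400,     # placeholder counts for illustration
        "items_removed": 310,
        "appeals_upheld": 42,
    },
}

print(json.dumps(report, indent=2))
```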
Treat Information Environments as Public Goods
Information systems are shared infrastructure; no single firm or government can fix them alone. Good information is curated by collaborating across sectors—tech companies, regulators, academia, and the real human beings feeling the effects of our current information ecosystem—to define shared norms for credibility and accuracy. The regulatory landscape is spotty, but the EU’s Digital Services Act, the UK’s Online Safety Act, and some minor FTC actions in the U.S. all point to increased accountability for platforms that shape knowledge environments.
Example: A coalition of universities, media outlets, and platforms creates a shared credibility index that feeds into AI curation systems, ensuring cross-sector legitimacy in what counts as trustworthy information.
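A minimal sketch of how such a shared index might feed into curation, assuming the coalition publishes per-outlet credibility scores between 0 and 1 and the platform blends them into its own relevance score. The index values, field names, and blending weight are hypothetical.

```python
# Minimal sketch: blending a shared, cross-sector credibility index into a
# platform's own ranking. Scores, fields, and the blending weight are
# hypothetical; a real system would pull scores from the coalition's index.
credibility_index = {"outlet_a": 0.9, "outlet_b": 0.4}  # 0..1 per outlet

def blended_score(item: dict, alpha: float = 0.3) -> float:
    """Combine the platform's relevance score with the shared credibility score."""
    credibility = credibility_index.get(item["outlet"], 0.5)  # neutral default
    return (1 - alpha) * item["relevance"] + alpha * credibility

def rerank(items: list[dict]) -> list[dict]:
    return sorted(items, key=blended_score, reverse=True)
```

The design choice worth noting: the shared index adjusts rankings rather than replacing them, so each platform keeps its own relevance model while inheriting cross-sector legitimacy for the credibility signal.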
What’s Next
Misinformation and polarization aren’t new problems. But the scale and speed of today’s fractured information systems make them existential for institutions. As I argued in The Warnings We Ignored, the roots of this problem were visible decades ago. If AI can fracture reality itself, what does that mean for rebuilding trust? The next challenge is cleaning up polluted information and restoring the foundations of trust in digital systems and the institutions that depend on them. I’ll explore that in my article on The Trust Problem: Why Compliance Isn’t Enough for AI.