Start with what might be called the epistemic layer—how we come to know things. People are increasingly relying on AI to know what is true, what is happening, and whom to trust. Search is already substantially AI-mediated. The next generation of AI assistants will synthesize information, frame it, and present it with authority. For a growing number of people, asking an AI will become the default way to form views on a candidate, a policy, or a public figure. Whoever controls what these models say therefore has increasing influence over what people believe.
Technology has always shaped the way citizens interact with information. But a new problem will soon arise in the form of personal AI agents, which can change not only how people receive information but how they act on it. These systems will conduct research, draft communications, highlight causes, and lobby on a user’s behalf. They will inform decisions such as how to vote on a ballot measure, which organizations are worth supporting, or how to respond to a government notice. They will, in a meaningful sense, begin to mediate the relationship between individuals and the institutions that govern them.
We’ve already seen with social media what happens when algorithms optimize for engagement over understanding. Platforms do not need to have an explicit political agenda to produce polarization and radicalization. An agent that knows your preferences and your anxieties—one shaped to keep you engaged—poses the same risks. And in this case the risks may be even more difficult to detect, because an agent presents itself as your advocate. It speaks for you, acts on your behalf, and may earn trust precisely through that intimacy.
Now zoom out to the collective. AI agents and humans could soon participate in the same forums, where it may be impossible to tell them apart. Even if every individual AI agent were well-designed and aligned with its user’s interests, the interactions of millions of agents could produce outcomes that no individual wanted or chose. For example, research shows that agents displaying no individual bias can still generate collective biases at scale. And setting aside what agents do to each other, there is what they do for their users. A public sphere in which everyone has a personalized agent attuned to their existing views is not, in aggregate, a public sphere at all. It is a collection of private worlds, each internally coherent but collectively inhospitable to the kind of shared deliberation that democracy requires.
Taken together, these three transformations—in how we know, how we act, and how we engage in collective governance—amount to a fundamental change in the texture of citizenship. In the near future, people will form their political views through AI filters, exercise their civic agency through AI agents, and participate in institutions and public discussions that are themselves shaped by the interactions of millions of such agents.
Today’s democracy is not ready for this. Our institutions were designed for a world in which power was exercised visibly, information traveled slowly enough to be contested, and reality felt more shared, if imperfectly. All of this was already fraying long before generative AI arrived. And yet this need not be a story of decline. Avoiding that outcome requires us to design for something better.
On the informational layer, AI companies must ramp up existing efforts to ensure that models' outputs are truthful. They should also build on promising early findings that AI models can help reduce polarization. A recent field evaluation of AI-generated fact checks on X found that people across a range of political viewpoints deemed AI-written notes more helpful than human-written ones. The paper has yet to be peer-reviewed, but it points to a potentially revolutionary result: AI-assisted fact-checking may achieve the kind of cross-partisan credibility that has eluded most manual human efforts. Greater understanding of, and transparency about, how models arrive at these assertions and prioritize sources along the way could help build further public trust.
On the agentic layer, we need ways to evaluate whether AI agents faithfully represent their users. An agent must never have an agenda of its own or misrepresent its user's views—a technically daunting requirement in domains where users may not have explicitly stated any preferences. But faithful representation also cannot become an accessory to motivated reasoning. An agent that refuses to present uncomfortable information, shields its user from ever questioning prior beliefs, or fails to adjust to a change of heart is not acting in the person's best interest.