Weekend Edition: Upgraded agents are coming
Most discussion about generative AI tools has centered on the challenges and opportunities posed by synthetic content. Now that these agents are taking actions for us, what impacts might ceding that agency have on us and on society?

We live in a moment when our status quo and conventional thinking are perhaps as dangerous and insufficient as at any point in human history. All over society we need new, creative, adaptable thinking, even as we are pushed into more and more homogeneous community spaces and experiences that make us less creative and more habituated and fortified in our thinking.
Into this moment, enter OpenAI's latest. This week in DC, OpenAI put on a big show of their version of an "agentic technology" called Operator – a web automation tool driven by a new version of their LLM called a "computer-using agent," capable of not only conversing and generating content but completing complex tasks on behalf of users through an embedded web browser. Think instructing an agent, "book me a hotel room using my loyalty account for next week in Boston near my conference." Operator would then take a series of actions: understand your calendar, find the conference, log in to your hotel loyalty account, find a hotel nearby, look up room options, and book one on your behalf. Intriguing for sure.
"This is going to be a big, big efficiency gain." – Sam Altman, OpenAI CEO
https://www.axios.com/2025/01/31/openai-altman-dc-visit
A year and a half ago, when ChatGPT first opened to the public and people were salivating at the prospect of genAI accelerating synthetic content generation of all kinds (drafting emails, presentation templates...), I wrote an essay about what we might be giving up with No More Blank Pages🔒. As deployment of these tools explodes into more and more of our daily interactions and digital experiences, and as they now begin to take over complex tasks for us wrapped in the all-validating imprimatur of dramatic efficiency gains, what else might we be giving up without noticing, or at least without talking about?
One of many important questions here, as LLM-based AI tools are deployed as layers within more and more everyday digital experiences, is: what exactly is getting more efficient? And is that efficiency coming at the cost of efficacy? And is that good? Or are we just stumbling ahead with the standard efficiency-is-a-moral-good framing of neoliberal market-based thinking and assuming those efficiency gains are a meaningful end in and of themselves?
We are seeing these tools deployed in subtle but important ways across the graph of interactions between us and information and us and each other.
- Search experiences: Summaries in place of search results. If we stop reading past summaries, we start to lose our discovery skills at parsing and discerning the value of information, and we are at the mercy of the conclusions and prioritization built into whatever model is doing the summarization. There is no one internet, and everyone's search results are already unique, but generated summaries also put pressure on the idea of a canonical answer, and more pressure on even the possibility of shared reality. And even if that summarization is accurate and we agree with its bias, by weakening our capacity and capabilities for discernment, we become easier to manipulate over time.
- Social feeds: Summaries in place of feeds. We have lamented the power of algorithms in determining what ends up in our feeds, and what is elevated or diminished by those algorithms. This debate has driven our conversation about context collapse, about filter bubbles, and about the unbridled power of social sorting that is driving the polarization of society and the disconnected realities of American civic life. The trade of our data and attention as the currency and inventory of the attention economy has unequivocally damaged our capacity for bridging in society. Giving up our agency and undermining our capacities and capabilities to discern the complexity in the world around us might end up being an even worse trade-off. Summaries don't solve this problem; they add a layer of abstraction on top of an already damaged experience, creating summaries of actions and communities via a new set of algorithms prioritizing and separating us from the individual elements of story and interaction that make up the vibrance of these community spaces.
- If we only interact with the summary, we start to lose the ability to identify themes, draw conclusions, and build bridges between disparate elements of a conversation or community, and we cede our bridging capacity entirely to the tool. And if the tool isn't trying to bridge (just as our social feed algorithms have not), society loses even the possibility of bridging as we lose our individual capacity for it.
- We stop interacting with people in favor of reading about people. With fewer and fewer direct interactions, we're going back to Plato's Cave willingly, but instead of reading the shadows on the wall, some other entity is describing the shadows for us. We deal only in aggregates and abstractions, with little to no direct experience of the information and individuals around us. We live in a world that is all maps, no places.
- Agentic tasking: LLM-based decisions. Taking over the "mundane" tasks of everyday planning is framed as a gift. But it also removes the sense of responsibility that comes with the agency of everyday decision-making. Without the capacity to hold and exercise that responsibility, we end up passive, compliant, and powerless. As a result, an odd sort of hyperefficient learned helplessness might emerge: a false sense of dependency that grows into an actual dependency.
We are accepting the slow disintermediation of all different types of interactions and relationships between us and other nodes in the graph, and even between us and the content that flows through the edges of the graph, by something – an agent, a summary, a whatever. We are opting into a meta-reality contained and constrained by the agents around us that filter and interpret for us.
Our experience of reality is being disintermediated entirely, one small element at a time, in service of efficiency – slowly eroding and eliminating our direct experience of information and of people.
For some interactions, greater efficiency might be good. But for others, and in relationships that are important to the social fabric of community, efficacy might be the only thing that matters, and efficiency might actually be a bug, not a feature. We become fully isolated, fully lonely, fully apart.
And because the LLM is trained on our recent historical context, it is likely to strongly reinforce the current status quo of separation and filter bubbles, because that is what its corpus is most likely to suggest as the next token as it tries to predict what is most likely, not what is most correct or creative. LLMs are probabilistic, not deterministic, and so what is likely is treated as what is best.
If we are going to transform human society, we need unlikely links and experiences, new ideas and adaptable creative communities of thought and relationship that can transform society and adapt our world. As error is the engine of evolution, serendipity, divergence, and heterogeneity are our sources of creativity and adaptability in society. In the natural world, humans are snacks for other things. We only survive together.

We need to ask questions about the costs and trade-offs of these tools and the disintermediations they create or exacerbate. How we answer these questions, and how we decide to use these tools, will dictate whether they end up as engines of augmentation of life and experience or replacements🔒, and whether they become sources rather than obscurers of the unlikely ideas and experiences we need. But we need to make those choices as openly and clear-eyed as possible. Disintermediated into our own small, weak, uncreative, antiseptic, isolated spaces, giving AI tools our agency for a more efficient experience... this might be either the saddest or the most dangerous choice humans have ever made – or both.
Last updated: 02 Feb 2025