Am I the UX for my AI?
What it means to be the human interface between an army of agents and the room where the decision gets made.
I have been going down the rabbit hole of AI tools, fully convinced of their power and of how likely they are to transform the world of work I engage in. On the surface it feels like software: I engage with agents in natural language, and the agents enable my work. This is incomplete. As I engage further with this technology I can feel my own role shifting, and with it, my relationship with AI.
A core element of my role is to harness my team, a mountain of data, and our capacity to produce ideas and insights and make them consumable and actionable for our clients. This is an intensely human experience. So many decisions are dictated by the comfort and conviction of human beings, based on the trust and credibility of other human beings. Decisions of significance invariably happen in person, where the whites of one’s eyes matter as much as the quality of the case being presented.
What changes as I and our team adopt AI is that my team now includes an army of agents and rapidly growing capacity to generate ideas and content. Set the question of quality aside for a moment. Extrapolate forward, where the models and agents become increasingly competent, and I can imagine the army of agents becoming the dominant source of ideas and insights. Those ideas need a curator and an evangelist in the rooms where they can influence real people making real decisions. I am still needed, if only as an interface for my AI.
That is an oversimplification, but it carries several implications that are relevant now.
Implication 1: The bottleneck is my ability to consume ideas.
For most of my career the scarce resource was production. Finding the data, doing the analysis, writing it up cleanly. AI has flattened that curve. I dictated the first version of this post at about 150 words per minute, 4-5x faster than I could type it, and Wispr Flow ensured the first draft already absorbed my verbal corrections and removed the filler words. Claude turned that draft into something readable and helped refine it into something serviceable before my coffee got cold. It is not perfect, but then, this is also the worst the AI will ever be.
Consider what just happened. My bandwidth from mind to paper expanded 4-5x with AI-assisted dictation. Those ideas were then refined and expanded with at least the same acceleration versus what would have happened in the recent past. Multiply the two together and that is roughly 20x the output bandwidth.
I am not alone. Every analyst, PM, and consultant with an API key has the same superpower. The result is my overflowing inbox. The problem is that my input bandwidth has not expanded at the same rate. My ability to read, absorb, and hold something in working memory long enough to do anything with it is similar to five years ago. Call it the read-write asymmetry, where output bandwidth is expanding rapidly and input bandwidth is comparatively fixed.
Implication 2: Choose errors of exclusion over errors of inclusion.
If the flow of inputs coming at me is growing faster than my consumption bandwidth, my instinct is to become stricter about what I consume. That means avoiding errors of inclusion (losing time on the weaker content) and accepting errors of exclusion (missing great pieces to preserve the quality of the corpus I do consume). In practice that is a short list of writers, analysts, and primary documents that consistently deliver more ideas per word than anyone else, and a much harder no on everything else.
But strict filters kill the accidental find. The piece I never would have searched for that turned out to reframe a problem or create a connection I had not considered. There is serendipity in foraging that I do not want to lose. For now I am keeping fifteen or twenty percent of my read budget in deliberately adjacent territory. That allocation still needs a plan.
Implication 3: The bar for output has moved up.
When the cost of producing okay content drops toward zero, the things that make output great are exactly the things AI does not do by default: falsifiable claims, non-obvious reframes, and specific numbers anchoring an abstract argument. These tools allow the author to commit to a point of view rather than hedge, and to make a call rather than balance the debate. When supported with great arguments and analysis this is what I want to consume and I expect others do too.
Implication 4: The real work becomes developing taste.
Taste used to be built by making things. Years of writing badly, getting feedback, trying again, watching what works in front of an audience and what does not. If most of the making is now done by a system, where does that judgment come from?
I think the answer has two parts that have to be held in tension.
The first part is that you still have to do some of the work. It is inefficient, and the throughput pressure of AI-assisted speed pushes against it constantly. But there is a kind of understanding that only comes from sitting with a hard problem long enough to feel its actual shape: build the model from raw data once in a while, write the hard paragraph by hand, and run the analysis without the agent. There are lessons for evaluating work that you only earn by producing it.
The second part takes advantage of the technology to run more cycles than were previously possible. Each attempt is a chance to make a call, see the result, ask what worked and what fell flat, and update. That, too, can be how taste gets built. The volume is the new training data for my judgment, if I actually use it that way. The risky default mode of AI-assisted work is to ship faster: hit go, the output is good enough, move on. That improves throughput but teaches nothing. Combine cycle count with reflection, grounded in the wisdom of having done related work yourself, and you get an accelerated apprenticeship.
Where does this leave me?
I can feel my role evolving. Knowing the data and the operators and the history, the input-side expertise that used to be most of the value, has not gone to zero. But that responsibility is shifting elsewhere. The job is to shape what the system, my team of people and agents, produces into something a person can use in the room where the decision gets made. The judgment is still mine. The compression is the product. The taste that drives both is the thing I have to keep working on, using the same tools that threaten to erode it.
It feels like I am becoming the UX for my AI. The question is what it takes to be a good one.