Year One of Superagency
What Living With AI Actually Feels Like
Gen AI, in its current form, is a power user technology.
That is why the public is split.
Insiders feel the AGI. The broader public sees a glorified search engine.
This is a skill issue.
Reid Hoffman (co-founder of LinkedIn) has a name for the upside of this moment: Superagency.
Superagency is what happens when a critical mass of individuals, personally empowered by AI, begins to operate at levels that compound throughout society.
The idea that you can do more, faster, and with greater confidence.
The idea that you create more options for yourself tomorrow than you had yesterday.
Can you feel it?
AI is a power user tech (for now)
The difference between optimism and cynicism is usually simple: how much time does someone spend playing with the tools?
Not only at work. Outside of work too.
Because humans aren’t one-dimensional. And this is a general-purpose technology.
It would be a crazy oversimplification to only use this technology for your job.
You might not see its true power. You will see a chatbot. You will see autocomplete. You will see “search, but faster.”
If you live with it, you start to see different things. You learn at a faster pace. You gain new powers. It begins to take action on your behalf.
You start to feel increased agency.
Coding Agents: When Code Stops Being “Software”
Claude Code changed how I think about code.
Not because it made me faster at writing software.
But because it made it obvious that code is just a way to apply structure to reality.
Once you see that, the surface area explodes.
I used to think of code as something you apply after a problem becomes “technical.” Claude Code flipped that. It showed me that you can start with code earlier—before the problem even looks like software.
Expense reporting. Animation pipelines. Personal branding systems. Presentations. Home automation. Content ops. Research workflows.
None of these are “software products.”
They’re messy, human, half-formed problems. And that’s exactly why coding agents work so well on them.
A coding agent doesn’t ask, “Is this a real app?” It asks, “Can we decompose, automate, or simulate this?”
That shift matters.
When you see everyday tasks as something code can handle, you stop asking for permission. You stop asking whether a tool exists. You build the scaffolding yourself.
Claude Code made this legible.
It showed me that:
Code can be lightweight and disposable.
Scripts can be creative tools and repeatable processes.
Automation can live at the edges of life, not just in production systems.
Most importantly, it made it clear that coding agents aren’t about engineering excellence.
They’re about agency.
They let you try ideas that you would typically avoid due to the high overhead. They collapse the distance between “this is annoying” and “this is solved.”
That’s why coding agents are the most powerful agents right now.
Not because they write perfect code.
But because they turn vague intent into working systems—fast enough that curiosity stays alive.
And, as it turns out, everything is a software problem.
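To make that concrete, here is the kind of lightweight, disposable script I mean; a minimal sketch of the expense-reporting case above, assuming a hypothetical expenses.csv with date, category, and amount columns (the filename and columns are made up for illustration).

```python
# Disposable scaffolding: summarize expenses by category.
# Assumes a hypothetical expenses.csv with columns: date, category, amount.
import csv
from collections import defaultdict

totals = defaultdict(float)

with open("expenses.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Skip malformed rows instead of failing; this is a throwaway tool.
        try:
            totals[row["category"].strip()] += float(row["amount"])
        except (KeyError, ValueError, TypeError):
            continue

# Print categories from largest to smallest spend, then the total.
for category, amount in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{category:<20} ${amount:,.2f}")
print(f"{'TOTAL':<20} ${sum(totals.values()):,.2f}")
```

The script itself is not the point. The point is that a coding agent can produce and throw away dozens of these, so the overhead that used to make them not worth writing disappears.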
My first practical swarm.
One system I rely on now that I did not rely on a year ago is multiplexing coding agents.
I run many coding agents at the same time: Claude Code, Codex, Gemini CLI. They stay on. They stay “in the project.” Each project has its own context.
This isn’t just about concurrency—it’s about keeping momentum alive across many different ideas.
Across all my projects, personal and work, with coding agents powered by frontier models like OpenAI Codex and Opus 4.5.
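Here is a minimal sketch of what that multiplexing can look like, assuming tmux is installed and that each agent exposes a command-line entry point (the project names, paths, and agent commands below are illustrative assumptions, not a prescription). The idea is one long-lived, named session per project, started in that project’s directory so each agent keeps its own context.

```python
# Sketch: keep one long-lived coding-agent session per project using tmux.
# Project names, paths, and agent commands are illustrative assumptions.
import os
import subprocess

PROJECTS = {
    "expense-bot":     ("~/code/expense-bot", "claude"),      # e.g. Claude Code
    "animation-rig":   ("~/code/animation-rig", "codex"),     # e.g. Codex CLI
    "home-automation": ("~/code/home-automation", "gemini"),  # e.g. Gemini CLI
}

def session_exists(name: str) -> bool:
    """True if a tmux session with this name is already running."""
    result = subprocess.run(["tmux", "has-session", "-t", name], capture_output=True)
    return result.returncode == 0

for name, (path, agent_cmd) in PROJECTS.items():
    if session_exists(name):
        continue  # leave running sessions (and their context) untouched
    # Detached tmux session, started in the project directory, running the agent.
    subprocess.run(
        ["tmux", "new-session", "-d", "-s", name,
         "-c", os.path.expanduser(path), agent_cmd],
        check=True,
    )
    print(f"started agent session: {name}")
```

Attaching with tmux attach -t <name> drops you back into a project’s agent exactly where you left it.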
The result is that I try more things. I explore more paths. I build in directions I used to avoid.
I apply this to everything.
My ability to prototype systems has skyrocketed.
I’ve lost my bias toward specific programming languages.
I’m no longer intimidated by unfamiliar systems.
I ask plenty of questions, even if they seem silly.
This approach helps me find high-leverage ideas: my curiosity leads, and the latest models supply the capability.
But I still face a constraint.
I’m still the bottleneck.
I operate at a higher level now. But my experience, confidence, and skills still hold me back.
The irony is:
The more I automate, the more responsibility I take on. My work shifts to a broader, more creative and strategic level.
You actually have to prompt the model a thousand times.
People underestimate the effort required to get good at wielding AI.
There is no other way. You need reps. You need to make a ton of stuff.
And you need to study people who do the same thing. People with an obsessive relationship to learning the tools.
10,000 prompts is the new 10,000 hours.
If a person doesn’t use the technology, they are unqualified to talk about it.
Follow people who play the next game.
In 2025, I began to follow real practitioners.
This shift alone improved my ability to explore.
Less noise. More contact with reality.
Wisdom comes from experience, and experience comes from being in the game.
That’s the good news.
It means a valuable new persona is emerging: the practitioner.
Not the commentator. Not the spectator. Not the theorist. Not the person with opinions about tools they don’t use.
The practitioner lives with the tools long enough for them to change what they see. They ship. They build scaffolding. They find leverage in systems.
They see the world differently.
So I started paying attention to people who live inside the tools.
Coding automation, AI agents, and frontier LLMs
Dexter Horthy is a builder and systems thinker working on how humans and AI agents collaborate inside real software systems. He created the term “context engineering” to describe how to design what an AI model sees while it works. This includes memory, tools, instructions, and limits. It helps agents reason, act, and safely fail in real-world settings.
Why I follow him: Dex focuses on where agents actually break in practice, and how to design systems that keep humans meaningfully in the loop.
Echo Hive is an AI creator who shares hands-on coding experiments and workflows through video: custom GPT tools, agentic “hive” workflows, and generative pipelines. You can replicate the real architectures, prompts, and integrations he showcases.
Why I follow him: Echo Hive shares real experiments, not just opinions. He publicly tests and develops new models and agent patterns.
AI and Creative Workflows
Dave Clark is a creative technologist. He explores generative AI as a true visual and cinematic medium. His work combines text-to-image, video models, and design systems. Instead of one-off demos, he creates cohesive aesthetic worlds. His work uses models as tools in a studio. He iterates on mood, composition, and visual language to build taste, not just outputs.
Why I follow him: Dave treats AI like photographers treat cameras. He builds a point of view over time instead of chasing tricks.
Nem Perez is a creative technologist and filmmaker. He explores how generative AI impacts storytelling, collaboration, and creative direction. His work blends filmmaking, tools, and community. He uses AI to make story prototypes. It helps him manage collaborators and rethink the entire film production process.
Why I follow him: Nem treats AI as a new storytelling medium and production model, not just a visual effect.
Don is a creative technologist using generative systems to build expressive, interactive media. He works at the forefront of virtual worlds and creative workflows, exploring how science fiction becomes interactive and real.
Why I follow him: Don blends craft with experimentation—and I’ve seen that curiosity up close, long before AI made it fashionable.
Momo Wang is an award-winning animator, filmmaker, artist, and creative director, celebrated for blending artistic vision with stories drawn from different cultures. She created Tuzki, the famous illustrated bunny that became a viral emoticon on major messaging apps and has since expanded into merchandise, media, and brand collaborations.
Why I follow her: she pushes aesthetics forward, not just capabilities.
Joe Salvatore is an AI-native editor and visual commentator. He tracks fast-moving generative AI tools, model releases, and creative workflows and packages them into sharp, visual breakdowns. Joe’s feed helps designers and editors see what matters and how to use it.
Why I follow him: He makes sense of the chaos in AI releases. He shares clear signals, useful insights, and guidance for creative professionals.
King Willonius is the musical comedy creator behind “BBL Drizzy,” one of the first viral AI-native hit songs. His music video production skills with frontier video models are second to none.
Why I follow him: King Willonius shows how AI closes the gap between ideas and audiences. He delivers cultural moments with speed and taste.
Greg Beato is a creator and commentator working with generative AI and creative media. He’s co-author (with Reid Hoffman) of Superagency: What Could Possibly Go Right with Our AI Future. Check out Bro Botz, the coolest new AI music project written and produced by Greg himself.
Why I follow him: Greg actually plays with frontier models—writing, producing, and editing real music videos and characters with them. Watching him build worlds in public turns abstract capability into lived craft.
Business and entrepreneurship
Catherine Goetze, or CatGPT, is a great example of a creator-entrepreneur. She first builds an audience. Then, she shares her journey. Finally, she offers products directly to them. She makes AI fun, friendly, and focused on people with easy-to-understand content. At the same time, she tests how to turn attention into real businesses. Earlier this year she launched Physical Phones, a hardware experiment that turns nostalgic landline phones into Bluetooth companions for modern smartphones.
Why I follow her: Cat shows that personality, vision, and distribution matter just as much as technology—maybe even more.
Allie K. Miller is an AI entrepreneur, advisor, and educator. She turns advanced AI into practical business solutions. She creates playbooks and strategies for operators. Her work connects enterprise adoption, new model capabilities, and practical decision-making. It helps leaders shift from “AI curiosity” to real execution.
Why I follow her: Allie links what models can do to what organizations can use. She focuses on enterprise and large-scale businesses.
Intelligence is now on tap. What will you do with it?
Superagency occurs when people, empowered by AI, begin to redefine the boundaries of what is possible. They improve their own lives first, then influence teams, companies, culture and eventually society at large.
So yes: 2026 won’t be defined by intelligence. It will be defined by Superagency, and by who actually learns to wield it.
What will you create in 2026?



