Salesforce says companies will spin up hundreds of AI agents per employee this year. Most will sit idle.
I've seen this movie before.
For three years, I sat in on about three VC pitch meetings a week. Founders pitching their startups for investment. I watched so many products that had no business existing get funded anyway. Candy Crush on the blockchain. Blockchain advertising. People just slapped "Web3" on top of things and walked out with millions of dollars.
Same energy now with AI.
Two Types of People
The divide isn't "AI believers vs AI skeptics." It's people chasing a narrative versus people chasing a solution.
Just like in blockchain, some people are adding AI to their business to check a box. "WE NEED AN AI STRATEGY!" hundreds of executives are screaming into a phone as you read this. They want the slide deck. They want the press release. They want to say they have the shiny new thing.
Then there are the others. The ones who see AI as a new tool that needs to be experimented with and used thoughtfully. Who understand that we don't yet know where the boundaries are. Who are willing to wade through hallucinations to find nuggets of truth.
The first group will waste colossal amounts of money and look foolish. The second group will find the real value.
Why Experimentation Is Still Right
Here's the thing: we do need to spin up 50 agents nobody asked for.
This is a relatively new technology. The only way to find where it fits is to put it in different nooks and crannies and see what sticks. We should be making agents and watching how they interact. We should be asking AI to generate complex insights and seeing how they stack up against our own.
But we need thoughtful experimentation. Not checkbox experimentation.
The difference is whether you're trying to learn something or trying to say something.
The Tooling Problem
Nobody uses a screwdriver as a hammer and then gets mad at the screwdriver. You're an idiot if you do that.
But people do this with new technologies constantly. "It's nothing but hallucinations." "It makes things up." "It can't be trusted."
That's not the tool's fault. That's you not understanding the tool.
The sad part, the part that frustrates me most, is that it's 100% on us. It's on us to learn how to use this thing, just like it's on us to learn how to use any other tool. AI doesn't owe you anything. It's a capability. What you do with it is your responsibility.
Treat It Like the New Discovery It Is
Instead of seeing AI as this top-down force, something to ride while the moment is hot, I wish people would look at it the way they'd look at a newly discovered medicine or a newly discovered element.
Ask yourself:
- What can I do with this?
- What are the limits of this?
- How can I make this work for me?
- What do I need to learn?
Not: how do I get this into my organization as fast as possible just so I can say I have it?
Build Before You Deploy
Here's my actual advice: don't deploy an agent you've seen on Twitter. Build one.
A simple one. It should take two hours, maybe less.
Open Cursor, Google AI Studio, Codex, or Claude Code. Copy this exactly as shown:
AGENT.TXT
I want to make my own simple personal assistant agent. Ask me 5 questions, then build me something I can use in the next hour, not next week.
Copy. Paste. Prompt.
If you understand how it works, you'll understand how to use it. You'll know what it's good at and what it's bad at. You'll stop blaming the screwdriver.
The companies that win with AI won't be the ones who deployed the most agents. They'll be the ones who understood the tool before they tried to use it.