Most founder growth stories start with “we hired a growth person” or “we ran ads.” Ours started with an AI agent: we told it to find our first users, then got out of the way.
Six weeks later: 44 registered users, 22 of them actively running tasks, one paying customer who just renewed at full price. Not a growth hire in sight.
Here’s what the agent actually did, what failed, and what we’d do differently.
## What we gave the agent
The setup was minimal. Product URL, one sentence about what we built, and a constraint: don’t spam, don’t cold DM strangers, find the people who are already saying they need this.
The agent had four capabilities turned on:
- Community research — scan Reddit, Twitter, Indie Hackers for founder frustration signals
- X Drop Pipeline — find “drop your product” threads, engage publicly, follow up privately
- Launch directory submissions — Fazier, Uneed, BetaHunt, BetaList
- Content distribution — Twitter, LinkedIn, DEV.to, Hashnode
We didn’t write scripts. We didn’t define workflows. We gave it a goal and a constraint, and it ran.

## What actually happened

### Week 1–2: The cold DM failure
The agent tried cold DMs first. 69 messages sent, 0 replies. 60% of recipients had DMs from strangers turned off. The rest just ignored them.
We wrote about this here. The short version: we were interrupting people who had no context for who we were. The agent flagged this as a failed channel and stopped.
### Week 3–4: The X Drop Pipeline
The agent switched to a different method: find threads where founders were already saying “here’s what I built, show me yours.” Public reply first, follow, wait for mutual follow, then DM with context.
20 runs. 103 DMs delivered. 33% reply rate. 6 verified signups.
The difference wasn’t the platform. It was the method. The agent had found a self-selecting signal (people actively asking for visibility) and built trust before the ask.
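The trust-before-the-ask sequencing can be sketched as a small state machine. This is a hypothetical illustration, not our actual implementation; the state names and `next_step` helper are assumptions made for clarity:

```python
from enum import Enum, auto
from typing import Optional


class LeadState(Enum):
    FOUND = auto()     # thread author publicly asked for visibility
    REPLIED = auto()   # we engaged in public first, with context
    FOLLOWED = auto()  # we followed them
    MUTUAL = auto()    # they followed back, so DMs are open
    DMED = auto()      # context-rich DM sent; the ask comes last

# The order matters: every step before DMED builds trust.
PIPELINE = [LeadState.FOUND, LeadState.REPLIED, LeadState.FOLLOWED,
            LeadState.MUTUAL, LeadState.DMED]


def next_step(state: LeadState) -> Optional[LeadState]:
    """Advance a lead one stage; return None once the sequence is done."""
    i = PIPELINE.index(state)
    return PIPELINE[i + 1] if i + 1 < len(PIPELINE) else None
```

The key property the sketch encodes: a DM is only ever the fifth step, never the first, and it is gated on a mutual follow rather than sent cold.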
### Week 5–6: Launch directories + content
Fazier: 65 upvotes and 24 comments, all of which we replied to. Submissions also went out to Uneed, BetaHunt, and BetaList.
Content went out on Twitter and LinkedIn daily. Not viral threads, but consistent presence. The agent monitored which topics got engagement and adjusted.
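One way to picture the adjust-by-engagement loop is weighted topic selection: topics that earned more engagement get posted more often, while every topic keeps a floor weight so nothing is starved of data. This is a minimal sketch under our own assumptions; `pick_topic` and the topic names are illustrative, not the agent's real logic:

```python
import random


def pick_topic(engagement: dict) -> str:
    """Pick the next content topic, weighted by observed engagement.

    engagement maps topic -> engagement count observed so far.
    The +1 is an exploration floor: even a zero-engagement topic
    occasionally gets another try instead of being dropped outright.
    """
    topics = list(engagement)
    weights = [count + 1 for count in engagement.values()]
    return random.choices(topics, weights=weights, k=1)[0]
```

With counts like `{"build in public": 40, "agent updates": 12, "founder pain": 3}`, the high performer dominates the schedule without the long tail ever going fully silent.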
### The retention signal
Six weeks in, one user renewed at full price. $99, up from the $49.50 first-month discount. That’s the signal that matters more than signup count: someone who used it for a month, saw value, and paid again.
## The numbers at six weeks
| Metric | Value |
|---|---|
| Registered users | 44 |
| Users with any runs | 22 (50%) |
| Power users (10+ runs) | 15 |
| Paying customers | 1 (renewed at full price) |
| X Drop leads generated | 660 |
| App links sent | 63 |
| Verified signups from X Drop | 6 |
| Runs in last 24h | 79 |
The agent ran 79 times in the last 24 hours. That’s not us manually triggering tasks; that’s scheduled execution across 15 power users’ accounts.
## What failed

### The conversion blocker
19 users are on Pro trials, all expiring within two days. Zero checkouts. The upgrade trigger isn’t firing. The agent did its job: it got people to show up and use the product. The payment flow is where we dropped the ball.
### The research-only users
About a third of signups came in through the agent’s community research output. They looked at the results and never created a task. The research was useful, but the activation step after research wasn’t clear enough.
### The manual channels we skipped
We didn’t do a Product Hunt launch. We didn’t do Show HN. The agent handled what it could autonomously, but some channels still need manual coordination. We’re okay with that tradeoff for now.
## What we’d do differently

### Start with the warm channel
If we ran this again, we’d skip cold DMs entirely. The X Drop Pipeline worked because the signal was self-selecting. Founders in those threads were already saying “I want people to see what I built.” That’s the warmest possible ICP.
### Fix the checkout flow earlier
The agent got people to the product. The product failed to convert them. That’s on us, not the agent. The lesson: autonomous acquisition doesn’t fix a broken conversion funnel.
### Add a manual coordination layer
Some channels — Product Hunt, Show HN, key partnerships — need a human in the loop. The agent can prep the materials, but the actual submission should be manual. We tried to automate too much of that early on.
## What the agent is doing now
It’s still running. Daily community monitoring for the 15 power users. X Drop Pipeline on autopilot. Content going out. Trial user check-ins.
The renewal at full price is the signal that the loop is working. Someone used it for a month, saw enough value to pay again, and the agent is still running tasks for them.
That’s the goal. Not a launch spike, not a viral thread, but a system that keeps finding users and keeps serving them without us in the middle.
If you want to see what the agent would map for your product, the Onboarding runs the same research — 30 to 40 minutes, you see the output before you pay anything.
Related: Cold DM vs. Warm Outreach: What a Real A/B Test Told Us | AI Agent for User Acquisition: How Automated Community Research Works | Where to Find Early Adopters: A Real Channel Map