Shadow Inbox/blog

Case study: how one agency booked 40 calls from Reddit in 60 days

An outbound agency booked roughly 40 calls in 60 days from Reddit alone. Full timeline, weekly numbers, the ban scare, and what they kept after.

Arthur, Founder, Shadow Inbox
Published Apr 02, 2026 · 8 min read
Forty calls in sixty days, from Reddit alone, by a two-person team at a Bucharest-based outbound agency that runs LinkedIn-and-cold-email for B2B SaaS clients. They started skeptical — the agency partner who runs the channel told us, before the test began, that he'd give it three weeks before pulling the plug.

He didn't pull the plug. Sixty days in, Reddit was their second-best channel by booked-call volume, behind LinkedIn and ahead of cold email. This is the full timeline: what they did week by week, the numbers at each step, the reply-rate jump in week three that changed everything, the ban scare in week nine, and what they kept versus dropped after the test ended.

"The hard part wasn't finding the buyers. It was learning to write a reply that didn't sound like every other vendor in the thread."

The agency partner, post-test
40 calls booked in 60 days
11.3% reply rate after the week-3 retune
$2.7k total tooling spend
0 cold-email tools used

The setup

The agency runs outbound for roughly a dozen B2B SaaS clients in the $2M-$15M ARR range. Their bread and butter is LinkedIn outreach plus templated cold email. They picked one client for this test: a developer-tools company selling to engineering managers. Average deal size around $18k ACV. Sales cycle around 45 days.

Why this client? Because the buyers were technical, which meant Reddit was where they actually hung out. Pitching a CFO tool in r/sales would have been a different exercise entirely.

The team: two people. One handled signal triage and reply drafting, the other handled outreach follow-up and call scheduling. They worked in 90-minute blocks twice a day, morning and late afternoon.

Week 1: subreddit selection and intent scoring

They started by mapping the subs where their client's buyers actually post. Roughly five subs made the final cut: r/devops, r/sre, r/kubernetes, r/ExperiencedDevs, and r/Engineeringmanagers. The first three were obvious; the last two came from looking at where their existing customers had posted historically.

They piped these into a signal monitor and set the initial filter wide: any post matching "looking for", "recommendations for", "alternative to", or "tools for" in the body or title. Wide filter, lots of noise, intentional. They wanted to see what the raw firehose looked like before tightening.
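As a concrete sketch, that week-1 wide filter is just a substring match over title and body. The phrase list below is the one from the test; the function itself is our illustration, not the team's actual tooling:

```python
# Week-1 wide intent filter, sketched in Python. The phrase list comes
# from the case study; the function name and shape are illustrative.
WIDE_PHRASES = ("looking for", "recommendations for", "alternative to", "tools for")

def matches_intent(title: str, body: str, phrases=WIDE_PHRASES) -> bool:
    """True if any intent phrase appears in the title or body (case-insensitive)."""
    text = f"{title}\n{body}".lower()
    return any(phrase in text for phrase in phrases)
```

Swapping `WIDE_PHRASES` for a tighter tuple is the one-line retune the team made in week 3.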

Volume that first week: roughly 180 candidate posts surfaced. Of those, the team manually scored about 60 as intent-positive. That's a 33% intent rate on a wide filter — better than they expected.

The week-1 reply attempts: 0. They spent the entire week reading and scoring, not engaging. This was deliberate. The partner had read the no-ban Reddit reply guide and decided the team needed a feel for the conversation tone before opening their mouths.

Week 2: first replies, 4.1% rate

Week 2 they started replying. The opener template was generic — something like "Hey, saw your post about X. We had the same problem at a previous gig and ended up with [vendor category]. Happy to share what we tried if useful."

Volume: 78 replies sent across the five subs over five working days. Responses: 3 actual back-and-forth conversations. Booked calls: 0.

Reply rate: 3.8%. The team called it 4.1% later when they recalculated against actual eligible replies (some of their 78 went to threads that were too cold, which they decided not to count). Either way, mediocre.

The post-mortem was brutal. They reread their own replies and realized the openers all sounded interchangeable. The reply was technically contextual — they were referencing the post — but it didn't feel contextual to the recipient. It read like a vendor reply.

Week 3: the keyword retune and the jump to 11.3%

Two changes went in at the start of week 3. The first was the keyword filter. They dropped "tools for" and "alternative to" from the primary surface and tightened to just "looking for" plus "recommendations for". This cut volume by roughly half — from 180 candidate posts a week to about 90 — but doubled the intent-positive rate from 33% to around 65%.

The second change was the reply template. Instead of "saw your post about X", they switched to quoting the specific pain phrase from the original post verbatim. Example: if the poster wrote "I'm tired of paying $800/mo for log aggregation that loses half my traces", the reply opened with "the lost-traces problem at $800/mo is exactly why we switched off [vendor]". Then the actual recommendation followed.
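In code, the retuned opener is little more than string assembly around the quoted phrase. A minimal sketch; the function name and the `old_vendor` placeholder are ours, not the team's:

```python
def verbatim_opener(pain_phrase: str, old_vendor: str) -> str:
    """Open a reply by quoting the poster's exact complaint back to them.
    old_vendor is whatever you actually moved off; a placeholder here."""
    return f'"{pain_phrase}" is exactly why we switched off {old_vendor}.'

print(verbatim_opener(
    "tired of paying $800/mo for log aggregation that loses half my traces",
    "[vendor]",
))
```

The point is not the template itself but that the quoted span is copied verbatim, not paraphrased.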

The numbers that week: 44 replies sent (lower volume because of the tighter filter), 5 conversations, 2 booked calls. Reply rate jumped to 11.3%.

That number — 11.3% — became the team's benchmark for the rest of the test. Anything below 9%, they retuned. Anything above 13%, they suspected they were over-fitting to one sub and double-checked.
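That retune rule is simple enough to state as a function. The 9% and 13% thresholds are the team's; the function is our sketch:

```python
def retune_decision(reply_rate: float) -> str:
    """Apply the team's weekly benchmark to a reply rate (as a fraction)."""
    if reply_rate < 0.09:
        return "retune the filter"
    if reply_rate > 0.13:
        return "check for over-fitting to one sub"
    return "hold steady"
```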

The mechanism is simple: when someone reads your reply and recognizes their own words, the social cost of ignoring you goes up. They wrote that complaint. Now a stranger on Reddit is responding to that exact complaint and offering a thought. Replying back is the path of least resistance. That's the whole insight from the contextual cold message, played out in production.

Weeks 4-8: the steady cadence

For five weeks the team ran a tight loop. Every Monday morning, the partner reviewed the prior week's signal output and adjusted the filter if conversion had drifted. The two operators worked through about 60-80 surfaced posts per week, replying to the 40-50 that scored intent-positive after manual review.

Weekly numbers settled into a range:

  1. Posts surfaced: 70-90 per week.
  2. Replies sent: 35-50 per week (filtered down from surfaced).
  3. Conversations: 4-7 per week.
  4. Booked calls: roughly 5 per week.
  5. Show rate on calls: around 75%.

That cadence — 5 calls a week, week after week — is what got them to 40 booked calls by day 60. (Actual count was somewhere between 38 and 42 depending on how you count rebooks and reschedules; they round to 40.)

The conversion math from signal to call ran roughly: 70 surfaced posts → 40 worth replying to → 5 conversations → 4-5 calls booked, of which 3-4 actually showed.
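Sketched as arithmetic, taking the lower end of each reported range, the funnel's stage rates fall out directly. The counts come from the paragraph above; the loop is just division:

```python
# A representative week's funnel from the case study, lower end of each range.
surfaced, replied, convos, booked, showed = 70, 40, 5, 4, 3

stages = [
    ("worth replying to", replied, surfaced),
    ("conversations", convos, replied),
    ("calls booked", booked, convos),
    ("calls showed", showed, booked),
]
for name, num, denom in stages:
    print(f"{name}: {num}/{denom} ({num / denom:.1%})")
```

The per-stage rates are unremarkable on their own; the channel works because the top of the funnel refills every week.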

Week 9: the ban scare and recovery

Day 58, one of the operators got a temp-ban from r/devops. Specifically the sub, not sitewide. The trigger: they had posted too many top-level comments across multiple threads in a 24-hour window, and one of them had a link in it (the link was to a public GitHub repo, not their client's product, but the auto-mod doesn't care).

Panic for about an hour. They thought the whole channel was burning down.

Recovery was straightforward. They (a) accepted the ban without appealing it, which would have escalated, (b) switched to DMs for that sub for two weeks, replying to the same posts but in private rather than as comments, and (c) slowed the comment cadence in the other four subs to no more than 3 top-level comments per account per day.
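The cadence cap in step (c) can be sketched as a rolling-window guard. The 3-per-24-hours limit is the one the team settled on; the class itself is our illustration, not anything they actually ran:

```python
from collections import defaultdict
from datetime import datetime, timedelta

class CommentBudget:
    """Allow at most `limit` top-level comments per account in any
    rolling 24-hour window. Limit of 3 matches the post-ban cadence."""

    def __init__(self, limit: int = 3):
        self.limit = limit
        self.history = defaultdict(list)  # account -> comment timestamps

    def allow(self, account: str, now: datetime) -> bool:
        cutoff = now - timedelta(hours=24)
        recent = [t for t in self.history[account] if t > cutoff]
        self.history[account] = recent
        if len(recent) >= self.limit:
            return False
        recent.append(now)
        return True
```

Anything the guard rejects gets deferred or moved to DMs rather than posted anyway.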

DM reply rates ran a touch lower — around 9% versus the 11.3% they'd been getting on public comments — but the calls kept booking. By day 65 the operator was back commenting in r/devops with no further issues.

The lesson the team took away: account hygiene is real but recoverable. Sub-level bans are not sitewide bans. Don't appeal, don't double down, just slow down for two weeks and the channel comes back. We covered the hygiene side in more detail in the no-ban reply guide.

What they did after the test

The agency kept Reddit as a channel for that one client and pitched it as an add-on service to two more. The internal review at day 70 produced a list of what stayed and what got cut.

What stayed (roughly 60% of the workflow):

  1. The signal monitor with the tight "looking for" plus "recommendations" filter.
  2. The verbatim-quote opener template.
  3. The 90-minute morning and late-afternoon work blocks.
  4. The single enrichment vendor for converting Reddit usernames to work emails.
  5. The Notion board for tracking each conversation through to booked call.

What got cut (the other 40%):

  1. The daily standup the team had been running. Switched to weekly. The work didn't need that much coordination.
  2. A second enrichment vendor they'd been A/B-ing. The cheaper one won on coverage; they consolidated.
  3. A LinkedIn touch they'd been adding 3 days after the Reddit reply. It didn't move the booked-call rate. (For why a LinkedIn touch can work in other sequences, see multi-channel sequencing when the trigger is a social post.)
  4. Replying to posts older than 36 hours. Conversion was sub-2%, not worth the time.

Total tooling spend across the 60 days: roughly $2.7k. No cold-email platform, no LinkedIn automation, no contact database subscription. Just a signal monitor, an enrichment lookup, a calendar tool, and a shared inbox.

What we'd watch if you copy this

This case worked because three things lined up: a technical buyer who actually uses Reddit, a client product with a clear "alternative to" competitor (which gave the keyword filter something to bite on), and a two-person team disciplined enough to read posts before replying.

If any of those three are missing, expect different numbers. A non-technical buyer (CFO, HR, marketer) hangs out on Reddit much less than an engineer. A product without a named competitor in the space ("we're the first AI X for Y") gives the keyword filter nothing to surface against. A solo founder doing this part-time will book in the range of 1-2 calls a week, not 5, off the same signal density.

The piece that's most copyable is the verbatim-quote opener. It's free, it requires no tooling, and it lifts reply rates everywhere social-post outbound is happening. The piece that's least copyable is the cadence — sustaining 5 calls a week from one channel takes real operator discipline, not just a good signal feed.

● FAQ

Was the agency paying for any tooling beyond Shadow Inbox?
Yes. Roughly $2.7k total over the 60 days: signal monitoring, an enrichment vendor for emails, and a scheduling tool. No cold-email platform, no LinkedIn sequencer. The whole stack ran out of a shared inbox and a Notion board.
Why did the reply rate jump from 4.1% to 11.3% in week 3?
Two changes. First, they tightened the keyword filter to 'looking for' plus 'recommendations' instead of all intent patterns. Second, they switched from a generic opener to one that quoted the specific pain phrase from the original post.
What was the Reddit ban scare in week 9?
One account got temp-banned from a single subreddit for posting too many top-level comments in 24 hours. Not a sitewide ban — just sub-level. They recovered by switching to DMs for that sub and slowing the comment cadence elsewhere.
Would this work for a solo founder, not an agency?
Yes, with caveats. The agency had two people splitting the workflow, which let them sustain a 5-call-a-week pace. A solo founder doing this part-time would realistically book in the range of 1-2 calls per week off the same signal volume.
What did they drop after the test?
Roughly 40% of the workflow: the daily standup ritual, the second enrichment vendor (they consolidated to one), and the LinkedIn touch in the sequence (didn't move the needle for this signal type). Kept the rest.