
The LinkedIn intent playbook: finding buyers before they post in your feed

LinkedIn buying intent is real, sparse, and hidden behind API limits. Here are the exact signals we watch, the ones we ignore, and the pipeline that makes the channel work.

Arthur · Founder, Shadow Inbox
Published Apr 14, 2026 · 14 min read

LinkedIn looks like the wrong place to hunt for buying intent. The feed is a wall of self-congratulation, most engagement is theater, and the API will not let you read the things you actually want to read. All of this is true. It is also a distraction from the fact that LinkedIn is where roughly nine in ten B2B buyers maintain their professional graph — and when a real buying window opens, the first public artifacts land there before they land anywhere else. Job changes. Comments on specific post types. Engagement pattern shifts. Company follows. The question is never whether the intent is present. It is whether you can separate four real signals from twenty that look real and aren't.

LinkedIn is where the buying committee assembles in public. Reddit is where one buyer asks the room. The signals are different; the logic is the same.

LinkedIn's surface is noisier than Reddit's. The intent is still there.

On Reddit, a buyer writes a 200-word thread describing their stack, their budget, and the three tools they've already tried. It is an explicit ask. You can run a relevance filter on it, run an intent classifier on it, and know within seconds whether the person is in-market. We wrote the full Reddit playbook around that dynamic — the sharp edge of a post-is-the-brief ritual.

LinkedIn doesn't do this. Nobody writes a 200-word thread saying "I'm evaluating CRMs under $80/seat, budget approved, deciding in 30 days." The closest analogue — a founder post asking "what are folks using for X these days" — gets a reply cloud of vendors, competitors, and influencers that drowns the OP's intent in noise. The post itself is occasionally a signal; the comment cloud under it is almost always one; the raw feed around it almost never is.

So the game on LinkedIn is different from the game on Reddit. The intent exists, but it is distributed across smaller signals, each weaker than a Reddit post on its own. You read four of them at once, not one of them in depth. The pipeline has to aggregate.

The payoff, when you get the aggregation right, is the same buyer-window detection we run everywhere else. The channel isn't secretly easier. It is just a different shape — and that shape is what most operators misread.

The four signals that actually predict a buying window.

We ran a backtest on six months of signals across roughly 600 watched accounts for two B2B SaaS customers and scored every engagement type against whether a deal started within 90 days. Four signal categories consistently predicted a window. Everything else was noise or vanity.

Signal one: role changes into buying roles.

A title change from SDR to Head of Sales Enablement at a 40-person SaaS is the cleanest signal LinkedIn exposes. The new exec has a 60-to-90-day budget review window where they rebuild the stack. We see the pattern land roughly the same way across categories: title change in week zero, first vendor-evaluation post or comment in week three to six, purchase decision in week eight to fourteen. If your tool fits the new hire's mandate, their first month is your best outbound window of the year.

The detection is a profile diff. We pull the last-seen title every week, compare to the prior week, and fire a signal when the delta crosses a predefined set of keywords: head of, director of, vp, chief, plus the category keyword (revenue, growth, ops, enablement, security, whatever maps to your buyer). We weight by company size — a role change at a 40-person company is a much cleaner signal than the same change at a 4,000-person one because the approval surface is flatter.
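A minimal sketch of that diff in Python, with keyword tuples and size weights as illustrative placeholders rather than our production values:

```python
# Weekly role-change diff. Keyword lists and size weights are illustrative.
BUYING_TITLE_PREFIXES = ("head of", "director of", "vp", "chief")
CATEGORY_KEYWORDS = ("revenue", "growth", "ops", "enablement", "security")

def detect_role_change(prev_title: str, curr_title: str, company_size: int) -> dict | None:
    if prev_title.strip().lower() == curr_title.strip().lower():
        return None  # no delta this week
    title = curr_title.lower()
    if not (any(p in title for p in BUYING_TITLE_PREFIXES)
            and any(k in title for k in CATEGORY_KEYWORDS)):
        return None
    # Flatter approval surface at small companies -> cleaner signal.
    size_weight = 1.0 if company_size <= 100 else 0.5 if company_size <= 1000 else 0.25
    return {"signal": "role_change", "title": curr_title, "weight": size_weight}
```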

Signal two: comments on specific post types — not likes, not reactions.

A comment on a founder post asking "what CRM under fifty seats are folks using?" is a buying signal. A like on the same post is not. The gap between "typed a sentence" and "tapped a heart" is enormous in intent terms, and most LinkedIn dashboards miss it because they lump both into engagement.

We index comments against three post templates: vendor-comparison posts ("what are you using for X"), stack-reveal posts from peers ("here's our current stack"), and pain-description posts from influencers in the buyer's category. A comment on any of the three that asks a clarifying question, shares a counterexample, or names a tool they've tried is high-intent. A comment that is a one-word reaction — "same", "this", "following" — is a shrug.
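A first-pass filter for the comment watcher can be as blunt as the sketch below; the shrug list and intent phrases are illustrative, and anything ambiguous routes to the LLM classifier downstream:

```python
POST_TEMPLATES = {"vendor_comparison", "stack_reveal", "pain_description"}
SHRUGS = {"same", "this", "following", "+1"}
INTENT_PHRASES = ("we tried", "we use", "switched from", "moved off")

def comment_intent(post_template: str, comment: str) -> str:
    if post_template not in POST_TEMPLATES:
        return "ignore"              # landed outside the three indexed post types
    text = comment.strip().lower().rstrip(".!")
    if text in SHRUGS or len(text.split()) < 3:
        return "shrug"               # one-word reactions carry no intent
    if "?" in comment or any(p in text for p in INTENT_PHRASES):
        return "high_intent"         # clarifying question or named experience
    return "review"                  # ambiguous: send to the LLM classifier
```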

Signal three: engagement pattern shifts against the member's own baseline.

This is the hardest signal to build and the most predictive one. Somebody who's liked two posts a week for three months and suddenly starts commenting on five vendor-category posts in seven days is showing a behavioral shift. The absolute volume is low — you won't catch this with a volume threshold. The delta against their own history is what makes it readable.

We maintain a rolling 90-day engagement baseline per watched member: average comments per week, average likes per week, typical topic cluster. When the most recent 7- or 14-day window sits more than two standard deviations above the baseline on the vendor-category slice, we surface the account. It's a subtle signal on a single person. On a list of 600 members, it fires clean about forty to sixty times a month, and those forty to sixty are the ones most worth the operator's attention.
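The trigger itself is a z-score against the member's own history. A sketch, assuming a list of weekly vendor-category engagement counts with the most recent week last:

```python
import statistics

def baseline_shift(weekly_vendor_counts: list[int], threshold_sigma: float = 2.0) -> bool:
    """weekly_vendor_counts: vendor-category engagements per week, most recent last.
    Fires when the latest week sits > threshold_sigma above the member's own baseline."""
    *history, recent = weekly_vendor_counts
    if len(history) < 8:  # want most of the 90-day baseline before trusting the delta
        return False
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 0.5  # floor so a flat baseline can still fire
    return (recent - mean) / stdev > threshold_sigma
```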

Signal four: company-level follows and team watches.

A single person following a competitor's company page is near-zero signal. Three people at the same company following a competitor within a 14-day window is meaningful. Pair it with a role change at that same company and the window probability, in our backtest, jumped somewhere in the range of three to five times over the single-signal baseline.

The logic: individual curiosity is private and cheap; team-level attention that clusters around a vendor is usually evaluation. The pattern shows up cleanest on the 30-to-200-person band where three follows represent a meaningful share of the decision-making surface.
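Detection is a sliding window over follow timestamps. A sketch, with the (member_id, followed_at) pair shape assumed for illustration:

```python
from datetime import datetime, timedelta

def team_follow_cluster(follows: list[tuple[str, datetime]],
                        window_days: int = 14, min_people: int = 3) -> bool:
    """follows: (member_id, followed_at) pairs for one account -> one competitor page.
    True when >= min_people distinct members follow inside the window."""
    ordered = sorted(follows, key=lambda f: f[1])
    for i, (_, start) in enumerate(ordered):
        members = {m for m, t in ordered[i:] if t - start <= timedelta(days=window_days)}
        if len(members) >= min_people:
            return True
    return False
```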

4 signals that actually predict a buying window
60–90 days from role change to purchase decision in most categories
300–800 accounts worth watching at any one time
2σ baseline-shift threshold on individual engagement

The signals that look like intent and aren't.

The failure mode on LinkedIn is weighting noise as signal. Four categories of engagement look like intent and consistently don't convert in our data.

Content likes on their own. A like is a two-second gesture. On most members it means "I read half the post and found it pleasant." In aggregate, a member's like stream can help you build their baseline — but any single like is worth roughly nothing as a buying signal. Teams that run outbound off a saved-search like stream are, in practice, working a cold list with prettier packaging.

Generic reactions — Celebrate, Insightful, Love. Even cheaper than a like. These exist largely for the post author's ego; the reactor rarely remembers doing it an hour later. We explicitly filter these out of our engagement baseline. They inflate volume without carrying information.

Newsletter subscriptions and profile views. Both are passive. Both were heavily marketed by LinkedIn as intent surfaces in 2023 and 2024. Neither predicts purchase in any backtest we've run. A profile view might mean the buyer is evaluating you; it might also mean they accidentally tapped your avatar. The noise-to-signal ratio is too high to be useful as a trigger.

"Open to work" status changes. This is a career signal, not a buying signal. If your tool is career-adjacent (interview prep, resume tooling, coaching) it is real. For every other B2B category it is a distraction, because the person in that status is about to change companies and their current-company context is irrelevant.

The API limits are the point, not the problem.

Every operator who tries to build LinkedIn intent from scratch hits the same wall in the first week. The Marketing API doesn't expose member-level engagement. The Sales Navigator API caps lead alerts in the low double digits per day. The Partner Program is gated, and the approved use cases are narrow enough to exclude most of what you'd want to do. Third-party scrapers live one ToS update away from getting your account nuked, which is an expensive way to discover a contract clause.

The first instinct is to treat these limits as an obstacle to engineer around. That instinct is wrong. The limits are the entire reason LinkedIn intent still works, because they price the volume-senders out of the channel. If the API were wide open, the same flood of scraped-and-templated outreach that made cold email a smoldering ruin would land on LinkedIn within a quarter. The limits are a moat.

The correct response is to design a pipeline that accepts the limits as load-bearing constraints:

  1. Narrow the target list brutally — three to eight hundred accounts, not thirty thousand. The list is the most important artifact in the whole system.
  2. Poll each account at a humane cadence — once or twice a week, not continuously. Burn the compute budget on rich data, not high frequency.
  3. Spend compute on classification, not collection. The bottleneck is judgment, not extraction.
  4. Do the outreach as an operator, not a bot. The messages are written by a human; the system surfaces the signal, the human writes the reply.

Everything downstream of these four constraints is plumbing.

The pipeline we run on LinkedIn signals.

The shape is the same three-filter pipeline we run everywhere — relevance, intent, score — adapted to LinkedIn's signal density.

Target account list (300–800 companies in ICP)
        ↓
Member graph pull (decision-makers + influencers per account, ~1×/week)
        ↓
Signal detection layer
    ├─ Role-change watcher      (profile diff, title keywords + company size)
    ├─ Comment watcher          (post templates × member overlap)
    ├─ Engagement-shift watcher (per-member 90-day baseline, 2σ trigger)
    └─ Company-follow watcher   (org-level, ≥3 overlapping follows in 14d)
        ↓
Intent classifier (LLM, ~200 tokens in, structured JSON out, temp 0)
        ↓
Score → sorted queue with decay timer → operator dashboard

Two pieces of this are worth unpacking because they are where operators get it wrong.

The member graph pull is the most expensive step. LinkedIn does not give you a clean member-to-company edge at scale, so the graph is assembled from Sales Navigator saved lists, company-page employee lookups, and manual curation for the accounts that matter most. This is a one-time-per-account setup with a weekly maintenance cost, not a daily compute burn. Treat it that way. Any architecture that re-pulls the full graph daily is fighting LinkedIn's infrastructure and will lose.

The intent classifier is small and boring. It takes the raw signal (role change text, or comment body, or engagement-shift summary), the member's company context, and your ICP profile, and returns a three-field JSON: window_open: true | false, signal_type: role_change | comment | shift | company_follow, evidence: [quoted span]. Temperature zero. Chain-of-thought via forced evidence extraction before the verdict, same pattern we laid out in the LLM intent classifier piece. The model doesn't need to be large; it needs to be consistent.
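A sketch of the call, where the prompt wording and the llm.complete client are placeholders; the load-bearing parts are the ones described above: evidence ordered before the verdict, temperature zero, JSON out.

```python
import json

PROMPT = """ICP: {icp}
Company context: {company}
Raw signal: {signal}

First quote the exact spans that support or refute an open buying window,
then give the verdict. Reply with JSON only:
{{"evidence": ["<quoted span>", ...],
  "signal_type": "role_change" | "comment" | "shift" | "company_follow",
  "window_open": true | false}}"""

def classify(llm, signal: str, company: str, icp: str) -> dict:
    # Evidence is ordered before the verdict so the model extracts first,
    # judges second. Temperature zero for run-to-run consistency.
    raw = llm.complete(PROMPT.format(icp=icp, company=company, signal=signal),
                       temperature=0)
    return json.loads(raw)
```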

The score itself is the same rubric shape we run elsewhere — intent class, signal strength, recency multiplier, account-fit multiplier — with weights tuned per customer. A fresh role change at a fit account is a top-decile signal. A stale engagement shift at a borderline account is the floor. The dashboard sorts descending and surfaces a decay timer on every item.
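Written as a function, the rubric is a product of multipliers. The weights and the 14-day half-life in this sketch are illustrative, not tuned values:

```python
import math
from datetime import datetime, timezone

SIGNAL_WEIGHTS = {"role_change": 1.0, "comment": 0.8, "shift": 0.6, "company_follow": 0.5}

def score(signal_type: str, strength: float, fired_at: datetime,
          account_fit: float, half_life_days: float = 14.0) -> float:
    # fired_at must be timezone-aware UTC. Recency decays exponentially:
    # a signal loses half its value every half-life.
    age_days = (datetime.now(timezone.utc) - fired_at).total_seconds() / 86_400
    recency = math.exp(-math.log(2) * age_days / half_life_days)
    return SIGNAL_WEIGHTS[signal_type] * strength * recency * account_fit
```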

The reply move is comment-first, message-second, DM-only-if-invited.

The worst thing an operator can do with a LinkedIn signal is fire a cold connection request with a pitch note. The second-worst is ignore the public part of the engagement entirely and go straight to a Sales Navigator InMail. Both are technically allowed by the platform. Both are the surface-level reason LinkedIn outbound has a reputation for being slimy. There is a better shape.

When the signal is a comment on a relevant post, the first move is a useful reply in the same thread. Not a pitch. Not a link. A one-paragraph contribution that reads as value-add from a peer, ideally something the OP (or the commenter) couldn't find by searching. This is exactly the anatomy we laid out in the contextual cold message playbook — stay inside the four corners of what the member has already said in public, say something useful about the specific thing, don't import external data they didn't offer.

When the signal is a role change, the first move is a low-stakes connection request with a one-sentence note that references the role change only ("congrats on the new role") and offers nothing. The ask — if there is one — comes later, after the member accepts and a real thread opens. Leading with a pitch in the connection note is the move that torches the conversion rate.

When the signal is a baseline shift or a team-follow cluster, the first move is usually a comment on one of the posts that anchored the shift, not a direct message. The member doesn't know you've been watching their pattern (and shouldn't) — the cleanest way into the conversation is to become visible in the place they're already looking.

The sequencing logic — when to move from the original channel to email, when to follow up on LinkedIn, when to stop — is the same logic we laid out in the multichannel sequencing piece. LinkedIn signals slot in as the first-touch surface; email and direct message are downstream of the comment, not the other way around.

Where this pipeline fails.

Three failure modes we've watched teams hit, in order of frequency.

Your ICP doesn't post publicly. Some categories — certain parts of government, legal, healthcare, heavily regulated financial services — have a muted LinkedIn presence regardless of seniority. The job-change signal still works there because title changes are structurally visible. The comment and baseline-shift signals don't, because the members simply don't engage. If more than half your target accounts have decision-makers with fewer than ten posts in the last twelve months, LinkedIn intent is not your channel — go back to intent surfaces where your buyers actually speak.

You scale the list instead of the classifier. The first move operators make when the signal volume feels thin is to expand from 500 accounts to 5,000. It never works. The polling infrastructure can't keep up, the signal density drops, and you end up reading shallower data on a worse list. The right move when volume feels thin is to sharpen the classifier — tune the role-change keyword list, narrow the post-template definitions, raise the baseline-shift threshold — not to widen the net.

You hand it to someone without ICP fluency. The LLM classifier catches maybe 85% of the calls cleanly. The remaining 15% are edge cases that require someone who's used the product to adjudicate — a title change that's technically lateral but actually a buying role, a comment that reads neutral but carries a known pain term, a baseline shift that correlates with a recent funding round. If that person doesn't exist on your team, the dashboard fills up with false positives and the operator stops trusting it. The failure is organizational, not technical.

The realistic throughput math.

A focused operator working a 500-account watch list in this pipeline sees, on average, somewhere in the range of 40 to 80 qualifying signals a month — rough numbers that cluster inside a reasonable band but vary by ICP. Role changes account for roughly a third of the volume. Comment signals a little more than that. Baseline shifts and team-follow clusters fill out the rest. Not all of these are workable; the top decile by score is where the reply rate and close rate actually justify the time.

Against that top-decile volume, we see contextual LinkedIn outreach pulling reply rates in the 20–35% range when the signal-to-message mapping is tight. It's not as loud as the rawest Reddit windows — Reddit still wins on peak intent clarity — but the throughput over a month is comparable once the account list is properly tuned, and the decision-makers are more senior on average.

The throughput cap is a ceiling of operator attention, not a ceiling of signal availability. This is the same structural cap we see across every intent surface — outlined at length in the signal economy pillar. The plumbing is solved. The discipline of reading well is the work.

What 2027 looks like and why it doesn't matter yet.

We expect LinkedIn to tighten its API surface further over the next 12-to-18 months, not loosen it. The playbook above assumes continued tight limits and a steady cadence of ToS updates against the scraper crowd. If anything, the moat widens for operators doing real work — because the cost of doing it badly rises faster than the cost of doing it well.

The piece of this that will change is the classifier layer. Signal fusion across surfaces — LinkedIn role change plus Reddit post plus GitHub star plus HN comment on the same person — becomes a meaningfully better signal than any one of them alone. We're already running a limited version of this inside Shadow Inbox; the real version is a 2027 problem. For now, four LinkedIn signals, a narrow list, and a patient polling cadence is enough to make the channel pay.

Where Shadow Inbox fits.

Shadow Inbox's LinkedIn beta is the pipeline above, wired up. The account list is yours. The signal watchers run on our infrastructure against the API surface and the operator-session boundary. The classifier is the same one we use for Reddit and HackerNews, tuned for the LinkedIn signal set. The dashboard surfaces scored windows with decay timers and a draft reply, and you press send from your own account. We treat the LinkedIn module the same way we treat the rest of the product — the tool watches, you decide, you send. Pricing isn't the point; the point is whether the architecture above is the architecture you'd build if you were starting today. It is — we know because it's what we did.

If you'd rather build it yourself, the steps above are the recipe. Narrow the list, watch the four signals, ignore the three vanity ones, classify hard, reply as an operator. Everything else is color commentary.

FAQ

Is this against LinkedIn's terms of service?
Reading member-visible content through a logged-in operator account, classifying it for yourself, and replying as a human is fine. Scraping behind authentication you don't own, spinning up puppet accounts, or automating at-scale connection requests is not. The line is whether a moderator reading your full activity would see a real operator doing real work or a script pretending to be one.
Why not use Sales Navigator alerts?
We do, for the signals it exposes — saved-search alerts, job-change triggers on saved accounts. But Sales Navigator caps lead alerts at a low daily ceiling and doesn't surface the comment-level or baseline-shift signals that matter most. It's a useful corner of the pipeline, not the whole pipeline.
How many accounts should we watch to start?
Three to eight hundred. Fewer than that and your signal rate is too sparse to be useful. More than a few thousand and you can't pull rich enough data at the polling cadence LinkedIn's infrastructure tolerates. The small-list, deep-data shape is forced on you by the API limits, and that's the right shape for this channel.
What counts as a baseline-shift signal in practice?
Compare a member's last 30 days of engagement against the prior 90. If they've gone from two generic likes a week to five comments on vendor-category posts in seven days, that's a shift. Absolute volume stays low — this isn't a loud signal — but the delta against their own history is what makes it predictive.
Does Shadow Inbox's LinkedIn beta need my LinkedIn password?
No. The beta runs against the Sales Navigator API for the slice of signals it exposes, and against logged-in-operator sessions you authorize for the rest. We don't store credentials, we don't impersonate, and the account actions all flow through you. The tool is signal detection and drafting — you press the send button.