Curious how folks here are aligning enrichment with account-level intent. We’re seeing friction where SDRs manually reconcile contact data with account signals before outreach. What operational change actually reduced rep-side validation for you?
If an automated workflow has to be set up: accounts get added -> they get enriched -> SDRs see enriched data? Or maybe I didn't understand what you meant by 'SDRs manually reconcile contact data with account signals before outreach.'
The workflow you described is the ideal state. What I meant by “SDRs manually reconcile” is what often happens after enrichment. The system adds firmographics and intent signals, but reps still end up checking LinkedIn, scanning recent news, looking at product usage, or validating whether the account actually fits the ICP before reaching out. So the data is technically there, but trust isn’t fully there. The rep ends up doing a quick manual sense-check to avoid bad outreach. In a mature setup, enrichment + clear definitions + trigger logic should reduce that manual step a lot. In practice, many teams are somewhere in between. Curious how automated your current process actually feels day to day.
Hey, I have only tried a simpler workflow, with limited data points. So, the chances of that getting screwed up were probably small. Someone who has done more complex workflows might be able to help. Curious, what's the stack you're using for this? For enrichment specifically?
Simpler workflows usually mean fewer failure points. In more complex setups, the issues usually show up when multiple enrichment sources are layered in or when routing logic depends on confidence scores that aren’t very transparent. For enrichment specifically, I’ve seen teams combine something like Clearbit/ZoomInfo/Apollo for firmographics + LinkedIn scraping + first-party product or web signals, all flowing into the CRM. The real challenge isn’t the tool though — it’s defining what “enough data” means before intent is allowed to trigger action. Are you enriching at account level only, or contact + account together?
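The "what does 'enough data' mean before intent can trigger action" question can be made concrete with a small gate check. This is only a sketch; the required field names below are hypothetical placeholders, not from any particular enrichment tool:

```python
# Hypothetical "enough data" gate: the fields an account record must have
# before intent signals are allowed to trigger routing. The field names
# are illustrative assumptions, not a real schema.
REQUIRED_ACCOUNT_FIELDS = {"company_name", "industry", "employee_count"}

def enrichment_complete(record: dict) -> bool:
    """Return True only when every required field is present and non-empty."""
    return all(record.get(f) not in (None, "") for f in REQUIRED_ACCOUNT_FIELDS)
```

The point is that the definition of "complete" lives in one explicit place instead of in each rep's head.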
I have seen bad data from enrichment tools enough times not to trust automated enrichment. If the contact has enough information, I prioritize the contact; otherwise, the account. I am trying an agentic process by defining a skill in Claude to run over a batch of accounts/contacts. It's getting better with time but still needs improvement. I am new to GTM, so I am figuring this out and might have some wrong assumptions right now.
Nikhat I., thanks for sharing the details. Your workflow is definitely more multi-layered than what I have used. Vicky S., interesting. I'd love it if you could share more details in DMs. Even though I have figured out decent browser use in Claude Code, I am still not at the point where it can interact with a third-party tool in the browser, compare fields, and deliver results accurately. Lusha and Hunter.io have MCP servers that could be integrated into a Claude Code workflow, though I'm unsure of the performance. Would love to know more about your setup.
We reduced rep-side validation by flipping the workflow from contact-first to account-first. Instead of asking SDRs to reconcile signals manually, we centralized account-level intent + enrichment upstream (firmographics, buying stage, recent activity) and only routed contacts once the account crossed a clear intent threshold. Operationally, the biggest unlock was standardizing “sales-ready” criteria at the account level and enforcing it in routing—so reps trusted the data and stopped second-guessing it. Once trust was there, validation dropped naturally. Was this helpful?
This is really helpful, Anastasia C. Flipping from contact-first to account-first is such a clean shift. Centralizing intent + enrichment upstream makes a lot of sense. Otherwise you're basically asking SDRs to be the integration layer between systems. Standardizing "sales-ready" at the account level feels like the real unlock because it turns routing into a rules problem, not a judgment call. Curious, how did you define the intent threshold? Was it purely score-based, or did you require specific combinations of signals before an account could cross into sales-ready? Love this approach.
Appreciate that—totally agree on the “SDRs as the integration layer” problem. That’s exactly what we were trying to eliminate. On intent thresholds, we avoided making it purely score-based because that still invites debate. Instead, we used a hybrid model: a baseline score plus required signal combinations. Concretely, an account had to meet three conditions to be marked sales-ready:
1. Fit criteria (firmographics, ICP match, tech stack relevance).
2. At least one strong intent signal (e.g. high-intent keyword activity, demo/pricing page engagement, or inbound action).
3. One confirming signal within a defined time window (recent activity, multiple stakeholders engaged, or repeat visits).
That structure reduced false positives and made routing feel deterministic rather than subjective. SDRs trusted it because they could see why an account crossed the line, and RevOps could tune thresholds without retraining reps. The biggest lesson was keeping the logic simple and explainable; once it became opaque, adoption dropped fast.
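Those three conditions amount to a simple deterministic gate. A minimal sketch, assuming hypothetical signal names and a 14-day confirmation window (none of these specifics come from the thread):

```python
from dataclasses import dataclass

@dataclass
class Account:
    icp_match: bool                    # condition 1: fit criteria already evaluated
    strong_intent_signals: list        # condition 2: e.g. ["pricing_page_visit"]
    confirming_signal_ages: list       # condition 3: days since each confirming signal

CONFIRM_WINDOW_DAYS = 14  # assumed time window, a tunable knob for RevOps

def is_sales_ready(acct: Account) -> bool:
    """All three conditions must hold before the account is routed."""
    fit = acct.icp_match
    intent = len(acct.strong_intent_signals) >= 1
    confirmed = any(age <= CONFIRM_WINDOW_DAYS for age in acct.confirming_signal_ages)
    return fit and intent and confirmed
```

Because the rule is a conjunction of explicit checks, anyone can see exactly why an account did or did not cross the line, which is what makes routing feel deterministic.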
And secondly, we treated the initial threshold as a v1 hypothesis, not a permanent rule, and made that explicit to Sales from day one. What worked in practice:
1. Short feedback loops (weekly, not quarterly)
Every week we reviewed a small sample of sales-ready accounts with SDR managers:
• Which converted to meetings
• Which stalled
• Which reps felt were "noise"
That kept iteration grounded in outcomes, not opinions.
2. Adjust weights, not definitions
We rarely changed what counted as intent. Instead, we tuned:
• Recency windows (e.g. 7 → 14 days)
• Signal decay (older activity mattered less)
• Confirmation rules (e.g. 2 stakeholders instead of 1)
This avoided the feeling that "the rules keep changing."
3. Protect rep time with guardrails
Any experiment that increased volume had to meet a simple bar: did meeting-set rate or speed-to-first-touch improve? If not, we rolled it back quickly.
4. Make changes visible and explainable
We shared a short changelog with reps: "We tightened X because Y was creating false positives." Transparency mattered more than perfection.
5. Manager reinforcement
Frontline managers previewed changes before rollout and coached to them in 1:1s. That kept trust intact even when thresholds shifted.
The key insight: reps don't need perfect intent models; they need predictable ones. As long as changes feel incremental, explainable, and tied to outcomes, adoption stays high. Hope that clarifies things!
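The "adjust weights, not definitions" idea can be illustrated with a small decay function: what counts as a signal stays fixed, while a half-life knob controls how fast older activity fades. This is a sketch with illustrative numbers only, not anyone's production model:

```python
import math

def signal_weight(base_weight: float, days_ago: float, half_life_days: float = 7.0) -> float:
    """Exponential decay: a signal loses half its weight every half_life_days."""
    return base_weight * math.exp(-math.log(2) * days_ago / half_life_days)
```

Loosening the recency window from 7 to 14 days is then just `half_life_days=14.0`: the definition of the signal never changes, only how much old activity counts.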
Anastasia C. loved the explanation. This indeed is very helpful. Thank you!
My pleasure 😊❤️
