anyone else finding that Apollo/Clay lists look great on paper but perform badly on send? decent open rates but replies feel completely off-ICP. curious how others are cleaning or validating lists before burning sending volume on them
Rabbil H. I have worked with enough of these data partners to know that there is a LOT of decay in their data sets. The way we handled it in my previous roles was having our operations teams scrub the data set to validate and verify contacts and companies. This included validation checks on email formats, phone numbers, etc., plus actual verification of whether the person is still at the same company. Curious to know more about what you mean by off-ICP?
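For the format-check part, a minimal sketch of what that first pass might look like (field names and patterns are illustrative; format checks catch obvious garbage, not deliverability):

```python
import re

# Loose first-pass patterns; these flag obvious garbage only.
# Deliverability still needs a verification service downstream.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[A-Za-z]{2,}$")
PHONE_RE = re.compile(r"^\+?[\d\s().-]{7,20}$")

def format_ok(contact: dict) -> bool:
    """Return True if email and phone pass basic format checks."""
    email = (contact.get("email") or "").strip()
    phone = (contact.get("phone") or "").strip()
    if not EMAIL_RE.match(email):
        return False
    # Phone is often missing in purchased lists; only validate when present.
    return not phone or bool(PHONE_RE.match(phone))

contacts = [{"email": "jane@acme.com", "phone": "+1 (555) 010-2030"}]
clean = [c for c in contacts if format_ok(c)]
```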
Ideal Customer Profile
No, I meant in what way was it off the ICP? Did you find that the contact you were trying to reach via email was no longer at the company you were targeting, or didn't have the title you were targeting? I'm trying to get a sense of what you mean by "off".
Rabbil H. Apollo and Clay rely on scraped and aggregated data, so some manual QA is a must before you start sending. I usually spot-check a sample of contacts and companies to make sure they actually fit the ICP. Titles can be especially misleading. Someone with "Operations" in their title could be in RevOps, Finance Ops, Sales Ops, or a completely different function.
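For the title problem specifically, a simple keyword-bucketing pass (the keyword lists here are made up; tune them to your ICP) can at least route the ambiguous "Operations" titles to manual review instead of straight into a sequence:

```python
# Illustrative keyword buckets; adjust to your actual ICP definitions.
FUNCTION_KEYWORDS = {
    "revops": ["revenue operations", "revops"],
    "salesops": ["sales operations", "sales ops"],
    "finops": ["finance operations", "financial operations"],
}

def classify_title(title: str) -> str:
    """Map a raw title to a function bucket, or flag it for review."""
    t = title.lower()
    for function, keywords in FUNCTION_KEYWORDS.items():
        if any(k in t for k in keywords):
            return function
    # A bare "operations" with no qualifier is exactly the misleading case.
    return "needs_manual_review" if "operations" in t else "other"

print(classify_title("Director, Revenue Operations"))  # revops
print(classify_title("VP Operations"))                 # needs_manual_review
```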
I agree that there is some decay in the Clay lists. Also, different databases pull different things. We have a process where we pull lists initially from Clay, then run them through our HubSpot enrichment and intent signals. We only send to people who pass both systems. If you use Sales Nav, you could also verify through that. Still not perfect, but a second layer of verification helps.
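A rough sketch of that two-gate idea (the HubSpot and intent checks below are hypothetical stand-ins for whatever your enrichment actually returns):

```python
def passes_hubspot_check(contact: dict) -> bool:
    # Stand-in: in practice this would read your HubSpot enrichment,
    # e.g. confirming the contact still resolves to the same company.
    return contact.get("hubspot_company") == contact.get("clay_company")

def passes_intent_check(contact: dict) -> bool:
    # Stand-in for whatever score your intent-signals provider returns.
    return contact.get("intent_score", 0) >= 50

def sendable(contacts: list[dict]) -> list[dict]:
    """Only contacts that clear BOTH systems make the send list."""
    return [c for c in contacts
            if passes_hubspot_check(c) and passes_intent_check(c)]
```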
Rabbil H. if you are getting decent open rates, why are you not getting your desired results, say more forms filled or downloads? That to me sounds like an intent mismatch. In addition to what everyone is saying above about decay, a better ICP and a clearer ask for what we want them to do after they open the email will help. I would also look at when the send gets triggered. Simply looking at open rates is NOT a good criterion for effectiveness; I'm assuming you have other metrics for this. I have been out of outbound for a while, but back in the day I used to build trigger-based sends: someone bought a new tool or put out a LinkedIn post about looking for a solution, etc. The goal is to look at the whole exercise: what happens after the open, and why will the ICP care enough to give you that action, like booking a call?
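If it helps, the trigger-based idea in code form (the trigger names, event shape, and freshness window are all made up for illustration):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical trigger events, e.g. from a signals feed or webhook.
SEND_TRIGGERS = {"bought_new_tool", "linkedin_post_seeking_solution"}
TRIGGER_WINDOW = timedelta(days=14)  # only act on fresh signals

def should_send(event: dict, now: datetime | None = None) -> bool:
    """Queue an outbound send only when a recent, relevant trigger fires.

    Assumes event["occurred_at"] is a timezone-aware datetime.
    """
    now = now or datetime.now(timezone.utc)
    fresh = now - event["occurred_at"] <= TRIGGER_WINDOW
    return event["type"] in SEND_TRIGGERS and fresh
```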
To add on to Akshayata S., open rates are not a good metric. I've noticed that company spam/anti-phishing platforms can trigger opens and click-throughs when they test an email before deciding to deliver it. These kinds of metrics are going to become less meaningful as people deploy a more agentic approach to email management.
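One hedge against scanner-inflated opens: discount opens that land implausibly fast after delivery or carry a known security-scanner fingerprint. The threshold and the scanner hints below are guesses; calibrate against your own engagement data:

```python
from datetime import timedelta

# Guessed heuristics; tune against your own data.
MIN_HUMAN_DELAY = timedelta(seconds=10)
KNOWN_SCANNER_HINTS = ("proofpoint", "mimecast", "barracuda")

def is_probable_bot_open(open_event: dict) -> bool:
    """Flag opens that look like security-scanner prefetches."""
    too_fast = (open_event["opened_at"]
                - open_event["delivered_at"]) < MIN_HUMAN_DELAY
    ua = open_event.get("user_agent", "").lower()
    scanner_ua = any(hint in ua for hint in KNOWN_SCANNER_HINTS)
    return too_fast or scanner_ua
```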
Hi Rabbil H.! If you're relying on a single source for validation, like the tools you mentioned, a multi-source validation framework should help. One workflow is a validation waterfall with Lusha's API as the final truth layer: before a lead hits your sending tool, it passes through Lusha to confirm the email is still deliverable and the direct dial is still active. If it fails Lusha's "Triple Verification," the lead is automatically pulled from the sequence. There's a rough sketch of that gate after this message.

Joseph H. US healthcare is genuinely one of the harder ICPs to cover with a single source. The org structures are complex, roles turn over fast, and a lot of the decision-makers don't have a strong LinkedIn presence, which is where most tools pull from.
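Re the validation waterfall above, a minimal sketch of the Lusha gate. The endpoint, auth header, and response fields here are assumptions, not confirmed API details; check Lusha's API docs before relying on any of them:

```python
import requests

LUSHA_API_KEY = "YOUR_KEY"  # placeholder

def lusha_verified(email: str) -> bool:
    """Assumed shape of a Lusha person lookup; treat as a sketch."""
    resp = requests.get(
        "https://api.lusha.com/person",      # assumed endpoint
        params={"email": email},
        headers={"api_key": LUSHA_API_KEY},  # assumed auth header
        timeout=10,
    )
    if resp.status_code != 200:
        return False
    data = resp.json()
    # Assumed fields: a deliverable email and an active phone number.
    return bool(data.get("emailAddresses")) and bool(data.get("phoneNumbers"))

def prune_sequence(leads: list[dict]) -> list[dict]:
    """Keep only leads that clear the final verification layer."""
    return [l for l in leads if lusha_verified(l["email"])]
```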
