And secondly, we treated the initial threshold as a v1 hypothesis, not a permanent rule, and made that explicit to Sales from day one.
What worked in practice:
1. Short feedback loops (weekly, not quarterly)
Every week we reviewed a small sample of sales-ready accounts with SDR managers:
• Which converted to meetings
• Which stalled
• Which reps felt were “noise”
That kept iteration grounded in outcomes, not opinions.
2. Adjust weights, not definitions
We rarely changed what counted as intent. Instead, we tuned:
• Recency windows (e.g. 7 → 14 days)
• Signal decay (older activity mattered less)
• Confirmation rules (e.g. 2 stakeholders instead of 1)
This avoided the feeling that “the rules keep changing.”
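To make the tuning levers concrete, here is a minimal scoring sketch. All names, weights, and thresholds (`RECENCY_WINDOW_DAYS`, `DECAY_HALF_LIFE_DAYS`, `MIN_STAKEHOLDERS`, the event fields) are illustrative assumptions, not our actual model — the point is that each lever is a single constant you can tune without redefining what counts as intent.

```python
from datetime import datetime, timedelta

# Illustrative tuning levers (not our real values):
RECENCY_WINDOW_DAYS = 14   # widened from 7 in a later iteration
DECAY_HALF_LIFE_DAYS = 7   # older activity counts for half as much each week
MIN_STAKEHOLDERS = 2       # confirmation rule: 2 distinct people, not 1

def score_account(events: list[dict], now: datetime) -> float:
    """Score one account from intent events shaped like
    {"who": stakeholder_id, "ts": datetime, "weight": float}."""
    # Recency window: ignore anything older than the cutoff.
    recent = [e for e in events
              if (now - e["ts"]).days <= RECENCY_WINDOW_DAYS]

    # Confirmation rule: require intent from multiple distinct stakeholders.
    if len({e["who"] for e in recent}) < MIN_STAKEHOLDERS:
        return 0.0

    # Signal decay: exponentially down-weight older activity.
    score = 0.0
    for e in recent:
        age_days = (now - e["ts"]).days
        decay = 0.5 ** (age_days / DECAY_HALF_LIFE_DAYS)
        score += e["weight"] * decay
    return score
```

Widening the window, softening the half-life, or lowering the stakeholder bar each changes volume in a predictable direction, which is what made "adjust weights, not definitions" workable in practice.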
3. Protect rep time with guardrails
Any experiment that increased volume had to meet a simple bar:
Did meeting-set rate or speed-to-first-touch improve?
If not, we rolled it back quickly.
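The rollback bar above is simple enough to state as a predicate. This is a sketch under assumed metric names (`meeting_set_rate`, `hours_to_first_touch`); keep an experiment only if at least one of the two rep-time metrics improved, otherwise revert.

```python
def passes_guardrail(before: dict, after: dict) -> bool:
    """Return True if the experiment improved at least one guardrail
    metric: a higher meeting-set rate, or a faster first touch
    (fewer hours). Metric names are illustrative."""
    return (after["meeting_set_rate"] > before["meeting_set_rate"]
            or after["hours_to_first_touch"] < before["hours_to_first_touch"])
```

Encoding the bar this way made rollbacks mechanical rather than a debate: if the predicate is false after a review period, the change reverts.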
4. Make changes visible and explainable
We shared a short changelog with reps:
“We tightened X because Y was creating false positives.”
Transparency mattered more than perfection.
5. Manager reinforcement
Frontline managers previewed changes before rollout and coached to them in 1:1s. That kept trust intact even when thresholds shifted.
The key insight: reps don't need perfect intent models; they need predictable ones. As long as changes feel incremental, explainable, and tied to outcomes, adoption stays high.
Hope that answers your question.