I have built this, but it works primarily around commitments & stalled deals. It queues up follow-ups depending on responses or activity from the leads, and keeps the deal in motion automatically. Admins can tweak the cadences for their reps however they want. Context is built from the tools you connect to it (email, calendar, CRM, others), and drafts are created in Gmail, inside existing threads if those exist. https://cosos.xyz/ Don't want to self-promote too much as it’s still in pre-launch, but happy to show how it works in practice, free of charge!
I think the real question is: what do you actually need the agent to know?

Quick background context: I’ve spent the last few years running GTM from 0-1 at a venture-backed company, then building an agent specifically for that motion and beyond for other startups. The pattern I keep seeing: teams that say “build” almost always mean “configure”, and teams that say “use” often underestimate what using well actually requires.

Here’s what kills the build-it path: not the agent, the context problem. By the time you’ve wired anything GTM-related into something coherent and useful, with workspace isolation, multi-user permissions, a feedback loop that actually learns, and signal routing that doesn’t spam the wrong person, you’ve built a data infrastructure company. You haven’t done GTM work or closed deals in three months.

The hard part isn’t generating a follow-up draft, it’s knowing:
- Which deals are actually at risk vs just slow
- What that specific account’s last 60 days looked like before the conversation started
- Who on the team should act, and in what form

What I think GTM teams actually need is AI judgment, not AI engineering. The skill is knowing what a good output looks like vs a pattern-matched hallucination, when to trust the agent vs override it, and how to define “done” for an autonomous action.

The “build” instinct is right about one thing: you need to understand how these systems work to hold them accountable, but building the infrastructure yourself almost never compounds. Using something well, with real judgment about when it earns trust, does.

The teams I’m watching win aren’t the ones writing the most Python. They’re the ones who understand their revenue motion precisely enough to tell an agent what matters, and can tell when it gets it wrong.
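(to make the “at risk vs just slow” point concrete: a minimal sketch, assuming a hypothetical deal record with illustrative fields like `last_inbound` and `avg_reply_gap`; the 2x-baseline rule is a made-up placeholder, not anyone’s production heuristic)

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical deal snapshot; field names are illustrative, not a real CRM schema.
@dataclass
class Deal:
    name: str
    last_inbound: datetime      # last reply/activity from the buyer
    next_step_agreed: bool      # is there a concrete commitment on the books?
    avg_reply_gap: timedelta    # this account's historical response cadence

def classify(deal: Deal, now: datetime) -> str:
    """Separate 'actually at risk' from 'just slow' against the account's own baseline."""
    silence = now - deal.last_inbound
    # Slow but healthy: quiet, yet within this buyer's normal cadence, with a next step agreed.
    if deal.next_step_agreed and silence < 2 * deal.avg_reply_gap:
        return "slow"
    # At risk: silence well beyond baseline, or no agreed next step at all.
    if not deal.next_step_agreed or silence > 2 * deal.avg_reply_gap:
        return "at_risk"
    return "slow"
```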
thanks for pinging, noticed this only now.

Nikhat I. exactly, trust is earned incrementally. we start with “here’s what you should know” and only graduate to “here’s what I did” once the system has proven it reads the situation/data correctly

Gururaj P. to add to Nikhat’s example: same logic applies post-signup. a user connects their tools, starts exploring, invites a teammate. that’s a buying signal, not just a product event. the “system” should detect that pattern, surface it to the team with context (“signed up 2 days ago, connected 3 integrations, matches ICP”), and draft a personalized outreach/follow-up, not a generic “how’s it going?” drip email.

where it gets interesting is when the system learns which signals actually convert. not every integration connect means intent, but an integration connect + a second session within 24h + matching your best-customer profile? that’s when the system should act, not wait for a human to notice in a dashboard
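(that combination rule, as a minimal sketch; the event fields `connected_integration`, `sessions_last_24h`, and `matches_icp` are my assumptions, not anyone’s actual schema)

```python
from dataclasses import dataclass

# Illustrative product-event snapshot for one new signup; all fields are assumed.
@dataclass
class SignupActivity:
    connected_integration: bool
    sessions_last_24h: int     # a second session within 24h signals returning intent
    matches_icp: bool          # fits your best-customer profile

def should_act(a: SignupActivity) -> bool:
    """Any single event is noise; the combination is the buying signal."""
    return a.connected_integration and a.sessions_last_24h >= 2 and a.matches_icp

def route(a: SignupActivity) -> str:
    if should_act(a):
        # Surface to the team with context and draft personalized outreach,
        # rather than waiting for a human to notice it in a dashboard.
        return "draft_outreach"
    if a.connected_integration:
        # Interesting but not yet intent: keep watching.
        return "watch"
    return "ignore"
```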
I’m currently building close to this at 3 early-stage companies. we start by setting up the “analyze -> decide” loop to the point where the system is trustworthy enough to let it “act/execute” semi-automatically. probably easier at the early stage vs larger orgs, but the main benefit so far is the continuous prioritization and understanding of what should be moved & worked on, thanks to the real-time understanding of customer journeys
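(roughly how I’d sketch that graduation from “analyze” to “act”, tying in the incremental-trust point above; the review counts and thresholds are arbitrary placeholders, purely illustrative)

```python
from enum import Enum

class Mode(Enum):
    ANALYZE = 1   # "here's what you should know"
    DECIDE = 2    # "here's what I'd do, approve?"
    ACT = 3       # "here's what I did" (semi-automatic execution)

def mode_for(approval_rate: float, reviewed: int) -> Mode:
    """Graduate autonomy per action type once human reviews confirm the system
    reads the situation/data correctly. Thresholds here are made up."""
    if reviewed < 20:
        return Mode.ANALYZE          # too little evidence: only surface context
    if approval_rate >= 0.95:
        return Mode.ACT              # proven: execute, report afterwards
    if approval_rate >= 0.80:
        return Mode.DECIDE           # propose, human approves before execution
    return Mode.ANALYZE
```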
thank you Chris! sort of yes, but instead of having to switch over to a new CRM, it works on top of the one (+ other tools) you’re already using. currently in alpha, so I’m here looking for connections & learnings!
