If predictive AI models are getting better every year, why do GTM teams still miss targets so often; what’s the disconnect?
In sales, we’ve all been there. Nothing left to farm. Nothing real to hunt. Yet leadership mandates an all-in push based on the data they’re seeing. Budget pressure. Board optics. A generous TAM model. The model isn’t lying; it’s incomplete. When reality on the ground diverges from the forecast, execution gets blamed. But the miss usually started upstream, in the assumptions leaders couldn’t or wouldn’t challenge.
This hits hard. When upstream assumptions go unchallenged, the model looks confident while reality on the ground tells a different story, and execution ends up holding the bag.
Most unattainable targets aren’t an AI or data problem. They’re set through a political lens: compromise numbers, optimism bias, and incentive smoothing, then handed to execution teams to absorb the risk. That’s when the egos kick in. The numbers stop being signals and start becoming positions to defend.
This is such a great callout. When targets are shaped by compromise and optics, the risk quietly shifts to execution and that’s where things start to break. I really like how you framed it as numbers turning into positions to defend instead of signals to learn from. Once that happens, it gets a lot harder to have honest conversations about what’s actually changing and what to do next.
That’s a fair point, Tony W. No model or plan really works if it ignores the human side of selling. Buyers still respond to how well sellers understand their problem, ask the right questions, and guide the conversation. When forecasts or targets don’t account for where sellers actually are in terms of skill and support, the gap shows up quickly. Data can point to issues, but improvement still has to happen person-to-person.
