A quick question: As AI starts taking on more execution, it feels like we’re moving toward managing it more like a workforce than a tool. That introduces a different set of challenges: ownership of outcomes, coordination across systems & defining what AI is allowed to do. Almost like we’re recreating org design, but for humans + agents together. Are you thinking about AI in terms of tools, or starting to think about it in terms of roles and responsibilities? Thoughts please!
I don’t think you can ever replace human interaction. Trust and relationships drive a lot of deals. That said, if you use AI effectively as a tool to automate the manual busy work, you free up more time for the human-to-human interaction that drives more deals. So I love the idea of adding it as a tool, but not as a replacement.
I couldn't agree more. We're moving past the 'prompting' phase and into the 'partnership' phase. That’s exactly the philosophy we’re leaning into with Ziply AI. We don't want to just give users another tool to manage; we’re building it to handle the actual execution of social media strategy, shifting the human role from 'doer' to 'director.' It’s about managing outcomes rather than just clicking buttons.
I couldn't agree more - but as a solopreneur on the job hunt, a few tools have really been helping me automate my job search, like FullEnrich Search. Being powered by AI, it's been super helpful in waterfalling hiring manager contacts and getting me the right data to pop into mailboxes and stand out from the ATS black hole.
Zack L. The relationship side doesn’t go away. If anything, it becomes more important. The real value of AI is taking away the guesswork and busy work so reps can spend more time on actual conversations. When it works well, it’s not replacing the human part, it’s making sure they’re showing up at the right moment with the right context.
Sakshi S. This shift from doer to director is spot on. Where it gets interesting is making sure what’s being executed is actually grounded in real signals, not just more automation. Otherwise it just scales noise faster. The systems that seem to work are the ones that guide execution based on what’s actually changing with the buyer, not just predefined workflows.
Allie Harrison This is a great example of where AI actually helps. Especially in something like job search, where timing and visibility matter so much, having the right signals and contacts can make a huge difference. Feels like the common thread is using AI to get to the right people at the right time, then letting the human part take over from there.
Best to think of AI as a thinking partner rather than a tool, and to approach it with a very high level of critical thinking and scepticism, which is what most people find very difficult to do. In our behind-the-scenes tuning of AISA (AI skill assessment), our model looks at signals to understand whether a person sees AI as a one-direction tool (like a hammer) or as an "agent" or "partner". This perspective alone has been incredibly helpful in telling apart people who are okay at using AI from those who are great at it.
When it comes to working with agents that are more empowered to take actions on your behalf (like Claude CoWork for instance) I find the "responsibility ladder approach" from leadership/management best practices highly useful.
Yes. AI agents MIGHT become roles rather than tools. Plan, but always expect the unexpected. I would suggest developing a framework to audit the current inventory of AI agents, then define their roles through clear responsibility mapping, permissions (read/recommend/act), coordination workflows, measurement KPIs, and risk/failure-mode guardrails.
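A minimal sketch of what that responsibility map might look like in practice, assuming a three-tier permission model (read/recommend/act) as described above; the agent names, KPIs, and guardrails are purely illustrative:

```python
from dataclasses import dataclass, field
from enum import Enum

class Permission(Enum):
    READ = "read"            # may only inspect data
    RECOMMEND = "recommend"  # may propose actions for human approval
    ACT = "act"              # may execute actions autonomously

@dataclass
class AgentRole:
    name: str
    responsibility: str
    permission: Permission
    kpis: list[str] = field(default_factory=list)
    guardrails: list[str] = field(default_factory=list)

# Hypothetical agent inventory (names are examples, not real products)
inventory = [
    AgentRole("lead-enricher", "enrich CRM records", Permission.ACT,
              kpis=["records enriched/day"],
              guardrails=["no writes to closed deals"]),
    AgentRole("outreach-drafter", "draft follow-up emails", Permission.RECOMMEND,
              kpis=["draft acceptance rate"],
              guardrails=["human approves every send"]),
]

def can_execute(agent: AgentRole) -> bool:
    """Only agents explicitly granted ACT run without human sign-off."""
    return agent.permission is Permission.ACT

for agent in inventory:
    mode = "autonomous" if can_execute(agent) else "needs human sign-off"
    print(f"{agent.name}: {agent.responsibility} ({mode})")
```

The point of the explicit `Permission` tier is that "act" is an opt-in grant per role, so the default failure mode is a suggestion waiting for approval rather than an unreviewed action.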
Orgs that view AI as just a tool are not seeing the ROI they anticipated, nor the adoption rates among staff. You have to view it as an assistant that does the grunt work so the employee can do the work that leads to decision making. This is especially true for a company that runs lean or is early in its growth. Proper AI usage enhances you; it doesn't replace you.
Ozan D. I like the thinking partner framing, but I feel like most teams aren’t actually set up for that yet. They say they want a partner, but the system doesn’t have enough context or clean inputs to really reason. So it ends up either acting like a basic tool or people just stop trusting it. Feels less like a mindset problem and more like a foundation problem.
Mani This is a really grounded way to think about it. The permissioning part is huge. A lot of things break simply because it’s not clear what the system should act on vs just suggest. Also agree on failure modes. That’s usually where trust gets lost, not when things go right but when they go wrong.
Vern H. Yeah this tracks. If it’s just another tool, people treat it like one and usage drops. If it actually takes work off their plate and helps them move faster, adoption looks very different. The "enhance, not replace" point is right, but it only works if people feel the system is actually helping them, not giving them more to think about.
