# Interview: Jennifer Walsh - Mid-Market Software VP Sales
## Metadata
**Date:** January 15, 2026
**Duration:** 32 minutes
**Interviewer:** Marcus Chen, Solutions Architect, Salesloft
**Interviewee:** Jennifer Walsh, VP of Sales, Velocity Software (Mid-Market SaaS, $80M ARR)
**Location:** Virtual (Zoom)
**Context:** Salesloft + Clari integration demo discovery call
---
## Interview Transcript
[00:00-00:45] OPENING & CONTEXT SETTING
[MARCUS] Thanks for taking the time today, Jennifer. I know VP Sales schedules are pretty packed. Before we dive in, I'd love to understand where Velocity is right now and what's on your mind when you think about scaling the sales organization.
[JENNIFER] Of course, happy to chat. So we're at about $80 million in ARR right now, and we've grown from 30 reps to just under 70 in the last 18 months. It's been... exhilarating and terrifying at the same time. [laughs] The thing keeping me up at night is whether we can maintain the quality of our selling motion as we push toward 100 reps by end of next year.
[MARCUS] That's significant growth. From 30 to 100 is basically building a new sales organization. What does your current team structure look like?
[JENNIFER] We have three pods—enterprise, mid-market, and SMB. The enterprise team is pretty mature; those reps have been with us for 2-3 years. But the other two pods—that's where the friction is. In the last year, we've onboarded probably 25 net new reps, and retention in the first 12 months has been about 75%, which is reasonable but not great. It tells me the ramp time is longer than we'd like, and the feedback I'm getting from managers is that playbook consistency is suffering.
[MARCUS] Seventy-five percent is actually solid for high-growth, but I hear you on the playbook consistency piece. When you say that's suffering, what does that look like in practice?
[JENNIFER] Different discovery calls. Different qualification criteria. Some reps are using very aggressive discounting; others are staying firm on pricing. The sales cycle is all over the map—I've got some deals closing in 30 days and others that drag for six months for the same deal size. That inconsistency makes forecasting a nightmare. Finance can't predict anything, and I can't coach effectively if I don't know what "good" looks like.
---
[02:15-08:30] BIGGEST FEARS AROUND GROWTH & HIRING
[MARCUS] That's a really common pain point. So when you think about the jump from 70 to 100 reps, what's your biggest fear? What keeps you up at night beyond just consistency?
[JENNIFER] [pauses] Honestly? Three things. First, hiring the right people at scale. We've been fortunate to hire some really talented folks, but we've also made some misses. I don't have a great system for identifying top potential before they're fully ramped. We're kind of learning on the job with some reps, and that's expensive. Second, onboarding velocity. Right now, it takes our new reps about 3-4 months to hit quota. Some hit it in 8-10 weeks, and a few are still struggling at month five or six. I want to compress that to 2-3 months consistently, but I don't have a clear roadmap for how to do that at scale.
[MARCUS] What would it mean for you if you could compress that to 12 weeks consistently?
[JENNIFER] If 65% of our reps hit quota in their first year versus the 55% we see today, that's literally millions in additional revenue. Let's say the average rep generates $1.2 million in their first year once fully ramped to quota. Move a cohort of 10 new reps from 55% to 65%, and that's one more rep at quota, or $1.2 million in incremental ARR we're leaving on the table every year. And that compounds. Next year, we onboard 30 reps with the same 12-week ramp? The math gets really good.
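For readers following along, here is Jennifer's ramp arithmetic written out as a minimal sketch; the figures are hers, the per-cohort framing is an illustration.

```python
# Jennifer's ramp math, made explicit: if 65% of a 10-rep cohort hits quota in
# year one instead of 55%, that is one additional quota-carrying rep.

cohort_size = 10
first_year_value_at_quota = 1_200_000   # $ a ramped rep generates in year one
current_share_at_quota = 0.55
target_share_at_quota = 0.65

incremental_arr = cohort_size * first_year_value_at_quota * (
    target_share_at_quota - current_share_at_quota
)
print(f"incremental ARR ≈ ${incremental_arr / 1e6:.1f}M per 10-rep cohort")  # ≈ $1.2M
# A 30-rep cohort next year at the same uplift would be roughly 3x that.
```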
[MARCUS] Walk me through your onboarding program. What does the first 90 days look like?
[JENNIFER] Week one and two are classroom training—product, industry, competitive landscape, our process. Weeks three and four, they're shadowing top performers and starting to take meetings with me or a manager coaching them. By week five, they're on their own territory, and we basically watch them struggle or succeed for the next three months. [laughs] We have weekly one-on-ones, manager reviews of calls, that kind of thing. But here's the brutal truth—we're not systematically capturing what our top performers are actually doing. I know Sarah in enterprise is closing at 32%, and Tom is at 28%, and that's two of our best. But can I articulate exactly why Sarah is better? Not really. I can feel it when I listen to her calls, but I can't quantify it or replicate it at scale.
[MARCUS] So you're basically leaving knowledge on the table. The top 20% have something figured out, but you're not bottling it and transferring it down.
[JENNIFER] Exactly. And when someone decides to go to a competitor—and we've had a couple of that caliber leave—that knowledge walks out the door. It's maddening. I wish I had a system to capture their discovery questions, their objection handling, how they structure pricing conversations. That's the playbook.
[MARCUS] Let's circle back to the hiring piece. You mentioned you don't have a great system for identifying top potential. Walk me through your hiring process.
[JENNIFER] We do interviews, presentations, sales simulations. The simulations are pretty good, honestly—we role-play a deal scenario and see how they think on their feet. But it's still fairly unstructured. The person who interviews them changes, the criteria changes a bit depending on the manager. And then once they're hired, I don't have a baseline to compare their early performance against their interview performance. Do the high scorers on the sim consistently outperform? I don't actually know. It's something I've been meaning to dig into.
[MARCUS] That's a really powerful insight. If you could see a correlation between your hiring data and eventual performance outcomes, how would you use that?
[JENNIFER] Oh man, we'd probably improve hiring quality dramatically. Right now, we're batting maybe 70% on hiring—seven out of ten hires work out, two or three are either mediocre or don't make it. If we could get to 85%? That's another 10-15% improvement in our ramp curve because we're not spending energy on people who weren't going to make it anyway. And we could be more selective about promotion into senior roles.
---
[08:45-15:30] CAPTURING TOP PERFORMER BEHAVIORS & PLAYBOOK
[MARCUS] Okay, so let's say we could solve the playbook capture problem. You mentioned call recordings, shadowing, and coaching notes. Today, what does that infrastructure look like? Are you recording all calls?
[JENNIFER] We're recording about 60% of calls consistently. The reps know we're recording, it's not secret, but it's not mandatory. So we're obviously getting a biased sample—probably more of the good calls and fewer of the rough ones. [laughs] I've been pushing for 100% recording, but we have some reps who are resistant and some who worry about customer comfort. Which is fair. But we're definitely missing data. And here's the problem—even with the calls we do record, we don't have a systematic way to analyze them. I'll listen to maybe five calls a week. That's a few hours. I'm looking for key moments, but I'm not tracking patterns.
[MARCUS] So if you had 100% of calls recorded with AI analysis—conversation intelligence—what would you want to know?
[JENNIFER] [immediately] Talk time ratios. Are my reps asking enough questions versus pitching? Discovery depth. Are they really understanding the customer's problems, or are they surface-level? Competitor mentions—who's coming up in conversations and how are we being positioned against them? Specific objection handling—what objections come up most, and who's handling them well? And honestly, I want to see the correlation between these metrics and win rates. Do reps with lower talk time and longer discovery windows win more? I think the answer is yes, but I want to prove it.
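A minimal sketch of the correlation check Jennifer is describing, assuming per-deal call metrics and outcomes have already been exported to a flat file; the file name and column names are hypothetical, not Gong's or Salesforce's actual schema.

```python
import pandas as pd

# Hypothetical export: one row per closed deal, with per-deal call metrics and
# a won/lost outcome. Column names are illustrative only.
calls = pd.read_csv("call_metrics_by_deal.csv")

metrics = ["talk_time_ratio", "discovery_minutes", "question_count"]
correlations = calls[metrics].corrwith(calls["won"].astype(float))
print(correlations)  # a negative talk_time_ratio correlation would support her hypothesis
```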
[MARCUS] That's really data-driven. Are you using any tool today for conversation intelligence?
[JENNIFER] No, actually. We use Gong for some recording and call notes, but we're using it in a fairly basic way. We're not running queries across a lot of calls. And to be honest, we're only using it for quality assurance and coaching on deals that went sideways. We're not using it as a proactive tool to understand top performer patterns.
[MARCUS] That's a really common gap. People buy Gong, Chorus, tools like that, and they use them at 20% of their capability. Now, let's talk about top performer shadowing. You mentioned that happens, but what does that actually look like?
[JENNIFER] It's very informal. A new rep will shadow Sarah, my top closer, for maybe a couple of meetings a month. They'll listen in, take notes. Afterward, we might grab 15 minutes and Sarah will walk through what she was thinking. But there's no structure. I don't have a shadowing curriculum. I don't track which reps shadow which top performers. And honestly, some top performers are really generous with their time and knowledge, and others less so. It creates this uneven development curve.
[MARCUS] And coaching notes? How are you documenting the insights from those shadowing sessions?
[JENNIFER] [laughs] I'm not, honestly. Most of the coaching notes are in Salesforce, and they're pretty informal. "Talked to Tom about objection handling on budget." That's about as detailed as it gets. I'm not capturing, "Here are the specific six questions Sarah asks to uncover budget constraints, and here's why they're effective." That level of detail is in Sarah's brain, not in our system. And when new reps shadow her, that context isn't there for them either.
[MARCUS] So if we could capture that—a structured framework of your top performer behaviors with specific examples, and then that became part of your onboarding and coaching curriculum—how valuable would that be?
[JENNIFER] [pauses, thinking] It would be transformative, honestly. Because right now, top performer success looks like luck or talent or experience. But what if it's actually just technique? What if Sarah is doing five specific things in discovery, and if we teach that to 70 reps, we move the entire team's performance curve? That's the hope, but we're not there yet.
[MARCUS] What's the biggest barrier to that right now?
[JENNIFER] Time and structure. I don't have the time to listen to enough calls, identify the patterns, and document the playbook. And I don't have a structure to say, "Here's the approved discovery playbook, here's objection handling, here's pricing." We kind of have implicit playbooks, but not documented ones that can scale. If I had that, I could train faster, onboard better, and coach more effectively.
---
[15:45-21:15] PIPELINE COVERAGE RATIO & FORECASTING
[MARCUS] Let's shift to forecasting and pipeline. I know that's a big part of your role as VP Sales. What's your target pipeline coverage ratio?
[JENNIFER] We're running about 3.5x coverage right now. So for $80 million in ARR, we need $280 million in pipeline at the start of the year to hit our number with confidence. But here's my problem—I don't know if that's the right number. It feels right intuitively because it's what the board expects and what we've hit before. But I haven't done the math on what our actual coverage should be based on our conversion rates, win rates, and sales cycle.
[MARCUS] That's a really important insight. Let's dig into it. What's your current win rate?
[JENNIFER] Enterprise is about 35%. Mid-market is 28%. SMB is 22%. So blended, we're right around 28-29%.
[MARCUS] And your average sales cycle?
[JENNIFER] That's where it gets messy. Enterprise is four to six months. Mid-market is two to four months. SMB is six to eight weeks. So the average is probably three and a half, four months? But it varies dramatically. And that's part of why forecasting is hard. I have some deals that move at lightspeed and others that stall out.
[MARCUS] So here's the math question. If you need $80 million in ARR, your blended win rate is 29%, and your average sales cycle is 4.5 months, what should your coverage ratio be? [pauses, does calculation] You should be running about 4.1x coverage. You're at 3.5x, which means you're probably about 15-20% light on pipeline at any given time. That's risky.
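Marcus does not show his formula on the call, so here is one minimal sketch that reproduces his ~4.1x figure: start from the naive 1 / win-rate coverage and discount for pipeline that cannot close within the period given the multi-month cycle. The 0.84 in-period close factor below is an assumption chosen to match his number.

```python
arr_target = 80_000_000        # revenue target ($)
blended_win_rate = 0.29        # enterprise / mid-market / SMB blend
in_period_close_rate = 0.84    # assumed share of pipeline winnable in-period

naive_coverage = 1 / blended_win_rate                               # ~3.4x, win rate only
required_coverage = 1 / (blended_win_rate * in_period_close_rate)   # ~4.1x

required_pipeline = arr_target * required_coverage
current_pipeline = arr_target * 3.5            # Jennifer's stated 3.5x coverage
shortfall = required_pipeline - current_pipeline

print(f"required coverage ≈ {required_coverage:.1f}x, "
      f"pipeline shortfall ≈ ${shortfall / 1e6:.0f}M "
      f"({shortfall / required_pipeline:.0%} light)")   # ~4.1x, ~$48M, ~15%
```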
[JENNIFER] [pauses] Okay, so I'm under-pipelined. That's... not great. [laughs] But here's my concern. If we load up pipeline to 4.1x coverage, how much of that is quality? I don't want to be at 4.1x with a bunch of garbage opportunities that aren't real. Better to be at 3.5x with highly qualified deals than 4.1x with deal dust.
[MARCUS] Absolutely fair. So let's talk about stage definitions. How are you defining your pipeline stages today?
[JENNIFER] We have discovery, qualification, proposal, negotiation, and closed-won or closed-lost. Pretty standard. But here's the issue—discovery for one rep might be two meetings, and for another rep, it might be four or five. Same with qualification. So when I look at a deal in discovery, I don't actually know how much progress has been made. It's just... it's in discovery.
[MARCUS] Do you have stage entry and exit criteria?
[JENNIFER] We should. [laughs] I know we should. But our stage definitions are pretty loose. I've been wanting to tighten them up. Like, discovery shouldn't just be "we've had some conversations"—it should be "we've identified three to five key problems the customer is trying to solve, we've met with at least two stakeholders, and we have a clear understanding of their timeline." But that's my definition in my head. I'm not sure if all my managers and reps have the same definition.
[MARCUS] So that's actually a huge driver of forecast accuracy. If stage definitions are loose, deals move between stages inconsistently, and your forecast is basically guessing. How accurate is your current forecast?
[JENNIFER] [sighs] Honestly? We miss our number probably 20% of the time. Sometimes we hit, sometimes we're close, sometimes we're significantly off. It's been better the last couple of quarters, but it's still not predictable. We're not a nightmare—we don't miss by 30%—but we're off by $5 to 15 million pretty regularly.
[MARCUS] Out of $80 million, that's 6-18% variance. That's significant for board forecasting and planning.
[JENNIFER] Yeah, and the board definitely notices. [laughs] Finance wants to predict revenue within 2-3%, and we're nowhere near that. The problem is, I don't have visibility into why deals slip or why we're off. Is it that we're putting deals in the wrong stage too early? Is it that sales cycles are longer than we think? Is it that our win rates are lower than our historical data suggests? Without good data, I'm just guessing.
[MARCUS] Have you looked at your historical conversion rates stage-by-stage?
[JENNIFER] I have, actually. We know that if a deal makes it to proposal in our mid-market segment, we close it about 65% of the time. Negotiation is 85%. So there's pretty high confidence once we get to proposal. The variability is earlier in the funnel. We might think we have ten deals in qualification, but only four of them actually convert to proposals.
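A quick funnel check using Jennifer's mid-market numbers shows where the leakage sits; treating the win rate as measured from qualification is an assumption, not something stated on the call.

```python
qual_to_proposal = 4 / 10       # "ten deals in qualification, only four convert"
proposal_to_close = 0.65        # stated mid-market proposal-to-close rate
negotiation_to_close = 0.85     # stated; late-stage confidence is already high

qual_to_close = qual_to_proposal * proposal_to_close
print(f"qualification-to-close ≈ {qual_to_close:.0%}")  # ≈ 26%, near the 28% mid-market win rate
```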
[MARCUS] So your issue is really around deal qualification and stage progression earlier in the funnel.
[JENNIFER] Yes. And I think part of that is that our reps are being optimistic. Or they don't have clear qualification criteria. Or they're putting deals in the pipeline too early because they're under pressure. I'm not sure which, but the gap between qualification and proposal is where the leakage is.
[MARCUS] What if you could see, in real-time, every deal in qualification and understand—based on conversation intelligence—whether that deal actually meets your qualification criteria? Whether the customer has a budget, has urgency, has identified the problem, whatever your criteria are?
[JENNIFER] That would be game-changing. Because then I could intervene early. If a rep has ten deals in qualification and only four of them are actually qualified based on objective data, I can coach them on the qualification conversations, or I can help them cull the pipeline and focus on the real opportunities. That's efficiency. That's forecast accuracy.
---
[21:30-26:45] REP TIME ALLOCATION & CRM HYGIENE
[MARCUS] One more big topic—rep time allocation. How much time do you think your reps are spending on actual selling versus admin?
[JENNIFER] [immediately] Too much on admin. [laughs] I wish I had exact numbers, but based on feedback from the team, I'd guess it's something like 40% on calls and meetings, maybe 30% on CRM updates and deal work, 20% on internal meetings, and 10% on other stuff. So not even half their time is direct selling.
[MARCUS] That's pretty typical, actually, but is that where you want them?
[JENNIFER] No, it's not. I'd like to see 50-55% on customer conversations, 20% on CRM and deal work, 15% on internal meetings, and 10% other. That extra 10% of selling time, if we could capture it across our team, is significant revenue. But the problem is, a lot of the admin stuff feels necessary. CRM updates are a compliance thing. Internal meetings are necessary for alignment. What am I supposed to cut?
[MARCUS] What's your CRM adoption like today?
[JENNIFER] We're at about 70-75% compliance on deal data. So not great. And what I've noticed is that the worst reps at CRM hygiene are also the worst reps overall. It's correlated. There's something about discipline and organization in CRM that shows up in their selling discipline. And the top performers are usually best-in-class on CRM.
[MARCUS] That's an interesting observation. So if you could improve CRM compliance to 90%, what would that enable?
[JENNIFER] Better visibility, for sure. I'd be able to see the true state of the pipeline in real-time instead of having to bug managers about what's actually in deals. I'd be able to understand where reps are spending time. And I'd probably improve forecast accuracy another 5-10%. But here's the issue—I can't just mandate better CRM usage. Reps will resist. They'll see it as busywork. How do I make them want to be good at CRM?
[MARCUS] That's a culture question. But what if CRM updates were automatic? What if the system was pulling in call recordings, email, meeting notes, and proposal data, and using that to auto-populate deal fields? So instead of a rep manually logging notes, they're just reviewing what the system captured and correcting it?
[JENNIFER] [perks up] That would be different. That's not busywork; that's having accurate information. If the system can listen to a call, capture that we discussed budget constraints, that we have three decision-makers, that they want a pilot before full implementation—and the rep just reviews and confirms—that's a win. That feels less like admin and more like data accuracy.
[MARCUS] Have you thought about approval workflows? Do you have anything around deal reviews or manager approvals before reps move deals?
[JENNIFER] Not formally. Our managers do periodic deal reviews with reps—maybe once a month for each rep—and we'll coach on specific deals. But there's no formal gate where a rep has to get approval to move a deal from qualification to proposal, for example. I've been thinking about implementing that, but I was worried it would slow things down.
[MARCUS] Does it slow things down, or does it prevent the bigger slowdown? [pauses] Like, if a rep is moving a deal into proposal that's not actually qualified, and then it stalls for three months, you've already lost the velocity.
[JENNIFER] Yeah, that's a fair point. Structured deal reviews with actual criteria might improve velocity by helping reps focus on the right opportunities earlier. I haven't thought about it that way.
[MARCUS] What would your approval criteria be? Like, what has to be true about a deal for a rep to move it from qualification to proposal?
[JENNIFER] [thinking aloud] We'd need to confirm: one, the customer has identified a real business problem we solve; two, they have a budget either confirmed or very likely; three, there's urgency—they want to solve this within the next two quarters; four, we've met at least the champion and ideally one economic buyer; and five, we understand the competition and our win strategy. If all of that is true, it's proposal-ready.
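One way to picture the gate Jennifer just described: a minimal sketch of a qualification-to-proposal checklist. The field names are hypothetical, not Velocity's actual CRM schema.

```python
# The five advancement criteria Jennifer lists, expressed as a simple gate.
PROPOSAL_GATE = [
    "problem_identified",                 # real business problem we solve
    "budget_confirmed_or_likely",
    "urgency_within_two_quarters",
    "champion_and_economic_buyer_met",
    "competition_and_win_strategy_known",
]

def proposal_ready(deal: dict) -> bool:
    """True only if every gate criterion is marked true on the deal."""
    return all(deal.get(criterion, False) for criterion in PROPOSAL_GATE)

# Example: a deal missing the economic buyer stays in qualification.
deal = {c: True for c in PROPOSAL_GATE}
deal["champion_and_economic_buyer_met"] = False
print(proposal_ready(deal))   # False
```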
[MARCUS] And if your managers were spending 10-15 minutes reviewing each deal for those five criteria before it advanced, how much better do you think your forecast accuracy would be?
[JENNIFER] Significantly. Because we'd be preventing deals from stalling or moving sideways. We'd be forcing conversations earlier about whether a deal is real. And if a rep isn't meeting those criteria, their managers can coach them on the qualification conversation.
---
[26:50-31:20] ROI ON SALES TECH STACK & DECISION CRITERIA
[MARCUS] Let's talk about your tech stack and ROI. You're running Salesforce, Gong, Outreach, HubSpot for some marketing integration. That's a decent stack. But how do you think about ROI on these tools?
[JENNIFER] [laughs] I don't, really. We bought them because we needed CRM, because I wanted visibility, because we wanted automation. But I've never done a rigorous ROI calculation. That's partly my fault. I should know this stuff.
[MARCUS] It's actually more common than you'd think. Sales leaders buy tools for specific pain points but don't measure the impact. So let's think about it. What would ROI look like for you? How would you measure it?
[JENNIFER] Well, for Salesforce, I guess the ROI is forecast accuracy, pipeline visibility, and rep productivity. Hard to quantify exactly, but if we go from 18% variance to 5% variance, that's worth something to finance. If we improve ramp time from 4 months to 3 months, that's directly measurable—more quota-carrying reps faster. If we free up 10% of rep time from admin, and they use that to sell, we probably pick up 8-10% in production. So the ROI on Salesforce is probably... I don't know, $5-10 million a year in incremental revenue? Roughly?
[MARCUS] And what's Salesforce costing you?
[JENNIFER] Including the admin we pay for—maybe $400,000 a year all-in.
[MARCUS] So if you're getting $5-10 million in incremental revenue, and that's flowing through to gross margin, what's your gross margin?
[JENNIFER] About 70%.
[MARCUS] So that's $3.5 to $7 million in incremental gross margin on a $400,000 investment. That's 9-17x ROI. [pauses] That's actually really good. But you haven't quantified it.
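The same back-of-envelope ROI Marcus runs aloud, written out; the inputs are the figures quoted on the call.

```python
incremental_revenue_low, incremental_revenue_high = 5_000_000, 10_000_000  # Jennifer's estimate
gross_margin = 0.70
annual_cost = 400_000   # Salesforce licenses plus admin, all-in

roi_low = incremental_revenue_low * gross_margin / annual_cost     # 8.75x
roi_high = incremental_revenue_high * gross_margin / annual_cost   # 17.5x
print(f"ROI ≈ {roi_low:.1f}x to {roi_high:.1f}x")   # the "9-17x" Marcus quotes
```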
[JENNIFER] [laughs] I mean, we know Salesforce is critical. I wasn't questioning its value. I was more saying I haven't rigorously measured whether we're getting all the value we could out of it. We might be leaving ROI on the table if we're not using it fully.
[MARCUS] Fair. So let me ask a different question. If I told you there was a tool that would improve forecast accuracy by 10%, reduce ramp time by 15%, and improve rep productivity by 8%, what would be a reasonable price for that?
[JENNIFER] [thinking] Well, 10% improvement on forecast accuracy is probably worth $2-3 million in operating leverage to finance. 15% improvement in ramp time is probably worth $1.5-2 million in incremental revenue. And 8% improvement in rep productivity is probably worth $3-4 million. So theoretically, a tool that delivers all three could be worth $6-9 million a year. I'd be willing to spend up to maybe $500,000 to $1 million a year on something that credibly delivered half of that.
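Jennifer's value stack for a hypothetical tool, summed up as a minimal sketch; the dollar ranges are her on-call estimates, not measured outcomes.

```python
value_drivers = {
    "forecast_accuracy_+10%": (2_000_000, 3_000_000),
    "ramp_time_-15%":         (1_500_000, 2_000_000),
    "rep_productivity_+8%":   (3_000_000, 4_000_000),
}

total_low = sum(low for low, _ in value_drivers.values())
total_high = sum(high for _, high in value_drivers.values())
print(f"modeled value ≈ ${total_low / 1e6:.1f}M-${total_high / 1e6:.0f}M per year")  # ≈ $6.5M-$9M

# Her buying logic: pay $0.5-1M per year if the tool credibly delivers about half of this.
```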
[MARCUS] Okay, that's a helpful framework. Now, the challenge is, how do you know it actually delivers that? Because that's where a lot of tool purchases go wrong. The vendor promises it, but the actual lift is half of that.
[JENNIFER] Yeah, and we haven't tried anything new in sales stack in about 18 months. We've been focused on implementation and adoption of what we have. But we're starting to think about what's next. And honestly, the biggest gaps are playbook consistency, forecast accuracy, and rep quality improvement. If there's a tool that helps us with those three things, I'm listening.
[MARCUS] What would success look like for you over the next 12 months? If you were to tell me, "This was a good year," what would that be?
[JENNIFER] Hit our $95 million ARR target. Scale the team to 95 reps and have 70% of them ramped and quota-carrying. Get our forecast accuracy to within 5% of target. Develop a documented, scalable playbook that we can teach to new reps and that drives consistent selling motions. And if we're being ambitious, improve rep productivity by 10-15% so we're getting more revenue per rep.
[MARCUS] That's all very doable. And what's the biggest blocker to any of those?
[JENNIFER] [pauses] Probably team capacity and infrastructure. I have one manager helping me with coaching. I don't have a training person. I don't have infrastructure for playbook development. We're running on fumes operationally. So I can push the team and get some of these things done, but I'm not going to do them well unless we invest in the infrastructure. And that might be tools, it might be people, probably both.
[MARCUS] So if we could move the needle on playbook development, rep quality improvement, and forecast accuracy with better infrastructure and tools, that would make a huge difference for you operationally.
[JENNIFER] Absolutely. Because then I'm not trying to do everything myself. I'm not the bottleneck on playbook. I'm not personally coaching 95 reps. The system is doing some of that heavy lifting, and I'm focusing on strategy and exceptional coaching.
---
[31:20-32:15] CLOSING & SUMMARY
[MARCUS] Alright, I don't want to take up too much more of your time. But before I go, I just want to confirm the key things I'm hearing. You're scaling from 70 to 100 reps, you're worried about playbook consistency, you don't have great visibility into what your top performers are actually doing, your forecast variance runs as high as 18%, and operationally, you're at capacity. Is that about right?
[JENNIFER] Yeah, that's a good summary. And all of those things are interconnected, right? If I could capture top performer behaviors, I'd have a playbook. If I had a playbook, I could train better and onboard faster. If I'm onboarding faster and better, reps are hitting quota earlier. If reps are hitting quota, my forecast is more predictable. It's all connected.
[MARCUS] Exactly. And the tools and infrastructure to do that—conversation intelligence, structured onboarding, playbook management, deal review workflows—that's where we can help. I'll send you some information on how we've helped other similar-size companies in your segment. And then maybe we schedule a follow-up where we can do a deeper dive on pipeline and forecast modeling?
[JENNIFER] Yeah, absolutely. I'm interested, especially on the playbook capture side. That's the thing that would move the needle the most for us. The rest we're managing okay, but playbook and consistency is the gap.
[MARCUS] Perfect. I'll send that over, and let's find time in the next week or two.
[JENNIFER] Sounds good. Thanks, Marcus. This was helpful.
[MARCUS] Thanks, Jennifer. Great conversation.
---
## Key Takeaways Summary
**Interview Insights & Opportunity Assessment**
### Critical Pain Points Identified
1. **Playbook Inconsistency**: With roughly 40 new reps onboarded in the last 18 months (30 to just under 70 heads), selling motions are inconsistent across the team. Top-performer behaviors (e.g., Sarah at a 32% close rate) are undocumented and not yet systematized. This is the primary blocker to scaling effectively.
2. **Forecast Accuracy Gap**: Forecast variance runs as high as 18% ($5-15M misses on an $80M base) vs. finance's 2-3% target, driven by loose stage definitions and inconsistent qualification criteria. Deals entering qualification lack clear advancement criteria, causing leakage and stalled deals.
3. **Onboarding Velocity**: A 3-4 month ramp with 75% first-year retention leaves roughly $1.2M per 10-rep cohort on the table (55% vs. 65% of reps hitting quota in year one). Compressing ramp to a consistent 12 weeks compounds that gain across larger cohorts (30 hires planned next year).
4. **Pipeline Coverage Misalignment**: Running 3.5x coverage against a calculated requirement of ~4.1x implies roughly $45-50M of pipeline shortfall (15-20% light), or an estimated $12-15M of revenue at risk annually at the blended win rate. Concern about quality vs. quantity is valid but addressable with tighter stage definitions.
5. **Rep Time Allocation**: Only 40% of rep time is direct selling (target: 50-55%). CRM hygiene at 70-75% is correlated with rep performance; automatic capture would unlock adoption without perceived busywork.
### Business Impact Opportunity
- **Revenue Impact**: Playbook + onboarding optimization = $2-3M incremental ARR within 12 months
- **Productivity Gains**: 8-10% improvement in rep productivity with better tools and processes
- **Forecast Improvement**: Structured deal reviews + conversation intelligence could improve accuracy from 18% to 5% variance
- **Hiring Quality**: Correlating assessment and simulation data with performance outcomes could improve hiring success from ~70% to 85%, supporting the shift from 55% to 65% of reps hitting quota in year one
### Stakeholder Readiness & Budget Authority
- **Budget Authority**: VP Sales indicated willingness to spend $500K-$1M/year on a tool that credibly delivers roughly half of the modeled $6-9M annual value; finance and board pressure on forecast accuracy adds leverage
- **Urgency**: High operational need; the scaling roadmap (70 to ~100 reps by end of next year) depends on solving playbook and ramp issues ahead of the next hiring wave
- **Primary Interest**: Conversation intelligence + playbook capture is the single highest-value lever
### Recommended Next Steps
1. Conduct pipeline analysis to confirm 4.1x coverage requirement and quantify opportunity cost
2. Build ROI model showing playbook impact on ramp time and team productivity
3. Propose conversation intelligence pilot with top 5 reps to establish baseline metrics
4. Define stage criteria framework and deal review workflow for VP Sales approval