Sprint Planning with Capacity Data
The number one reason sprints fail is over-commitment. Teams look at their backlog, feel optimistic, and pull in 20% more work than they can realistically finish. Then they spend the last three days of the sprint scrambling, cutting corners on testing, and carrying stories into the next sprint. This is not a people problem. It is a planning problem.
Capacity-based planning replaces optimism with math. Here is the exact process I use with every team.
The Capacity Formula
Step 1: Calculate Available Hours
For each team member: (working days in sprint) × (hours per day) − (PTO hours) − (recurring meeting hours) − (support rotation hours)
Step 2: Apply the Focus Factor
Multiply available hours by 0.6 to 0.7. This accounts for context switching, Slack conversations, code reviews, unplanned questions, and the general overhead of being a human. Teams that assume 100% focus consistently over-commit. Even 80% is optimistic for most environments.
Step 3: Map to Story Points
Use your team's historical data: if you completed 40 story points in the last 3 sprints with an average capacity of 280 focus hours, your ratio is roughly 7 focus hours per story point. This sprint, if your total focus hours are 250, plan for about 36 points.
Step 4: Reserve a Buffer
Subtract 15-20% for unplanned work. Production bugs, urgent requests, environmental issues. If your team historically encounters zero unplanned work, congratulations on your fictional team. Everyone else: build the buffer.
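The four steps above can be chained into a single calculation. A minimal sketch with an invented three-person team; all hours, the 0.65 focus factor, and the 7-hours-per-point ratio are illustrative values, not prescriptions:

```python
# Hypothetical capacity calculation for a 2-week sprint.
# Team members and hours are made up for illustration.

FOCUS_FACTOR = 0.65      # Step 2: assume 60-70% of available hours are focus time
HOURS_PER_POINT = 7      # Step 3: derived from your own historical velocity
BUFFER = 0.15            # Step 4: reserve 15-20% for unplanned work

def available_hours(working_days, hours_per_day, pto, meetings, support):
    """Step 1: raw available hours for one team member."""
    return working_days * hours_per_day - pto - meetings - support

# Example team of three (10 working days, 8h/day)
team = [
    available_hours(10, 8, pto=0,  meetings=8, support=0),   # 72h
    available_hours(10, 8, pto=16, meetings=8, support=0),   # 56h, two days PTO
    available_hours(10, 8, pto=0,  meetings=8, support=10),  # 62h, support rotation
]

focus_hours = sum(team) * FOCUS_FACTOR
plan_points = (focus_hours / HOURS_PER_POINT) * (1 - BUFFER)

print(f"Focus hours: {focus_hours:.0f}")        # 124
print(f"Plannable points: {plan_points:.1f}")   # 15.0
```

The point is not the specific constants; it is that every input is visible and arguable, which is exactly what "replacing optimism with math" means in practice.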
Running the Planning Session
Sprint planning should take no more than 2 hours for a 2-week sprint. If it regularly takes longer, the problem is almost always insufficient backlog refinement. Stories are entering planning unestimated, with vague acceptance criteria, and the team is doing refinement work during planning.
The structure that works:
- Sprint goal (15 min): The PO presents the sprint goal. Not a list of stories, a single coherent objective. "Users can complete the new onboarding flow end to end" is a sprint goal. "Complete tickets PROJ-123 through PROJ-142" is a to-do list.
- Capacity review (10 min): Share the capacity calculation. Everyone confirms their availability. Flag any PTO, on-call rotations, or external commitments.
- Story selection (45 min): Pull stories from the refined backlog in priority order until you reach capacity. For each story, confirm acceptance criteria and identify any dependencies. If a story is not well-understood, it does not enter the sprint.
- Task breakdown (30 min): Optional but helpful for newer teams. Break each story into technical tasks. This surfaces hidden complexity and improves estimates. Mature teams often skip this.
- Commitment (10 min): The team explicitly commits to the sprint goal and the selected stories. This is not the PM committing. The team commits. This matters for ownership.
Real-World Insight
I track a metric called Sprint Predictability: the percentage of committed story points completed. A healthy team scores 80-90%. Below 70% consistently means your planning process is broken. Above 95% consistently usually means you are undercommitting and sandbagging. The goal is not 100% completion. The goal is honest, predictable commitment that stakeholders can rely on.
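The Sprint Predictability metric and its thresholds can be expressed directly. A sketch with hypothetical sprint data; the 70% and 95% cutoffs come from the text above:

```python
# Sprint Predictability: percentage of committed story points completed.
# The history data below is invented for illustration.

def predictability(committed, completed):
    return 100.0 * completed / committed

def diagnose(pct):
    if pct < 70:
        return "planning process is broken"
    if pct > 95:
        return "likely under-committing"
    return "healthy"

history = [(40, 34), (38, 33), (42, 36)]  # (committed, completed) per sprint
for committed, completed in history:
    pct = predictability(committed, completed)
    print(f"{pct:.0f}% - {diagnose(pct)}")
```

Tracking this per sprint, rather than eyeballing it, is what makes the "consistently below 70%" conversation possible with data instead of anecdotes.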
Running Effective Standups
If your standup is a round-robin where each person recites what they did yesterday, what they are doing today, and whether they have blockers, you are running the most common and least effective standup format in the industry. That format turns a coordination meeting into a status meeting. And status meetings are what Jira is for.
The Walk-the-Board Method
Instead of going person by person, walk the board right to left. Start with items closest to done. This shifts the team's focus from "what am I working on?" to "what can we finish today?"
The facilitation pattern:
- Start in the "In Review" or "QA" column. Who can review this PR today? Is the test environment available? What is blocking this from moving to done?
- Move to "In Progress." Are you on track to finish today? Do you need to pair with anyone? Any unexpected complexity?
- Check "Blocked" items. What is the blocker? Who owns resolving it? When will it be resolved? If nobody knows, the PM takes an action item.
- Only then look at "To Do." Does anyone need to pull new work? Are priorities clear?
What to Watch For as the PM
The standup is your daily diagnostic tool. You are not just listening to updates. You are pattern-matching:
- Stale items: If a card has been in the same column for three days, something is wrong. Ask about it. It might be blocked, deprioritized, or someone is stuck but not admitting it.
- Work without tickets: "I was working on some tech debt yesterday." If it is not on the board, it does not exist for planning purposes. Either add it or stop doing it.
- Dependencies forming: "I need the API from Team B before I can finish this." This is a risk. Log it. Track it. Escalate it if needed.
- Scope creep signals: "I found some extra edge cases." This often means the story was under-specified. Note it for the retrospective.
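Some of these patterns can be checked mechanically against board data before standup even starts. A minimal sketch, assuming each card is a dict with a ticket id, column, and days in its current column (the field names and tickets are invented); the three-day staleness threshold is from the text:

```python
# Flag standup warning signs from board data.
# Card structure and field names are assumptions for illustration.

STALE_DAYS = 3

cards = [
    {"id": "PROJ-101", "column": "In Progress", "days_in_column": 1},
    {"id": "PROJ-102", "column": "In Review",   "days_in_column": 4},
    {"id": "PROJ-103", "column": "Blocked",     "days_in_column": 2,
     "blocker_owner": None},
]

# Stale items: same column for three days or more
stale = [c["id"] for c in cards if c["days_in_column"] >= STALE_DAYS]

# Blocked items with no named owner for resolving the blocker
unowned = [c["id"] for c in cards
           if c["column"] == "Blocked" and not c.get("blocker_owner")]

print("Stale:", stale)              # ['PROJ-102']
print("Unowned blockers:", unowned) # ['PROJ-103']
```

A script like this does not replace the conversation; it tells you which cards to ask about first.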
The 10-Minute Rule
If your standup takes more than 10 minutes with a team of 7, something is wrong. Common causes: too many items in progress (enforce WIP limits), too many detailed discussions (take them offline), or too many people in the room (stakeholders should observe, not participate). I literally set a timer. When it goes off, the standup is over. Anything unresolved becomes a follow-up conversation with the relevant people.
Managing Sprint Scope and Blockers
Mid-Sprint Scope Creep
Scope creep is not a planning failure. It is an information failure. New requirements appear because someone learned something new. The question is not how to prevent it, it is how to manage it without destroying the sprint commitment.
My framework for handling mid-sprint additions:
- Size it immediately. When someone brings a new request, the first question is: how big is it? If it is less than half a day of work and is genuinely urgent, it might fit in the current sprint. Anything larger is a next-sprint conversation.
- Apply the swap rule. If a new item enters the sprint, an equivalently sized item must leave. This is non-negotiable. You cannot add work without removing work. The PO decides what gets swapped. If the PO cannot decide what to remove, the new item is not actually a priority.
- Protect the sprint goal. Any addition that threatens the sprint goal gets pushed back. The sprint goal is the team's contract with the organization. Breaking that contract mid-sprint erodes trust and predictability.
- Track scope changes. Log every mid-sprint addition in a visible place. At the retrospective, review the pattern. If you are adding 3-5 items every sprint, the problem is upstream: requirements are not being gathered early enough, or priorities are not being set firmly enough.
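The swap rule is simple enough to state as code. A sketch treating sprint items as (ticket, points) pairs; the tickets and sizes are invented:

```python
# The swap rule: a new item may only enter the sprint if an item of
# at least equal size leaves. Tickets and point values are illustrative.

def swap_in(sprint, new_item, removed_item):
    """Enforce 'something in, something out' with comparable size."""
    if removed_item not in sprint:
        raise ValueError("must remove an item currently in the sprint")
    if new_item[1] > removed_item[1]:
        raise ValueError("incoming item is larger than the item removed")
    sprint.remove(removed_item)
    sprint.append(new_item)
    return sprint

sprint = [("PROJ-201", 5), ("PROJ-202", 3), ("PROJ-203", 2)]
swap_in(sprint, new_item=("HOTFIX-9", 3), removed_item=("PROJ-202", 3))
print(sprint)  # the hotfix replaced a story of equal size; total points unchanged
```

The useful part is the second ValueError: it forces the "what comes out?" conversation before the new work is accepted, which is the whole point of the rule.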
Handling Blockers: Escalation vs. Self-Resolution
Not every blocker needs escalation. The PM who escalates everything to leadership looks like they cannot solve problems. The PM who escalates nothing looks like they are hiding problems. The key is calibration.
Resolve Yourself
- Environment issues: coordinate with DevOps, do not wait for someone else to notice.
- Intra-team dependencies: facilitate a pairing session or reorder work.
- Unclear requirements: pull the PO into a 10-minute conversation. Now, not tomorrow.
- Access permissions: submit the request yourself instead of asking the engineer to navigate bureaucracy.
Escalate to Leadership
- Cross-team dependencies where the other team has different priorities and will not reprioritize without leadership direction.
- Resource constraints: you need a specialist who is allocated to another project.
- Vendor/third-party blocks: SLA violations, unresponsive partners.
- Organizational blockers: policy, process, or approval bottlenecks that you cannot influence directly.
When you do escalate, use this format: "Here is the problem. Here is the impact if it is not resolved by [date]. Here is what I have already tried. Here is the specific decision or action I need from you." Leaders want crisp problem statements and clear asks. They do not want a paragraph of context and a vague "what should we do?"
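The four-part escalation format can be kept as a fill-in-the-blank template so every escalation has the same shape. A sketch; all example values are invented:

```python
# Template for the four-part escalation format above.
# The example values filled in are hypothetical.

ESCALATION = (
    "Problem: {problem}\n"
    "Impact if not resolved by {date}: {impact}\n"
    "Already tried: {tried}\n"
    "Ask: {ask}"
)

message = ESCALATION.format(
    problem="Team B's API contract is unfinished, blocking 3 of our stories",
    date="Friday",
    impact="we miss the sprint goal and slip the release a week",
    tried="two direct conversations with Team B's lead; they cannot reprioritize",
    ask="a decision on whether Team B reprioritizes this week",
)
print(message)
```

Forcing every escalation through the same four fields is what keeps the message a crisp ask instead of a paragraph of context.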
Sprint Reviews and Retrospectives
Making Sprint Reviews Exciting for Stakeholders
Most sprint reviews are boring. A developer shares their screen, clicks through a feature in a staging environment while stakeholders nod politely, and everyone leaves. That is not a review. That is a demo nobody asked for.
Here is how to make sprint reviews the meeting stakeholders actually look forward to:
- Start with the sprint goal outcome (5 min). Did we achieve the goal? Use a simple red/yellow/green status. Show the metric if you have it. "Goal was to reduce checkout errors. We deployed the fix on day 6. Checkout errors dropped 40% in the last 4 days." Stakeholders care about results, not effort.
- Demo from the user's perspective (15 min). Do not demo the feature. Demo the user journey. Walk through the scenario a customer would experience. Use production data if possible, staging if not. Let the developer who built it tell the story: "Before this change, users had to go through 5 screens. Now it is 2 screens and we pre-fill their data."
- Solicit specific feedback (10 min). Do not ask "any feedback?" That gets silence. Ask specific questions: "We chose to auto-save instead of requiring a save button. Does that match how your users think about this workflow?" Directed questions get useful answers.
- Preview what is next (5 min). End with a teaser for next sprint. This builds anticipation and ensures stakeholders see the trajectory, not just the snapshot.
Running Retrospectives That Produce Real Change
I have facilitated hundreds of retrospectives. The ones that produce change share three characteristics: psychological safety, specificity, and accountability.
The Retrospective Formula
Check in with the team. Use a one-word mood check or a 1-5 energy rating. This establishes that feelings matter and creates permission to be honest.
Use the format that matches your team's maturity. New teams: Start/Stop/Continue. Mature teams: 4Ls (Liked, Learned, Lacked, Longed For). Complex sprints: timeline retrospective mapping events to emotions. Everyone writes independently first, then shares. This prevents groupthink.
Cluster similar items. Dot-vote to prioritize. Look for root causes, not symptoms. "We had too many bugs" is a symptom. "We skipped code review on the last two days because we were rushing to complete the sprint" is the root cause.
Pick a maximum of 2 action items. Each must have an owner and a due date. Add them to the sprint backlog as first-class items. If retro actions are not tracked with the same rigor as feature work, they will not happen.
Review last sprint's action items first. Did they happen? Did they help? This creates accountability. If the team sees that retro actions consistently get done and make a difference, they will invest more in the process.
The Retrospective Anti-Pattern
The most common retro failure: generating 15 action items and completing zero. This happens when the team tries to fix everything at once. After three sprints of ignored action items, the team stops believing the retro matters. Fix this by enforcing the 2-item limit ruthlessly. Two items completed is infinitely more valuable than fifteen items ignored. At two items per sprint across a year of two-week sprints (24 to 26 sprints), that is 48 to 52 completed improvements. That compounds into a fundamentally different team.
Key Takeaways
1. Sprint planning should use capacity math, not gut feeling. Calculate available hours, apply a 60-70% focus factor, map to story points using historical data, and reserve 15-20% for unplanned work.
2. Walk the board right to left in standups. Focus on finishing work, not starting work. Keep it under 10 minutes. Anything that needs a discussion gets taken offline.
3. Mid-sprint scope changes follow the swap rule: something in, something out. Track all scope changes and review the pattern in retrospectives.
4. Escalate blockers when you cannot resolve them with your own authority. Always present the problem, impact, what you have tried, and the specific ask.
5. Retrospectives work when you limit action items to 2, assign owners, track completion, and review previous actions first. Consistency over ambition.