You're in a sprint planning meeting, and someone asks the dreaded question: "So, do we think we can deliver the user dashboard feature by the end of next month?" The room falls silent. Eyes dart around. Someone eventually mumbles, "Yeah, probably?" And that's how your team commits to a deadline based on nothing more than collective optimism.

If this sounds familiar, you're not alone. I've worked across many different software engineering teams, and I've seen the full spectrum of planning approaches - from rigorous methodologies that actually work to complete chaos masquerading as "agile flexibility."

The Great Planning Divide

In my experience, most teams fall into one of two camps when it comes to software planning, each with passionate advocates who swear their approach is superior.

The Kanban Purists

Some people are absolutely convinced that Kanban is the way forward. The philosophy is elegantly simple: break work down into the smallest possible stories, then measure everything that matters - story throughput, time to completion, lead time before a story even gets picked up. Once you have these metrics, planning becomes mathematical rather than mystical.

The beauty of this approach is its rejection of artificial constraints. No sprints, no arbitrary two-week windows, no forcing N story points into predetermined timeboxes. Instead, you ask: "This feature consists of 12 stories, we complete 3 stories per week on average, therefore this feature will take about 4 weeks." Clean, data-driven, predictable.
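
To make that arithmetic concrete, here's a minimal sketch of the throughput forecast. The function name and the sample numbers are mine, purely for illustration.

```python
from math import ceil

def forecast_weeks(remaining_stories: int, weekly_throughput: list[int]) -> int:
    """Forecast delivery time from historical weekly story throughput."""
    average = sum(weekly_throughput) / len(weekly_throughput)
    return ceil(remaining_stories / average)

# The example above: 12 stories remaining, roughly 3 completed per week.
print(forecast_weeks(12, [3, 2, 4, 3]))  # -> 4
```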

I've seen this work brilliantly when implemented properly. Teams develop a steady rhythm, stakeholders get reliable delivery estimates, and there's no artificial pressure to fit work into sprint boundaries that might not make sense for the actual work being done.

The Scrum Believers

On the other side are teams that swear by Scrum's more structured approach. Here, you still break work into stories, but you also size them - whether through t-shirt sizing (S, M, L, XL) or a Fibonacci scale (1, 2, 3, 5, 8, 13…). The team measures velocity - how many points they deliver per sprint - and uses this to inform capacity planning.

The sprint becomes a commitment mechanism. At the beginning of each sprint, the team collectively decides which work to pull in based on their historical velocity. If they typically deliver 20 points per sprint, they'll aim to commit to roughly 20 points worth of stories.
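
As a toy sketch of that commitment step (the story names, sizes, and greedy selection are all invented for illustration - a real team also weighs priority and dependencies):

```python
def plan_sprint(backlog, velocity):
    """Pull stories from a prioritised backlog until the next one would
    push the commitment past the team's historical velocity."""
    committed, points = [], 0
    for name, size in backlog:
        if points + size > velocity:
            break
        committed.append(name)
        points += size
    return committed, points

backlog = [("login form", 5), ("password reset", 3),
           ("dashboard widget", 8), ("settings page", 5)]
print(plan_sprint(backlog, velocity=20))
# -> (['login form', 'password reset', 'dashboard widget'], 16)
```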

The burndown chart becomes your early warning system. In a perfect world, it's a straight line trending steadily down to zero as work gets completed. When reality deviates from that ideal, you know something's going wrong before the sprint ends.
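
Here's a sketch of what that early warning amounts to - the 10% tolerance and the numbers are arbitrary, but the comparison against the ideal line is the whole idea:

```python
def behind_schedule(total_points, sprint_days, remaining_by_day, tolerance=0.1):
    """Compare remaining work against the ideal straight-line burndown and
    flag the sprint if it trails by more than `tolerance` of total points."""
    day = len(remaining_by_day)                       # days elapsed so far
    ideal_remaining = total_points * (1 - day / sprint_days)
    return remaining_by_day[-1] > ideal_remaining + tolerance * total_points

# Day 5 of a 10-day, 20-point sprint: ideally 10 points remain, but 16 do.
print(behind_schedule(20, 10, [20, 19, 18, 17, 16]))  # -> True
```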

The Agile Confusion

Here's where things get interesting, and where I see a fundamental misunderstanding that plagues many teams. Lots of people think that if they're doing Scrum or Kanban, they're doing Agile. But this is so wrong.

Agile is the Agile Manifesto. It's this:

  • Individuals and interactions over processes and tools
  • Working software over comprehensive documentation
  • Customer collaboration over contract negotiation
  • Responding to change over following a plan

It's important not to conflate the two. The manifesto is your guiding philosophy; Scrum and Kanban are just tools that might help you embody those principles.

People should actually read the Agile Manifesto and use it as their guide. From there, if they prefer Scrum or Kanban, that's fine - pick what works for you! But pick with intention: understand what you're choosing and why, not just "because it's Agile." Use these methodologies as tools to help you answer questions about your team's work, not as rigid frameworks you must follow because someone told you they're "Agile."

Both Scrum and Kanban approaches, when done thoughtfully, share a crucial insight: you need some systematic way to answer fundamental questions that stakeholders will inevitably ask.

The Questions That Matter

Whether you choose Kanban, Scrum, or some hybrid, your planning process needs to help you answer:

  • Will we deliver feature X on time?
  • Are we generally on track? If not, what happened?
  • Do we need to cut scope to meet our commitment? Which work should we deprioritise?
  • How can we learn from this to improve future estimates?

What absolutely doesn't work is having… nothing. No process, no measurement, no systematic approach to understanding your team's capacity or delivery patterns.

When Good Intentions Meet Poor Execution

This brings up an interesting question about ownership. Whose job is it to drive this planning process? The engineering manager? Senior engineers? A dedicated scrum master?

I've seen successful teams organise this responsibility in different ways. Sometimes engineers step up to wear the scrum master hat, driving process in collaboration with their manager. Sometimes managers take full ownership, reaching out to engineers for estimates and insights. Sometimes it's delegated entirely to the team.

What I've learned is that there's no universally "right" approach - but there are definitely wrong ones.

I experienced this firsthand on a team that was sizing stories with time estimates but putting zero effort into understanding capacity. We knew a story would take "3 days," but we had no systematic way of knowing who would work on it or whether we actually had 3 days of availability.

Frustrated by this gap, I decided to try something that had worked at a previous company. I built a rudimentary Gantt chart in Excel: engineer names down the left side, weeks across the columns, and features stretching across the timeline based on their estimated duration.
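
Something in the spirit of the sketch below, though mine lived in a spreadsheet rather than code - the engineers, features, and durations here are all placeholders:

```python
# A crude text version of that grid: engineers down the side, weeks across
# the top, features stretched over their estimated duration in weeks.
assignments = {
    "Alice": [("login", 3), ("settings", 2)],
    "Bob": [("dashboard", 4), ("audit", 1)],
}
weeks = 6

print(" " * 12 + "| " + " | ".join(f"W{w + 1}" for w in range(weeks)))
for engineer, features in assignments.items():
    cells = []
    for name, duration in features:
        cells.extend([name[:2].upper()] * duration)   # tag each allocated week
    cells.extend(["--"] * (weeks - len(cells)))       # unallocated weeks
    print(f"{engineer:<12}| " + " | ".join(cells[:weeks]))
```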

The pushback was immediate and telling. "John can't work on the login feature!" people protested, completely missing the point. Exasperated, I changed the names to "Engineer A," "Engineer B," etc. This wasn't about specific assignments - it was about understanding whether we had the capacity to deliver what we were committing to.

I sent the updated chart around, explicitly tagging our engineering manager, and received… crickets. We continued with the same "finger in the air" approach to sprint planning, picking up whatever "felt like a good amount of work."

The outcome was predictable: inconsistent delivery, missed commitments, and frustrated stakeholders who couldn't get reliable answers about when things would be done.

Interestingly, rumour has it that manager was eventually dismissed for performance reasons. Make of that what you will.

The Most Predictable Team I Ever Worked On

In contrast, the most predictable and successful team I've been part of had a manager who, in collaboration with an engineer wearing the scrum master hat, took planning seriously. They reached out to feature owners for sizing, maintained historical velocity data, and used it to make realistic commitments about what could be delivered when.

This wasn't bureaucracy for its own sake - it was systematic thinking applied to a complex problem. The result was a team that consistently hit its commitments, stakeholders who trusted our delivery estimates, and engineers who had clear expectations about scope and timeline.

The Time Expansion Problem

There's a principle discussed in Peopleware (a book I reference frequently and highly recommend) - Parkinson's Law, the observation that work expands to fill the time allocated to it. This makes a compelling case for setting sensible timeframes for feature delivery.

Without boundaries, how do you know when you're done? How much time should you spend polishing that component? When is "good enough" actually good enough? Setting explicit timeframes - even if they're occasionally wrong - creates healthy constraints that prevent endless scope creep and perfectionism.

When estimates turn out to be wrong due to genuine unknowns, that's valuable learning. But setting no bounds at all removes any pressure to work efficiently or make trade-off decisions.

The Uncertainty Principle

This brings us to one of the trickier aspects of software planning: dealing with uncertainty. I've observed that larger stories tend to have exponentially more uncertainty, which is part of why Fibonacci sizing makes intuitive sense.

The difference between a 1-point story and a 3-point story is relatively small and predictable. But a 21-point story compared to a 13-point story? That could mean the difference between 3 months and 5 months, because larger work has more opportunities for unexpected complexity to emerge.

Risk factors compound this uncertainty:

  • Familiarity: Work similar to what you've done before is lower risk
  • Technology: New frameworks or tools introduce unknowns
  • Dependencies: Heavy reliance on other teams multiplies complexity
  • Domain knowledge: Working in unfamiliar business areas slows progress

Good planning processes account for these risk factors in their sizing and timeline estimates.
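
One illustrative way to do that - not a standard formula, just a sketch of the idea that risk should widen the range rather than nudge a single number - looks something like this:

```python
def estimate_range(points, risk_factors):
    """Turn a single point estimate into an optimistic/pessimistic range.
    Each risk factor (new technology, external dependency, unfamiliar
    domain, ...) stretches the pessimistic end. The multipliers are arbitrary."""
    optimistic = points * 0.9
    pessimistic = points * (1.3 + 0.25 * risk_factors)
    return round(optimistic, 1), round(pessimistic, 1)

print(estimate_range(13, risk_factors=0))  # familiar work: a fairly tight range
print(estimate_range(13, risk_factors=3))  # several unknowns: a long pessimistic tail
```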

The Engineer's Dilemma

Here's the uncomfortable truth: many engineers don't want to care about process. They'd rather focus on the actual work - writing code, solving technical problems, building features. Process feels like overhead, bureaucracy, a distraction from "real" work.

I understand this perspective, but I've come to believe it's shortsighted. Process exists to answer important questions that stakeholders need answered. Without it, teams can't provide confidence about delivery timelines, can't identify problems early, and can't learn systematically from their experiences.

The absence of process doesn't eliminate the need for planning - it just makes planning invisible and unreliable. Stakeholders still need to know when features will be ready. Product managers still need to coordinate releases. Sales teams still need to make commitments to customers.

When engineering teams refuse to engage with planning systematically, someone else ends up making promises based on even less information. That's how you end up with impossible deadlines imposed from above rather than realistic commitments made collaboratively.

What Good Looks Like

Software planning is undeniably complex. There are no perfect methodologies, no silver bullets that eliminate uncertainty entirely. But that complexity is exactly why good managers don't shy away from it.

The best technical leaders I've worked with tackle planning head-on. They experiment with different approaches, measure what works, and iterate until they find something that serves their team and stakeholders well. They never let teams operate without any systematic approach to understanding capacity and delivery patterns.

This doesn't mean drowning teams in process for process's sake. It means being thoughtful about which processes serve real needs and being disciplined about maintaining them even when they're occasionally inconvenient.

The Path Forward

If you're on a team struggling with planning (or avoiding it entirely), consider starting small:

  1. Pick one metric: Whether it's story throughput, sprint velocity, or cycle time, start measuring something consistently (see the cycle-time sketch after this list)
  2. Set explicit timeframes: Even rough estimates create helpful constraints and learning opportunities
  3. Review and learn: When estimates are wrong, spend time understanding why
  4. Communicate transparently: Share what you know and what you don't with stakeholders
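
That first step doesn't need tooling. Cycle time, for instance, falls out of two dates per story - a minimal sketch with made-up data:

```python
from datetime import date
from statistics import median

# Hypothetical record of when each story was started and finished.
stories = [
    (date(2024, 3, 1), date(2024, 3, 4)),
    (date(2024, 3, 2), date(2024, 3, 9)),
    (date(2024, 3, 5), date(2024, 3, 10)),
]

cycle_times = [(finished - started).days for started, finished in stories]
print(f"median cycle time: {median(cycle_times)} days")  # -> 5 days
```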

The goal isn't perfect prediction - it's systematic improvement and reliable communication about what your team can deliver.

Software planning will always involve uncertainty, but that's no excuse for avoiding it entirely. The teams that embrace this complexity and work systematically to manage it are the ones that consistently deliver value and build trust with their stakeholders.

What planning approaches have worked (or failed spectacularly) in your experience? How does your team balance process with flexibility?
