1. 10 April 2014

    Agile estimating in a Scrumban context

    Last night I went to the London Kanban Coaching Exchange to listen to a talk by Nader Talai titled “Does size matter?”. In his talk, Nader told us the story of a team that gained its stakeholders’ trust through improved estimation.

    Our journey with Agile estimation…

    I have been on an Agile estimation journey of my own with my team over the last year. Before last August we ran a fairly vanilla Scrum routine, meeting three times a sprint for an hour to discuss and size the top of the backlog.

    For us, at that time, estimation was a useful practice and we felt we were accurate enough. Taking work that we’d recently completed and comparing upcoming work against it - asking “is upcoming feature X bigger or smaller than completed feature Y?” - gave us a good sense of what we could take on.

    At that time, during sprint planning, we’d take our targeted stories and plan them in detail, assigning an estimate in hours to each task. To be fair, we were often wildly inaccurate with those hourly estimates, but it all held together and we enjoyed good trust with our stakeholders.

    Moving from Scrum to Kanban…

    In August we made a few changes. Firstly we started working on new projects for new stakeholders. Secondly I had read a lot of debate about #NoEstimates and wanted to remove a lot of the overhead associated with estimating - especially estimating tasks in hours where we were failing to improve. Lastly - for business reasons - it made sense for us to become very familiar with the mechanics of Kanban rather than Scrum.

    So between August and January we shifted our routine to Kanban. I blogged a fair amount about our experiences at the time, and we felt the transition was fairly successful. Certainly we felt predictable in our delivery even though we’d stopped estimating work in either story points or hours.

    Additionally, by tracking the cycle time of each user story we had a good basis for Kaizen - identifying the elapsed time from accepting the work to completing it, and where work had stalled in the process.

    We were able to point to our cumulative flow chart and say with confidence "By the end of the release we’ll be here" or "Based on our previous data the next feature will be delivered in 12 days (with 80% certainty)". Those probabilistic predictions felt a lot more stable than using abstract points to project into the future.

    [Image: our cumulative flow diagram for the release]
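
    For anyone wondering where a number like “12 days (with 80% certainty)” comes from, here’s a minimal sketch of the mechanic - the cycle times below are invented for illustration, but with real data you’d use the measured durations of your own completed stories:

    ```typescript
    // Hypothetical cycle times (in days) for recently completed stories.
    const cycleTimes = [5, 7, 8, 9, 11, 12, 12, 14, 15, 21];

    // Return the value below which proportion `p` of the samples fall.
    function percentile(samples: number[], p: number): number {
      const sorted = [...samples].sort((a, b) => a - b);
      const index = Math.max(0, Math.ceil(p * sorted.length) - 1);
      return sorted[index];
    }

    // 80% of past stories finished within this many days.
    console.log(`Next feature: ${percentile(cycleTimes, 0.8)} days (80% certainty)`);
    ```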

    But we lost something by not estimating work…

    At the moment we are planning our next release, which will run over a number of months. The business drivers for using a strict Kanban methodology have changed, so we have some free rein to adapt our process.

    In our release retrospective recently we agreed that striving for continuous flow and moving away from iteration goals meant we lost some valuable interactions in the team.

    Some members missed the focus that a sprint goal, a commitment, and the two-week timebox gave them.

    Others missed the planning and estimating meetings where they had the opportunity to discuss and get a shared understanding of the work. For them the practice of estimating was more valuable than the outcome of the estimating session. 

    The evils of estimation

    Last night’s talk by Nader crystallised some thoughts for me. When we moved away from Agile estimates in our last release we realised the value of cycle time and the Lean method of identifying waste in our process.

    Holding up a “13 point estimate” as the only measurement of size hides a lot of evils that might lurk in the team’s work.

    Firstly, if teams feel under pressure to meet a certain velocity (let’s say 26 points a sprint) it’s too easy to buckle and start to play with the numbers.

    "OK, lets take this feature and call it 13 points even though last time we called it an 8. We need to keep our velocity up"

    Or

    "We’re 2 days away from the end of the sprint, lets make sure we get this feature done quickly to keep our velocity up"

    Neither of these patterns is helpful. By making the work bigger you mask a lot of inefficiencies that will follow. If the developer can actually complete the work faster because he over-estimated to make the velocity report look good, will he gold-plate the feature to use up the remaining time? If the team finish early and bring in more work, does that make the velocity chart more accurate? Of course not.

    By rushing to complete work before the end of the sprint - normally an arbitrary date anyway - quality can only suffer. Why rush to complete a feature to satisfy the velocity chart if you then have to rework it later or ship it with low quality?

    How about this one from the Product Owner:

    "You can do 26 points in a sprint right? So I can trade these two 13 point features for twenty-six 1 point features??"

    Of course not.

    How we are going to use estimates

    Despite the above, I believe estimates to be helpful. They only go bad when the output is used as a management tool, especially by management outside the team.

    Here is how we are going to use estimates in the next release.

    1. Use story points as a planning guide but use cycle time for predictability, reporting and kaizen

      Estimating in T-shirt sizes or story points is a useful activity because it encourages conversations about features and promotes a shared understanding of what is complex (big).

      It also identifies features which are candidates for decomposition so they can flow easily through the process.

      For predictability, cycle time (how long a feature takes to complete) is a much more powerful metric. It gives stakeholders a commitment with an accompanying confidence level (“We can deliver a feature in 12 days, with 80% certainty”), and that is backed up with concrete data.

    2. Always remember that the value in estimating is the activity, not the output

      Taking time to estimate throughout the release promotes discussion and highlights where team members have different interpretations of the requirements.

      If one member estimates a feature at 1 point and another member at 13 points… you’ve just realised the value of estimates. That misunderstanding needs to be straightened out before development starts.

    3. Don’t drive the team to get better at estimating

      Mike Cohn has the following model in his book “Agile Estimating and Planning”

      [Image: Mike Cohn’s curve of estimate accuracy against estimation effort]
      I like it - it shows that the accuracy of estimates increases dramatically with a small amount of effort, but never reaches 100%.

      Pushing accuracy beyond that point requires significantly more effort, and can actually end up reducing accuracy as you gather more and more data.

    Teams will continue to be inaccurate in their estimates - that’s fine and expected. Your predictability and trust with stakeholders should be built using probabilistic data (cycle time and cumulative flow), leaving estimates as a team activity and a team metric.

    Thanks, Nader, for a thought-provoking talk!

  2. 31 March 2014

    What CustDev is and is not

    Last week I wrote about a Customer Development methodology that we are codifying in my organisation.

    We are a group of product development teams in a larger, established company. Although we don’t have the problem that “The Lean Startup” and “The Four Steps to the Epiphany” aim to solve (who are our customers? which market are we entering?), we do have a need to validate our product vision amongst our existing, known customers.

    That word “vision” is important. 

    Customer Development is not a methodology for interviewing customers, discovering their biggest wishes, complaints or issues and building a product roadmap. It is also not a process for holding focus groups and getting feedback on completed features.

    From our company’s inception we’ve been led by visionaries, and a product vision that is simply the sum of a list of customer wishes would not have taken us to our market position today. We feel that we have to lead our customers to innovative solutions at a rate that is slightly faster than is comfortable for them.

    The “Technology Adoption Lifecycle” model is useful. 

    We feel that our job is to satisfy our existing customers across the market, but also to innovate and predict which products will appeal to the Early Adopters and Early Majority, keeping everyone moving to the left across the Technology Adoption Lifecycle curve.

  3. 27 March 2014

    A new Lean Customer Development methodology

    Currently my teams are in the period between product releases, tidying up on the one just gone and planning for the one ahead. It’s a great time to think about Customer Development and how to apply it to our next product iteration.

    I’m a huge fan of books like “The Lean Startup” by Eric Ries, “Running Lean” by Ash Maurya and “The Four Steps to the Epiphany” by Steve Blank.

    These books hold a wealth of generally applicable techniques and guidance but solve an adjacent problem.

    They all deal with the problem of “we have a vision for a product… is there a market fit, and what should the product actually be?”. They deal with the uncertainty of customer discovery (do customers exist, and will they pay for a product?) and product validation (is this idea for the product the right one?).

    Fundamentally they deal with business model planning and validation.

    In my line of work I have to solve a slightly different problem. I work for a Platform-as-a-Service company. My customers are known to me (I’ve been talking to them today!) and my motivation isn’t to discover whether a market exists or which pricing plan would appeal to which market segment.

    PaaS and SaaS economics

    The PaaS and SaaS economy is driven by a few key metrics:

    • New customers: Bringing people into the system that weren’t using the product previously
    • Renewals: Keeping those customers happy and using the product
    • Upsells: Expanding the product further into the customer’s organisation to increase the number of users

    The aim of Customer Development for me is to build a product that will drive one (or all three) of those metrics for my company.

    I’m struggling with what to call this methodology - it isn’t the same as Customer Development as codified by Steve Blank, or Lean Startup as codified by Eric Ries.

    This is a methodology in need of a name!

    What is Customer Development

    In my organisation Customer Development exists as a methodology in parallel with Product Development (Agile, Scrum).

    As a methodology it complements and reinforces Product Development by eliminating risk, uncertainty and the causes of rework and waste.

    [Image: Customer Development running in parallel with Product Development]

    What are the aims of Customer Development

    The aims of Customer Development, in order, are:

    1. To systematically prepare the Product Development teams to build a product that will drive new customers to our product (new logos) and drive customer loyalty (renewals).
    2. To reduce waste in our Product Development process by building the “right thing” first time.
    3. To shorten the feedback loop between developers and customers.

    What are the principles of Customer Development

    1. There are no facts inside our building, only opinions. Facts come from validated research with customers.
    2. We aren’t interested in building what we think is “the right product”. We are interested in building a product our customers will use, driving new logos and renewals.
    3. The biggest source of waste in Product Development is feedback delay. Customer Development provides feedback on assumptions, ideas and features as quickly as possible to eliminate waste.

    Lots more to come on #CustDev and our work to build a product that will drive logos, renewals and upsells!

  4. 14 February 2014

    Finding the 1%: The danger of atrophy in Agile teams

    According to Wikipedia…..

    Atrophy is the partial or complete wasting away of a part of the body. Causes of atrophy include mutations, poor nourishment, poor circulation, loss of hormonal support, and loss of nerve supply to the target organ.

    "Agile team atrophy" is something that I’ve experienced with teams before. Left over time there is a potential for good habits to drop off and bad habits creep in.

    Standups might be skipped if we’re all too busy today. We don’t have time to write an automated test for that feature. We don’t need a retrospective for this sprint.

    My team seems to march over rolling terrain - walking up steep inclines towards great agile practices but sometimes wandering through long valleys where things start to slip and require attention.

    I read a great post by James Clear on the BufferApp blog - by the way, one of my favourite internet companies at the moment after hearing Joel speak at the London LeanStartup group.

    James wrote about the science of marginal gains: what happens if you improve everything by 1%?

    [Image: James Clear’s chart of 1% daily gains and losses compounding over a year]

    From James’ post:

    It’s so easy to overestimate the importance of one defining moment and underestimate the value of making better decisions on a daily basis.

    Almost every habit that you have — good or bad — is the result of many small decisions over time.

    And yet, how easily we forget this when we want to make a change.

    So often we convince ourselves that change is only meaningful if there is some large, visible outcome associated with it. Whether it is losing weight, building a business, traveling the world or any other goal, we often put pressure on ourselves to make some earth-shattering improvement that everyone will talk about.

    Meanwhile, improving by just 1 percent isn’t notable (and sometimes it isn’t even noticeable). But it can be just as meaningful, especially in the long run.

    And from what I can tell, this pattern works the same way in reverse. (An aggregation of marginal losses, in other words.) If you find yourself stuck with bad habits or poor results, it’s usually not because something happened overnight. It’s the sum of many small choices — a 1 percent decline here and there — that eventually leads to a problem.

    I believe in this model. In fact I’d go further in applying it to Agile teams and say that if you aren’t continually improving by 1% you are probably regressing somewhere. Teams need to keep improving - most of all to keep the idea of Kaizen alive.

    If you have a learning culture built into the team and the ability to experiment it’s possible (but not easy) to regularly find that 1% improvement and make it stick.

    Without that culture of improvement (Kaizen), even if you think you are standing still you are very likely regressing by 1% week over week.
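
    The arithmetic behind James’ chart is simple enough to check yourself - 1% compounding daily, in both directions:

    ```typescript
    // 1% better or 1% worse, every day, compounded over a year.
    const days = 365;
    console.log((1.01 ** days).toFixed(1)); // ~37.8: nearly 38x better
    console.log((0.99 ** days).toFixed(2)); // ~0.03: almost nothing left
    ```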

    What improvements have you made today? Answers in the comments please!

  5. 3 February 2014

    Size versus complexity versus delivery time

    I’ve been repeating a theme recently on this blog and twitter.

    I believe this to be true: if you stop measuring purely dev time and start to measure the end-to-end value stream - requirements analysis, feature definition, planning, dev, test, documentation and polishing time - variability naturally reduces.

    And don’t forget the wait time between each activity. In my team the cycle time is outrageous and there is a huge amount of time wasted whilst features sit between activities.

    Here is the scary thing: even if you go and find a couple of amazing developers with great productivity, will you ship sooner? Eliminating waste in the form of wait time has to be the primary focus for Development Managers.
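
    One way to make that wait time visible is to measure flow efficiency - the share of a feature’s total lead time spent actually being worked on. Here’s a minimal sketch with invented timestamps; with real data you’d pull these from your board history:

    ```typescript
    // Each stage of one feature's journey, in elapsed days from acceptance.
    // `active: false` marks queues - time the feature sat waiting.
    interface Stage { name: string; startDay: number; endDay: number; active: boolean; }

    const feature: Stage[] = [
      { name: "analysis",  startDay: 0,  endDay: 2,  active: true  },
      { name: "wait",      startDay: 2,  endDay: 9,  active: false },
      { name: "dev",       startDay: 9,  endDay: 13, active: true  },
      { name: "wait",      startDay: 13, endDay: 18, active: false },
      { name: "test+docs", startDay: 18, endDay: 20, active: true  },
    ];

    const touchTime = feature
      .filter(s => s.active)
      .reduce((sum, s) => sum + (s.endDay - s.startDay), 0);
    const leadTime = feature[feature.length - 1].endDay - feature[0].startDay;

    // 8 active days out of 20 elapsed: 40% flow efficiency, 60% waiting.
    console.log(`Flow efficiency: ${Math.round((touchTime / leadTime) * 100)}%`);
    ```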

    Some additional confirmation from Kirk Bryde on the kanbandev mailing list:

    For a large release for a Fortune 500 organization of about 1,500 stories over a 9 month schedule (and about 30 teams) I compared the CFDs [Cumulative Flow Diagrams] for story-points vs. story-counts, and the curves looked almost identical. i.e. There was no discernable difference between the shape of the curves when these “right-sized” stories were counted individually (when the y-axis was story-counts), or when the y-axis was story-points. (The organization used Scrum, so they liked to use story-points, but my CFD comparisons proved that it really wasn’t necessary - counts of the stories proved to be just as accurate.)

    To Paul’s point, it is amazing how much some teams (and leaders) will fret over poker-planning (story-point estimating) and metrics-tracking of story-points - without sufficient added value to justify the time taken. At the end of a large release, there are so many OTHER factors that cause MORE variability to delivering the software product “on time”, so precise story-point estimating and tracking doesn’t really matter.

  6. 31 January 2014

    "Limit Work In Progress" versus "Fix it on sight"

    Email to my team.

    Hey team,

    After chatting with Asad this morning I wanted to send this note to explain my thoughts on “Limit WIP” versus “Fix it on sight”.

    I was explaining a change that we need to make and said that he shouldn’t focus on it right now - we’d give it priority in the backlog. Asad pondered for a moment and said, “But it’s a one-line change.”

    Here are my thoughts. Limiting WIP is crucial for us in order to get things done. If we focus on a small batch of changes at one time we get work completed faster and with higher quality. It is really easy for us to start a lot of development changes and explode our WIP levels. Doing that means we’d have untested, undocumented and unknown code in the system, which would suck.

    BUT!! You guys are all a lot closer to the solution than I am. If you see issues that are truly “fix it on sight” issues please go ahead and whack them. I would define a “fix it on sight” issue as something that a pair can complete dev and testing on in a single timebox. (What is that? An hour or two? An afternoon??)

    Just make sure that if you start a “fix it on sight” it doesn’t require resources that aren’t currently involved (Documentation would be a good example) and that you do actually finish it within the time box you set.

    If you defer quick fixes because I told you to backlog it, you are letting process get in the way of progress when it should be enabling it.

    Feel free to tell me “I can complete this quickly, I’m doing it”, I won’t have a problem with that.

    You guys are a self-organising team, I’m here to coach and if I’m telling you something that you feel is wrong just let me know.

    18 days to go!!

  7. 28 January 2014

    Kanban work items - to slice or not to slice

    As we draw to the end of one large project - only 21 coding days left to go, people - we already have one eye on our next adventure. We’re thinking about how to improve predictability and quality in our next release.

    This was our first experience using Kanban to deliver a large software project and we’ve been happy with the outcome. It was also our first time not estimating the size of features using “planning poker” and relying on probabilistic data collected from completed work.

    A conversation on David Anderson’s “Modern Management Methods” Yahoo! group has helped me understand how to improve our process.

    A wrong assumption that I had made was that we should strive to break all work items down to be roughly the same size. According to David this is a common bias that folks from the Agile world bring with them to their Kanban thinking.

    Scrum == Deterministic, Kanban == Probabilistic

    We’re stuck on the idea that we can be deterministic by estimating work - either by assigning story points to work items, or by attempting to break them down to equal sizes. Both activities attempt to control and understand the size of the work involved.

    Kanban offers an alternative approach: using data gathered from past work, we can be probabilistic about how long work items will take. The idea that we can control the work by estimating, or by attempting to split it into “the right size”, is neither helpful to the process nor impactful on the time it takes to deliver the feature.

    This is one thing we know to be true after our current release and the cycle time measurements that we took:

    "The size of a work item or it’s complexity has no correlation on the time it takes to deliver the work."

    Unless you work in a very high-efficiency environment there is actually little need to control the size of work items. Size is unlikely to correlate with lead time.
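
    This is easy to check against your own data. Here’s a minimal sketch - the sizes and cycle times below are invented, but with an export from your own tracker the question is simply “what’s the correlation?”:

    ```typescript
    // Hypothetical story sizes and the days each story actually took.
    const storyPoints = [1, 2, 3, 5, 5, 8, 8, 13, 13, 20];
    const cycleDays   = [9, 4, 12, 6, 15, 7, 11, 10, 5, 13];

    // Pearson correlation coefficient: +1 perfect, 0 none, -1 inverse.
    function pearson(xs: number[], ys: number[]): number {
      const n = xs.length;
      const mean = (v: number[]) => v.reduce((a, b) => a + b, 0) / n;
      const mx = mean(xs), my = mean(ys);
      let cov = 0, vx = 0, vy = 0;
      for (let i = 0; i < n; i++) {
        cov += (xs[i] - mx) * (ys[i] - my);
        vx  += (xs[i] - mx) ** 2;
        vy  += (ys[i] - my) ** 2;
      }
      return cov / Math.sqrt(vx * vy);
    }

    // A value near 0 means size tells you little about delivery time.
    console.log(`r = ${pearson(storyPoints, cycleDays).toFixed(2)}`);
    ```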

    I think this is a common misunderstanding - David’s blue book on Kanban does mention “reducing variability”, but after reading the mailing list I went back to the book and saw that it’s a long way down the list of efficiencies that could be implemented.

    Teams coming from Scrum may believe that Kanban requires small, consistently sized work items to be effective. It seems that this just isn’t true.

    Handling variability efficiently is a competitive advantage

    One of the main benefits that we’ve taken from using Kanban as our method of controlling work is the ability to handle urgent requests and bug fixes. 

    Our team handles customer support issues, urgent bug fixes and feature work and Scrum didn’t gracefully handle the interruptions that kept happening.

    In a high-efficiency environment, reducing variability in the size of work items might be a benefit. Most development teams, though, should look at their ability to gracefully handle interrupts and different types of work as a competitive advantage.

    As someone said on the mailing list: Let the water flow around the rocks gracefully. Don’t try and control the rocks too much. 

    Use classes of service to handle variability

    Our Kanban system allows for different classes of service to be applied to different types of work items. We allocate 20% of effort to fixing problems and 40% each to our two products that we are enhancing.

    Classes of service, rather than splitting work down to the “right size”, are the right method to control variability. Classes of service control queuing discipline - how soon work is started after being added to the queue.

    Treating different types of work (note: types, not sizes) with different classes of service allows you to make decisions based on probabilistic data.

    This is much more helpful than trying to be deterministic by using story-point sizing or splitting work into same-size chunks.
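
    As a sketch of what that queuing discipline can look like in practice - using our 20% / 40% / 40% split, with everything else invented for illustration - you can pull the next item from whichever class of service is furthest below its allocation:

    ```typescript
    // Our capacity split across classes of service (from the post).
    type ClassOfService = "fixes" | "productA" | "productB";
    const allocation: Record<ClassOfService, number> = {
      fixes: 0.2, productA: 0.4, productB: 0.4,
    };

    // Given current work-in-progress counts, pick the class whose share
    // of WIP is furthest below its allocation - no item sizing required.
    function nextClass(wip: Record<ClassOfService, number>): ClassOfService {
      const total = Object.values(wip).reduce((a, b) => a + b, 0) || 1;
      const classes = Object.keys(allocation) as ClassOfService[];
      return classes.sort((a, b) =>
        (wip[a] / total - allocation[a]) - (wip[b] / total - allocation[b])
      )[0];
    }

    console.log(nextClass({ fixes: 1, productA: 3, productB: 1 })); // "productB"
    ```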

    My conclusion?

    In the next release we’ll still split features into work items but I’ve moved away from the idea that all items need to be the same size.

    The data we’ve collected from our current release gives us a good understanding of how long features take; size and complexity are attributes that don’t directly affect delivery time.

  8. 27 January 2014

    The beauty of small batch sizes

    I blogged last week about code reviews with the awesome Phabricator. The team is still impressed with how it’s working and how code review has changed.

    We’ve gone from an experience where you might get a code review if you were lucky (or unlucky) to one where you know you will consistently be reviewed. It’s great stuff.

    Individual members of the team are now pitching in on reviews. Phabricator allows individuals to define rules that create audit tasks based on author, keywords and so on.

    It struck me whilst reading some code today that systematic code review reinforces the Agile and Lean principles of controlling batch size.

    Working with small batch sizes gives faster feedback

    When someone is asked to review an entire feature they might be presented with dozens of files and hundreds of lines of code. Realistically, how good a job can they do?

    By minimising batch size you give the reviewer a chance to analyse your changes in a decent amount of detail. The effectiveness of the review increases.

    Small batch sizes, combined with a systematic code review tool like Phab, also mean that you get feedback much faster. This is hugely valuable in decreasing the overhead of context switching - the productivity killer.

    If you write enough code to form a useful batch, then commit and push it, I’ll get a code review task assigned. Thanks to the small batch size I can get back to you within minutes, and you can incorporate that feedback quickly rather than having to rediscover work you submitted days ago.

    Working with small batches localises problems

    Committing small batches of code means that problems introduced can be found, isolated and fixed sooner. Much better than sifting through a lot of code that somehow worked locally but now results in dozens of failing tests.

    Small batches decrease overhead

    This seems counter-intuitive but I really believe it to be true. By encouraging smaller batches of work to be committed we obviously increase the number of code review tasks that are needed.

    But each code review task is easier to understand, feedback flows back to the developers quicker, and quality improves. The result is fewer concerns raised back to the team and better code overall.

    If an activity that is deemed critical to the team seems to have high overhead, the best way to tackle that is to “do lots of that thing”. Efficiencies are found because people are incentivised to find them, and the system improves radically.

    Eric Ries - the Lean Startup guy - has a great blog post on working in small batch sizes.

  9. 24 January 2014

    A week in code review with Phabricator

    My development team has used code reviews as a tool to control quality for a long time. We’ve always been aware that peer review is an effective tool for finding bugs early but - honestly - we’d been going through the motions with it for a while.

    We would include “Peer review” on our kanban board as a task and that would get done, but then additional work and polish would happen, and more code would get pushed into our repository.

    Our team had a commitment to code review but no system. A change this week has turned code review on its head and, I feel, has had a massive impact on our quality.

    We found Phabricator - a Free Software product built at Facebook that offers a suite of tools for development teams. It has issue tracking and collaboration features that overlap with our own tooling, but they can be stripped out to leave a really nice code auditing tool.

    We now have an easy-to-use systematic way of reviewing each and every commit.

    Phabricator can either host your code repository or in our case we’ve configured it to pull from our centrally hosted Git server.

    A rule is set up to create an audit task whenever anyone on the team pushes code - both I and my senior developer are asked to audit, but anyone can jump in and offer a review.

    The audit task sits there in my queue. The interface for reviewing code is just great.

    Here’s an example: one of the team wrote perfectly good code, but with a second pair of eyes you can see a possible future bug.

    The method this._secureRecord returns either a database record or null if something goes wrong.

    The code works great, but line 69 calls .update() on that record - that won’t work if the value is null, so we should check.
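
    The fix the review suggested is a simple guard. Here’s a sketch of the pattern - _secureRecord and update() are from the example above; the surrounding class and types are invented for illustration:

    ```typescript
    // Illustrative type - the real record comes from our data layer.
    interface DatabaseRecord { update(): void; }

    class RecordHandler {
      // Returns a database record, or null if something goes wrong.
      private _secureRecord(id: string): DatabaseRecord | null {
        return null; // stubbed for the sketch
      }

      save(id: string): void {
        const record = this._secureRecord(id);
        if (record === null) {
          return; // guard: calling update() on null would blow up
        }
        record.update(); // safe - record is known to be non-null here
      }
    }
    ```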

    Phabricator sends off an email and then tracks the concern until the team resolve it.

    Code review is by far the cheapest way of finding bugs and we’ve shortened the feedback cycle for our developers by giving code reviews that are targeted.

    Give Phabricator a go for your development team!

  10. 3 January 2014

    Moving on from ServiceNow

    Today, after 2 years, 4 months and 2 days - considered to be some length of time - I handed in my notice at ServiceNow.

    I don’t have much juicy gossip - it’s an amazing product, a great company and a hugely talented group of individuals. Maybe in the future I’ll do a more critical review of my time there but, as a spoiler, it would be overwhelmingly positive with some commentary about the challenges of a hyper-growth company and a development organisation that nearly tripled in size.

    If you are a customer considering the platform - buy it. They are on a trajectory that will kill SFDC in 5 years. They’ve already served BMC their breakfast in most prospective accounts they go in to.

    If you have a chance to work there - do it. It’s an incredibly hard working place but there are still a ton of challenges to solve.

    So why did I leave? Well - firstly, I don’t have another job to go to just yet, which probably says something.

    My most-used retrospective technique with teams was “The Organisational Soup” - a technique for classifying challenges and problems by the team’s ability to change them.

    From FutureWorks Consulting:

    [Image: the Organisational Soup diagram - circles of control and influence floating in “the soup”]

    All individuals face challenges at work - their boss, their projects, their environments. To tackle challenges it helps to write them down and place them on the above chart.

    What challenges can you as an individual exert control over? What challenges can you influence? What else is there that you cannot either control or influence? Well - that’s where I’ve been struggling.

    These factors exist in “The Soup”. You can’t control or influence some people or some circumstances and so you can only respond to them.

    I felt strongly enough about some of the challenges that I faced to take a response action of moving on. It’s all good, I can’t wait for the next challenge.

    There are some amazing companies to work for out there. Who’s next!?