Monthly Archives: October 2010

The Case of the Burn-up Chart: The importance of release planning.

We currently have a team working on a solution for a customer with a very compressed timeframe. During the setup for the project a large number of product backlog items were created and added to the team project (we use Team Foundation Server). You can see the rate at which effort was added to the backlog in the following graph:

[Figure: rate at which effort was added to the backlog over time]

However, one of the things that bit us from a reporting perspective was that a lot of the work items were added to the root iteration path. This meant that in the initial release burn-down bars the scope simply wasn’t visible. As time progressed these PBIs were pulled into the release, and as a result we got a “burn-up” chart:

[Figure: release burn-down trending upwards as scope was pulled into the release]

To any customer this is going to look pretty shocking, but instinctively we knew we were actually burning through the scope, so some further analysis was required. What I did was produce the inverse of a standard burn-down graph by plotting the work done and the work committed to, by sprint:

[Figure: story points done and committed, plotted by sprint]
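
For the record, that graph is just an aggregation over the work item store, so you can reproduce it outside the cube as well. Here is a minimal sketch using the TFS 2010 client object model – the collection URL, project name, and the "Story Points" field name are illustrative (the effort field differs between process templates), so treat it as a sketch rather than the exact query we ran:

    using System;
    using System.Linq;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;

    class BurnUpBySprint
    {
        static void Main()
        {
            // Connect to the team project collection (URL is illustrative).
            var collection = new TfsTeamProjectCollection(
                new Uri("http://tfs:8080/tfs/DefaultCollection"));
            var store = collection.GetService<WorkItemStore>();

            // Pull every PBI under the release; extending the graph to the
            // root \Release 1 iteration path is just a matter of this filter.
            var items = store.Query(
                @"SELECT [System.Id] FROM WorkItems
                  WHERE [System.TeamProject] = 'MyProject'
                    AND [System.WorkItemType] = 'Product Backlog Item'
                    AND [System.IterationPath] UNDER 'MyProject\Release 1'");

            // Sum story points per sprint, split into the done and
            // committed series plotted in the graph above.
            foreach (var sprint in items.Cast<WorkItem>()
                                        .GroupBy(wi => wi.IterationPath)
                                        .OrderBy(g => g.Key))
            {
                double done = sprint.Where(wi => wi.State == "Done").Sum(Points);
                double committed = sprint.Where(wi => wi.State != "Done").Sum(Points);
                Console.WriteLine("{0}: done={1} committed={2}",
                    sprint.Key, done, committed);
            }
        }

        // "Story Points" is the Agile template field name; the Scrum
        // template calls the equivalent field "Effort".
        static double Points(WorkItem wi)
        {
            object value = wi["Story Points"];
            return value == null ? 0.0 : Convert.ToDouble(value);
        }
    }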

That is an interesting graph in its own right, because it means that either the sizing of the backlog is inconsistent or the team suddenly put their foot to the floor (or started working some long hours). Just to double-check that the graph was showing me what I expected, you can see that the first two (completed) sprints are pretty much all green (done), and the current sprint is pretty much all committed.

[Figure: sprint-by-sprint breakdown – completed sprints showing done work in green, current sprint showing committed work]

Finally – the remaining work for this particular release can be seen simply by extending this graph to include the root \Release 1 iteration path (the remaining story points after this sprint are highlighted):

[Figure: the graph extended to the \Release 1 iteration path, with the story points remaining after this sprint highlighted]

Anyway – gotta go, my flight is boarding. I think the lessons learned from this are:

  1. To get a text-book graph you need to do some release planning and make sure work items are in the right bucket before the first sprint begins (accepting that there are always changes in the requirements that will impact the graph to a lesser extent).
  2. If you don’t get it right, it’s not the end of the world. The analysis cube gives you the information you need to figure out what is going on, and provided you keep your sprint backlogs well managed it should be OK.

Cheers!


.NET Package Management with NuPack (finally)

One of the key things that a software developer can do to make their code easy to maintain is to ensure that it is relatively easy to compile and run on a fresh workstation. This can sometimes be quite difficult given the complex dependencies that any piece of code might require, from databases to third-party libraries.

This week saw the announcement of NuPack (bundled with the beta of ASP.NET MVC 3.0). The .NET community is entering a new chapter in open-source/vendor collaboration with the release of NuPack via the Outercurve Foundation (formerly the CodePlex Foundation). The cool thing about this is that NuPack directly contributes to solving one of those key maintainability problems for software developers: managing external dependencies.

NuPack is a package manager for the .NET platform. It allows developers to browse for dependencies to include in their own projects and perform the necessary actions to install, update, and even remove them. NuPack is one of the critical ingredients for what I would like to call GLF5 (Get-Latest F5) compliance.
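
To give a feel for the workflow, this is roughly what it looks like from the Package Manager Console in the current bits (using ELMAH as an example package – I’m quoting the verbs as I understand them in this early CTP, so they may well change as the project evolves):

    List-Package            # browse the packages available in the feed
    Add-Package ELMAH       # pull the package in and wire up the project references
    Update-Package ELMAH    # move to the latest version of the package
    Remove-Package ELMAH    # take the dependency back out again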

Some History

NuPack is a collaboration between Microsoft, the members of the Nubular open source project, and the Outercurve Foundation. Nubular was an extension to the RubyGems infrastructure which supported the deployment of .NET dependencies; NuPack removes the Ruby dependency and in the process seems a better fit for pure .NET developers (just as RubyGems is a better fit for pure Ruby developers).

Some people are likely to be critical of Microsoft coming to the party late on the package manager front and displacing the efforts of some others who have been working to solve this problem independently of the software giant. Some of the other projects (in addition to Nubular) include:

  1. ngems
  2. OpenWrap (by Sebastien Lambla)
  3. hornget
  4. … many more.

Whilst these were all good attempts, ultimately I think that they would have failed to get broad adoption because they lacked vendor support – the kind of vendor support which makes sure that the NuPack component gets bundled with future versions of Visual Studio and into the hands of every .NET developer around the world.

Where Next

NuPack still has a way to go before it does everything that developers need from a package management perspective, but being an open source project you can get involved. And I think this is the .NET community’s first real chance to get traction on solving the package management problem.

I’m going to be looking through my various open source projects to figure out what makes sense to turn into a NuPack package. One of the things that will be important for developers to get across if they start making packages is Semantic Versioning. This is something the Nubular guys were across (they picked it up from the RubyGems community) and which I’d love to see the .NET community adhere to as well.
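
For anyone who hasn’t come across it, Semantic Versioning gives each segment of a major.minor.patch version number a fixed meaning: patch releases are bug fixes only, minor releases add functionality without breaking callers, and only a major release is allowed to break the public API. Here is a minimal sketch of the rule a package consumer relies on (the IsSafeUpgrade helper is mine, purely for illustration):

    using System;

    class SemVerDemo
    {
        // Under semantic versioning, an upgrade that stays within the same
        // major version should never break a consumer of the package.
        static bool IsSafeUpgrade(Version installed, Version candidate)
        {
            return candidate.Major == installed.Major && candidate >= installed;
        }

        static void Main()
        {
            var current  = new Version(1, 4, 2);  // major.minor.patch
            var bugFix   = new Version(1, 4, 3);  // patch: fixes only
            var features = new Version(1, 5, 0);  // minor: additive, compatible
            var breaking = new Version(2, 0, 0);  // major: breaking change

            Console.WriteLine(IsSafeUpgrade(current, bugFix));    // True
            Console.WriteLine(IsSafeUpgrade(current, features));  // True
            Console.WriteLine(IsSafeUpgrade(current, breaking));  // False
        }
    }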

Shipping software, it’s not all about the code.

Ah, the week that was! We currently have a project underway building a solution for a customer. All in all the software seems to be coming along quite well, doing what it needs to do. The team is leveraging some of the thinking from previous software they had developed, which I think has helped them a fair bit, and there are some really cool capabilities.

But shipping software is not all about the code. The team had a few challenges this week where we realised we weren’t reporting adequately to the customer, which meant they didn’t have the visibility that they required. Our build lab also had some issues which hadn’t been addressed, and consequently when the customer asked about progress we weren’t able to demonstrate right that second.

The good thing about working at Readify is that there is the depth of talent and the desire to rectify problems when they are discovered, so I am pleased to say that we got our reporting back online very quickly (supported by some work item tracking process improvements) and deployed manually to the lab whilst the automated deployment gets fixed (a job for next week). Unfortunately we didn’t bring the reporting back online fast enough to realise that we may need to drop some features in this iteration/sprint – so it feels like a surprise.

In summary, even though we were doing great on the technical aspects of delivery, we needed to increase the priority on making sure the other aspects of project housekeeping were being done. This is the essence of the DevPod process (a combination of Scrum and Visual Studio ALM tooling that we use internally).