Team level practices:
- Planning game
- Acceptance tests
- Small releases
- Whole team
- Simple design
- Test-driven development
- Refactoring
- Pairing
- Continuous integration
- Collective ownership
- Sustainable pace
- Metaphor
Bob uses a deck of index cards, one for each story. Cards are added and removed as the project progresses. First, each story gets an estimate. The estimates are arbitrary numbers signifying the relative complexity of a story. These numbers constitute the function points or story points used to plan and measure project progress.
An initial plan is then made, laying the stories out in weekly iterations. The initial number of story points in an iteration is a guess and depends on team experience, among other things. If a team starts out trying to do 50 points in a week and by Wednesday only 22 are done, the customer is promptly told that he can expect only about 44 points to be finished by the end of the iteration, and so on. When the iteration ends, the final number of points done is counted and put in a graph to show progress, and a new plan is created based on this new knowledge. The customer selects the stories that are most important at the beginning of each iteration, and stories can be added or taken away. The project is done when the customer cannot find any new stories that are worth implementing, or when the delivery date arrives, whichever comes first.
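The mid-iteration arithmetic above can be sketched as a simple linear extrapolation. This is an illustrative sketch, not code from any real planning tool; the function name and the assumption that Wednesday marks day 2.5 of a 5-day week are mine:

```python
def project_iteration(points_planned, points_done, days_elapsed, days_total=5):
    """Extrapolate how many story points will be done by the end of
    the iteration, assuming the pace so far holds."""
    pace = points_done / days_elapsed   # points completed per day so far
    return round(pace * days_total)

# The example from the text: 50 points planned, 22 done by Wednesday
# (taken here as day 2.5 of a 5-day week).
print(project_iteration(50, 22, days_elapsed=2.5))  # → 44
```

The projection ignores the original 50-point plan entirely; only the measured pace matters, which is the point of the practice.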
They also do an initial two weeks of analysis of the problem domain. This is consistent with what Eric Evans recommends. This is when the requirements harvesting begins, and this phase could even produce an initial object model. The important thing is that this model will be discarded when the implementation starts.
So when is a story done? This comes down to the acceptance tests: when a story passes all its acceptance tests, it is done. Acceptance tests are automated using FitNesse or another framework, and run automatically. Ideally they would be written by the customer in FitNesse, but most often it will be business analysts writing the "happy path" and QA people and testers writing the exceptions and boundary conditions.
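To make the happy-path/boundary split concrete, here is a hedged sketch of acceptance tests for a hypothetical "volume discount" story, written as plain assertions rather than an actual FitNesse fixture; the function and the 10-item discount rule are invented for illustration:

```python
def discounted_total(quantity, unit_price):
    """Hypothetical domain function under test: orders of ten or
    more items get a 10% volume discount."""
    total = quantity * unit_price
    if quantity >= 10:
        total *= 0.9
    return round(total, 2)

# Happy path — the kind of case a business analyst would write:
assert discounted_total(10, 5.00) == 45.00

# Boundary conditions and exceptions — the kind QA and testers add:
assert discounted_total(9, 5.00) == 45.00   # just below the threshold
assert discounted_total(0, 5.00) == 0.00    # empty order
print("all acceptance tests pass")
```

In FitNesse the same cases would live in a wiki decision table; the structure — a handful of happy-path rows plus the edge cases around each rule — is the same.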
How do they handle project slippage? When you measure function points done per week and function points remaining, after around five weeks the average number of points done per week will be accurate enough to predict a delivery date. This is still pretty early in the project: about seven weeks into the schedule we can tell the customer that the end date is unrealistic. This is unheard of in waterfall projects, where the norm is to get this information at best when the implementation phase is re-estimated, near the end. Of course the customers will still complain about slippage, but when you discover it this early it is possible to do something about it. Most often the date will stay fixed; adding more people can be an option this early, and it is never a good idea to compromise on quality. What is left is the scope, and the story deck needs to be reshuffled and reprioritized.
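The delivery-date prediction is just the remaining points divided by the measured average velocity. A minimal sketch, with made-up weekly numbers standing in for the first five iterations:

```python
import math

def weeks_to_finish(points_remaining, weekly_velocities):
    """Estimate how many more weekly iterations are needed, based on
    the average velocity measured so far."""
    avg_velocity = sum(weekly_velocities) / len(weekly_velocities)
    return math.ceil(points_remaining / avg_velocity)

# Hypothetical data: points completed in each of the first five weeks.
velocities = [22, 31, 27, 25, 30]   # average: 27 points/week
print(weeks_to_finish(400, velocities))  # → 15 more weeks
```

Comparing that estimate against the weeks left before the promised date is what lets the team raise the alarm around week seven instead of at the end.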
There are user demonstrations after each weekly iteration, and fully functional, ready-for-production releases every six weeks. This means that if the project is cancelled after one of these releases, it will still provide value to the customer. It also allows the customer to terminate the project before the final delivery date if he cannot find any more stories worth implementing.
Whole team refers to the practice of co-locating the entire team in one room. This includes business analysts, QA people, project managers and testers.
The lower-level practices are better known to us, and we have implemented most of them in the course of our work with Eric Evans on Domain-Driven Design. To me, these team-level practices are what we need to complete the picture and help us avoid project failures in the future.