Friday, January 25, 2013

The evilness of bugtrackers

It is quite common for software projects to use bugtrackers. Personally, I don't really like them and usually advise teams to stop using them. Let me tell you why.

Back in 2008/2009 when I had my first experiences with Scrum and agile software development, I was taught to "fix bugs immediately", meaning: If a bug comes up, you fix it.

We adopted this point of view, and as soon as a bug was found we would write an index card for it and put it in the todo column of a lane of a story we were currently working on. No later than the next standup, someone from the team took the bug and fixed it (admittedly, this worked most of the time; there were times when a bug stayed in the todo lane for a day or two longer). We kept the bugtracker alright, but the only reason we did so was to enable users to easily submit bugs and be informed when we had closed them. If a new bug came up in the bugtracker, we also wrote an index card and put it on the board, with the bugtracker issue number as additional information.

As putting bugs in a lane they had nothing to do with was rather confusing, we introduced a bug lane at the top of our sprintboard, thereby also visualizing: bugs that occur have top priority! Basically this meant that every bug caused a disturbance.

What effects did all this have on us?
First off, let's face it: nobody really likes fixing bugs; it's annoying. The immediate disturbance added even more annoyance, meaning every time a bug came up we were really annoyed. Annoyed with ourselves that the quality of our software was so low. To put numbers on it: in 2008 we had an average of 25 bugs per month.

What happened was that we started looking for ways to produce fewer bugs. Obviously we succeeded, as we cut the defect rate in half within the first year:
[Chart: average number of bugs per month]
All we did was implement one simple agile value: transparency.

Let's get back to bugtrackers.
Before Scrum we also used bugtrackers and kept collecting bugs (which I now believe is sometimes the most important use case of a bugtracker). Once in a while we would organize a "bug-fixing day" where we tried to fix as many bugs as possible in one day, only to realize at the end of the day that there were still a lot of bugs left; we felt like Sisyphus. The bug-fixing day soon became a dread, and we tried to do it as seldom as possible.

I have also seen projects where 150 bugs accumulated over the course of three quarters of a year, only for the team to spend a whole month at some point doing almost nothing but fixing bugs, and at the end still have 50 open bugs despite having fixed 200. If it hadn't been for the approaching release date, probably even more bugs would have accumulated. The team did the opposite of implementing transparency by hiding the bugs.

In this context the "fridge effect" also applies: you have a vague knowledge of what's in your fridge, but as long as you don't open it, you don't know exactly what's in it. Moreover, you have to actively open it to see what's in it.

Situations like the two mentioned above have a few impacts:
  • The developers are annoyed and demotivated and try everything to postpone the bugfixing as long as possible.
  • Any measured velocity of the team has no value at all, since there is a lot of work that has to be done some time later.
  • Any release planning has no value at all, since there is no actual velocity to begin with.
  • Any calculated or tracked development costs have no value at all, since there is a lot of work that has to be done some time later and there is no usable velocity.
  • Any calculated ROI has no value at all, since there are hidden costs some time later.
What you can calculate with stunning accuracy, though, is the average cost of a bugfix. Way to go!

Let's continue with response times.
Imagine you're spending your winter holidays in Prague and you're staying at a really nice hotel. Your room is nice and you feel comfortable, but unfortunately the heating isn't working and your room is as cold as it is outside. You decide to go to the front desk:
"Hi. The heating in my room is broken. Can you please fix it?"
"Of course. I will just write it down on our needs-to-be-repaired list and we'll get back to you asap! In the meantime, use this warm blanket as a workaround."
The next day nothing has changed, so you decide to cancel your stay at the hotel (after all, you paid a lot of money for it) and go to another hotel. Two years later you get a call from that first hotel: "Hello sir, we just wanted to inform you that we fixed the heating in that hotel room of yours."

Sounds silly? In software projects this actually happens all the time. I've filed bugs for Mozilla Firefox that took two years to get any kind of response, and bugs for Netbeans (mostly for the PHP components) that took three months. Needless to say, I don't use either product anymore. Although it's not the only reason I stopped using them, all these cases made me feel like I wasn't being taken seriously as a customer. And when that happens, I start looking for alternatives, which are usually easy to find. And since software always has users, there is always someone who wants to be taken seriously.

All in all, my experience has taught me that bugtrackers support the laziness of developers and a tendency toward low-quality software where customers are not taken seriously. These are the reasons why I like to advise teams to stop using them.

Update:
As Volker Dusch correctly stated, I left out the part about "don't spend time managing lists of bugs. Just fix them." That was intentional and is covered by the first link in "Additional reading".

Additional reading:

Sunday, December 2, 2012

A Christmas retrospective

This is a quite nice end-of-the-year retrospective method I first read about last year on Boris Gloger's blog. Since the original post is in German and doesn't tell you how to prepare, here are detailed preparation instructions and the how-to itself, both in English. One warning ahead: preparation for this one takes a lot of time. But it's totally worth it.

Time needed
Preparation: A few hours
Playing: 60 - 120 minutes, depending on your team size

Preparation
First of all, take a flipchart paper (or something of at least similar size; depending on your team size you might need a larger sheet of paper, you will understand why later on) and draw a large Christmas tree. Draw it as large as you can and don't forget the star on top; after all, you want the tree to look nice:


Now draw some small candles and Christmas baubles in two colors. Draw one candle and two baubles for every team member. If you're lazy, you can draw one candle and one bauble in each color and then copy them a few times (that's what I did ;-)



Once you have your candles and baubles finished, draw a present and a Christmas sock for every team member and write their names on them. In order to keep the presents and socks individual, you might want to not copy them ;-)


Cut out the candles, baubles, presents and socks. After that, create a matrix by putting the socks on the left of the tree and the presents under the tree:



Last part of preparation is to buy some cookies for the retrospective itself :-)

How to play
Give everyone a candle. Set the timer to 5 minutes and ask "What motivates me to get up and go to work?". Everyone should write a few words about the question on the candle (answers could be: I love our team, the current project is awesome, etc.). When the time is up, everyone in turn glues her/his candle on the tree at the position where her/his own sock row and present column cross and says a few words about it. A glue stick might come in handy ;-)

Give everyone two baubles for every other team member. Set the timer to 10 minutes and ask them to write something about every team member. On one bauble they should answer the question "What did I like most about working with you?" (an answer could be something like "I really liked pair programming with you"); on the other, "What do I hope we'll be doing together next year?" (an answer could be something like "I know you're really good at test-driven development, so I hope you can teach me how to do it, at best while pair programming").

When everyone is done filling out the baubles, everyone in turn gets their presents. Let's assume you are the first to get yours. Everyone in the team tells you what they wrote about you on their baubles and glues them at the corresponding positions in the tree matrix (there are two for every combination). Once you have received all your presents, say a few words about how you feel, and continue with the next person in your team.

Once you're finished, your tree should look like this:

Friday, November 2, 2012

Definition of done traffic light

This is a quite nice retrospective method I learned from agile42. Prepare your team's definition of done by writing each point on a single line and putting three columns beside every point. Paint the columns red, yellow and green. The colored columns stand for "We did that well" (green), "We did that ok" (yellow) and "We didn't do that very well" (red).

In the retrospective, every team member puts a dot in the colored column expressing his view on how well the team applied this point of the DoD. The result looks like this:

Now you can pick the 3 points with the lowest overall rating, discuss them (20 min timebox each) and develop optimisations for them.
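If your team tracks these dot votes digitally rather than on paper, the tally can be sketched in a few lines. This is a minimal sketch only; the DoD points, the votes and the green=2/yellow=1/red=0 scoring are made-up assumptions, not part of the original method:

```python
# Tally dot votes per DoD point and pick the 3 lowest-rated ones.
# Scoring is an assumption: green = 2, yellow = 1, red = 0.
SCORES = {"green": 2, "yellow": 1, "red": 0}

def lowest_rated(votes, n=3):
    """votes maps each DoD point to the list of color dots it received."""
    totals = {point: sum(SCORES[c] for c in colors)
              for point, colors in votes.items()}
    # Sort ascending by total score, so the worst-rated points come first.
    return sorted(totals, key=totals.get)[:n]

# Hypothetical example board:
votes = {
    "Code reviewed":      ["green", "green", "yellow"],
    "Unit tests written": ["red", "yellow", "red"],
    "Deployed to staging": ["yellow", "yellow", "green"],
    "Docs updated":       ["red", "red", "yellow"],
}
print(lowest_rated(votes))  # the 3 points to discuss first
```

Ties keep the order in which the points were listed, which is usually fine since tied points get discussed anyway.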

You can do the same for your working agreements.

Tuesday, August 28, 2012

The Definition of done workshop

Recently, one of the teams I take care of as a Scrum Master needed a Definition of Done. Since I couldn't find anything on the internet about a corresponding workshop or a good method to create one, I created a workshop based on the "Challenge Cards" method from the book "Gamestorming" (page 158). Here's how you do it:

Time needed:
About 30 - 60 minutes

Preparation:
Prepare green and red post-its and a blank flip chart (or similar).

How to play:
Divide the team into two groups. One group is the problem group; they get the red post-its. The other group is the DoD group; they get the green post-its. Now give both groups 10 minutes.

Ask the problem group to think of common problems the team encounters during development which could be fixed by a definition of done. Examples would be "a lot of bugs", "released version doesn't work" or "duplicate data appears". Important: point out that it's not about problems the team can't be held responsible for (a kernel panic on a server, for instance), but about problems that occur due to errors made during development.

Ask the DoD group to come up with possible items for a definition of done. This can be anything they can think of.

After the 10 minutes, ask the problem group to post their first problem on the left side of the flip chart. Now ask the DoD group to post one or more items that can solve this problem on the right side of the flip chart. Let them draw an arrow going from the DoD item(s) to the problem post-it. Notice that more than one DoD item can solve a problem and that one DoD item can solve multiple problems. It is ok and desirable to draw arrows from already posted DoD items to a new problem.
Continue until all the problems have been posted.

Now there are 3 possible outcomes:

You still have DoD items left.
Ask the DoD group to post them on the flip chart. After that, ask the whole team if they can think of more items they would like to include on the definition of done.

You still have problems that don't have a possible solution.
Ask the whole team if they can think of anything that solves them. After that, ask the whole team if they can think of more items they would like to include on the definition of done.

Every problem has at least one possible solution.
Ask the whole team if they can think of more items they would like to include on the definition of done.

As a last step, go over all the DoD items that have been posted on the flipchart during the workshop and ask the team whether each item makes sense on a definition of done and, if they agree, whether it lands on the final definition of done.


Example:
Our result looked like this

Tuesday, August 14, 2012

A different kind of retrospective: Story workshop

Yesterday I tried a different kind of retrospective: The story workshop. Instead of writing post-its answering "What was good?"/"What went wrong?" or Mad/Sad/Glad, you ask the team to do the following:


"Write a short story, a movie, a play or a comic about the last sprint"

The feedback from the attendees was very positive, the main statement being that you get a quite different view of the past sprint when you try to draw it (everyone in the retrospective drew a comic).

One of the comics looked like this:

Wednesday, July 25, 2012

Skillboard

Here at CHIP Online we introduced a skillboard a short while ago. We did this because we realized that the larger the company gets, the harder it is to keep track of who knows what.
The skillboard itself is quite simple: you put the skills in the rows (like PHP5, NoSQL or Meeting Moderation), and each row has three columns: Basic, Advanced, Pro. These columns represent the level of knowledge. Now everyone has the opportunity to put his name at the proper position: which skill do I have at what level? If you're a pro at PHP5, simply put your name in the Pro column of the PHP5 row.
The advantage of this skillboard: if anyone needs help with any technology, he/she can take a look at the skillboard and see if someone in the company has knowledge of it.
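In data terms, a skillboard is just a mapping from skill to level to names, and the "who can help me?" lookup follows directly from it. A minimal sketch (the names, and all skills beyond the ones mentioned above, are made up):

```python
# A skillboard as a simple mapping: skill -> level -> names.
# Names and board contents are hypothetical examples.
skillboard = {
    "PHP5":  {"Basic": ["Anna"], "Advanced": ["Ben"], "Pro": ["Chris"]},
    "NoSQL": {"Basic": ["Ben", "Chris"], "Advanced": [], "Pro": []},
    "Meeting Moderation": {"Basic": [], "Advanced": ["Anna"], "Pro": []},
}

def who_knows(skill, min_level="Basic"):
    """Everyone who has the skill at min_level or above."""
    levels = ["Basic", "Advanced", "Pro"]
    entry = skillboard.get(skill, {})
    return [name
            for level in levels[levels.index(min_level):]
            for name in entry.get(level, [])]

print(who_knows("PHP5", "Advanced"))  # -> ['Ben', 'Chris']
```

The physical board has the same advantage without any code: one glance answers the same query.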

Saturday, June 9, 2012

Different estimation methods compared

A while ago, I conducted a workshop about agile estimation. In this workshop, I showed the participants three (of many) different ways of estimation: Planning Poker, Magic Estimation and the Team Estimation Game.


Planning Poker [Interaction level: high]
Probably the classic estimation method. The product owner reads a user story and the team estimates using planning poker cards. If the estimates differ, the team starts a short discussion about everyone's different opinion of the story at hand. If after about three estimation rounds there is still no consensus, the story is obviously not clear enough, and everyone tries to gather new information until the next estimation session. Planning poker usually uses story points, although I've already seen cards with animals on them (quite a nice idea, actually). The estimation session ends when either all the stories are estimated or time is up.
At the workshop I let the participants estimate what it takes to clean a kitchen. During estimation the team realized that stories #1 and #4 were too large and split them into 2 and 3 stories respectively.
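The stopping rule described above (discuss on disagreement, give up after about three rounds) can be sketched in a few lines. This is only a sketch; treating "consensus" as everyone showing the same card is my assumption, and teams often accept adjacent cards too:

```python
# Sketch of the planning-poker stopping rule.
# "Consensus" here is an assumption: everyone shows the same card.
def poker_round(estimates, rounds_done, max_rounds=3):
    """Decide what to do after one reveal of planning poker cards."""
    if len(set(estimates)) == 1:
        return f"consensus: {estimates[0]} points"
    if rounds_done >= max_rounds:
        return "no consensus: clarify the story, estimate next session"
    return "discuss the differing opinions and re-estimate"

print(poker_round([5, 5, 5], rounds_done=1))
print(poker_round([3, 5, 13], rounds_done=3))
```

The discussion step is the whole point of the method, which is why no script can replace the actual card game.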


Magic Estimation [Interaction level: low - medium]
Probably the second classic estimation method. Prepare a flipchart (or similar) on which you draw an arrow going up (symbolizing increasing complexity) and write your references beside the arrow, be it user stories, T-shirt sizes or whatever you feel most comfortable with. The product owner reads a user story, writes it on a post-it and puts the post-it on the flipchart. This procedure is repeated for about 5 stories. The team then gets 2 minutes to stand in front of the flipchart and silently move the post-its to where they think they belong. If during the 2 minutes you realize that a story is wandering around a lot, you might want to talk about some details after the 2 minutes. This whole procedure is repeated until there are no stories left or the time is up.
At the workshop the result looked like this:


Team Estimation Game [Interaction level: low]
An estimation method I found a while ago here (the page is in German) and used at the workshop myself for the first time. This method is quite similar to magic estimation. As with magic estimation, you need a prepared flipchart. The product owner brings all the stories on index cards to the estimation and lays them, face down, on a table. Someone from the team (whoever wants to go first) takes the first story card, reads it out aloud and then puts it on the flipchart wherever he/she thinks it belongs. The second team member to pull a card can either put it beside, above or below the first story read. From the third story on, every team member has three possibilities for his move:

  • Read the next story card and put it on the flipchart
  • Reassign a story already on the flipchart, also explaining why he does that
  • Skip
When reassigning a story, you might also mark it with a dot. The more dots a story gets, the more opinions about it differ, and you should discuss it in detail. As with the two other estimation methods, the team estimation game ends when there are no more stories left or when time is up.
At the workshop the result of the team estimation game looked similar to magic estimation:


Conclusion:
In my time as a developer I only used planning poker (apart from any workshops or similar I attended). After having conducted this workshop, I still prefer planning poker over magic estimation and team estimation, since the interaction with planning poker is way higher than with the other two (which was also the conclusion of the workshop participants). That doesn't render magic estimation and team estimation totally useless; there are still valid use cases for them. They are actually quite useful when you need a lot of stories estimated in a short time, e.g. at the beginning of a project. Other than that, my favorite remains planning poker.