Showing posts with label agile.

Monday, July 25, 2016

The downside of frequent releases

Nowadays we tell people to release often, to do continuous delivery or continuous deployment. Facebook, for example, releases at least once a day (for a presentation about the Facebook release process, see here), Netflix at least once a week. The advantages are well known and obvious, one of the most famous being reduced risk. But there are also disadvantages, which are rarely talked about. Since continuous delivery and deployment mostly focus on the advantages for us, the developers, we tend to forget to check whether customers actually like it. And I have to say: as a customer, I don't always like it.
I don't mind CD (continuous d...) on websites like Facebook or Netflix, because as a customer I mostly don't even realize that there's been a release. But it drives me crazy when it comes to apps on my mobile devices. I know that I can activate automatic updates, but I don't, mostly for security reasons. I have often removed apps because at some point they requested additional rights I did not agree with (example: when an app suddenly wants access to my contacts but is not contacts-related at all).

Since I had a constant feeling of "update spam" regarding my apps, I decided to track the data for almost 2 months (March 16 - May 7). I did so for my 2 mobile devices, an iPhone 5 running iOS 9 and a Nexus 7 tablet running Android 6, writing down the number of updates, installed apps and OS version roughly every two days (I did not always manage to keep that interval). The Nexus 7 had 94 apps installed the whole time; the iPhone 5 had 57, with 58 apps for a few days. These are the results:

  • On average there were 2 app updates on iOS and 3 on Android per day
  • Relative to the number of installed apps, that is roughly 3% of the apps updated per day
  • In total there were 104 app updates on iOS and 164 on Android
  • On average almost every app received 2 updates in those 2 months
You can find the spreadsheet with the full collected data here.
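The daily averages above are simple to reproduce from the totals. A minimal sketch using the numbers from my tracking period (the day count is approximate):

```python
def avg_updates_per_day(total_updates, days):
    """Average number of app updates per day over the tracking period."""
    return total_updates / days

days = 52  # roughly March 16 - May 7
print(round(avg_updates_per_day(104, days)))  # iOS: 2
print(round(avg_updates_per_day(164, days)))  # Android: 3
```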

You could argue that 2 updates per app in 2 months is not that much, and you would be right. I have not collected data on this, but the updates are not spread evenly across all apps: some of my installed apps were last updated in December 2015 or even March 2015.

I have a similar experience with the program FileZilla. I use it once in a while (maybe once every 1-2 weeks), and almost every time I open it, it asks me whether I wish to install the latest update.

For me, as a customer, having 2-3 visible(!) updates per day is really unbelievably annoying. Therefore my advice would be: if you plan on releasing often, also take into consideration what kind of product you have. Last but not least, try to make your updates as silent as possible.

Wednesday, April 27, 2016

The effects of team size

When I started at Infineon back in May 2015, one of my teams consisted of roughly 8 developers - a team size which in my opinion is already too large. The Scrum Guide proposes the following:

Fewer than three Development Team members decrease interaction and results in smaller productivity gains.  Having more than nine members requires too much coordination.

My own experience is that starting from around 6 people you already have too much coordination. People in the company were complaining that the output of my team was low. One of my suggestions was clear: split the team. My predecessor had already tried to push this change through for a year before I started. It took me around half a year to finally get the team to split into two teams of 4. This is what happened to the velocity (the graph shows roughly one year):


I asked the team in the retrospective directly following the team split (Sprint 100), as well as after Sprint 100, why they think the velocity went up almost 100% and how they felt. This is what they replied:

  • Higher identification with stories due to smaller teams
  • More focused daily
  • More pair programming
  • No more "unpopular" stories
  • Less time wasted (e.g. in meetings you cannot really contribute)
Now, I'm certain this might not happen to every team. It is useful to know that in my case the stories in the initial team were quite diverse and spanned several products that have to be integrated into the SDK the team is building.

Wednesday, March 30, 2016

Release planning using the Agile Strategy Map

The situation

One of my teams provides the SDK for a range of hardware products, of which some are still in development. The hardware product development is working with milestones, at which certain new features of the product are available or changes to the hardware layout are made. The SDK is expected to support these features and layout changes with the release of the milestone. Although many features are known upfront, requirements do change in the course of the milestone completion. Example: The hardware layout has a flaw and needs to be changed.

The SDK gets a major version update (e.g. from 2.5.0 to 2.6.0) with a milestone release. The team was struggling to plan the releases. The scope of the features was unknown. Features that were not needed for the release were started, only to be stopped one sprint later, because a major feature request had been forgotten until the last sprint before the milestone release. The team has 2 product owners, of which one is more of a consulting product owner, and at the moment one main stakeholder (among many more).


The approach

My approach was to use a hierarchical backlog to plan the contents of a release. There are quite a few ways to achieve that, like User Story Maps, Impact Mapping or, my preferred approach in this case, the Agile Strategy Map. I helped the product owners and the main stakeholder create the map using sticky notes of different sizes and colors. Our Agile Strategy Map consists of the following elements:

  • Release goal (large sticky note): The goal of the release in one sentence
  • Critical Success Factor (red sticky notes): A CSF is a feature/item/etc. that has to be done in order to reach the release goal
  • Possible Success Factor (yellow sticky notes): A PSF is a nice-to-have. It does not have to be done in order to reach the release goal
  • Necessary Condition (orange sticky note): A sub work-item of a PSF/CSF. In order to complete a PSF/CSF, every NC has to be completed.
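The hierarchy of these elements can be sketched as a small data model. This is only my own illustration of the structure; the class and field names are not part of the Agile Strategy Map technique:

```python
from dataclasses import dataclass, field

@dataclass
class NecessaryCondition:
    """A sub work-item (orange sticky note) of a CSF/PSF."""
    title: str
    done: bool = False

@dataclass
class SuccessFactor:
    """A CSF (red, critical=True) or PSF (yellow, critical=False)."""
    title: str
    critical: bool
    business_value: int = 0
    priority: int = 0
    conditions: list = field(default_factory=list)

    def complete(self) -> bool:
        # A CSF/PSF is complete once every NC below it is done.
        return all(nc.done for nc in self.conditions)

@dataclass
class ReleaseGoal:
    """The release goal (large sticky note) in one sentence."""
    sentence: str
    factors: list = field(default_factory=list)

    def reached(self) -> bool:
        # The goal is reached when all critical success factors are complete.
        return all(f.complete() for f in self.factors if f.critical)
```

In this model, dropping a subpart of a feature is just removing one NC from `conditions`, which mirrors how we later trimmed features on the board without dropping them entirely.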
Our first map draft looked like this:
The Agile Strategy Map at the start of the release planning

In order to create the map, we took the following steps in a series of sessions:

Collect contents for the release
Everyone writes down what they think and/or know has to be in the release

Weight contents using Business Value Poker
Similar to Planning Poker, everyone defines the business value of the contents defined. This will nicely show in a later step that business value does not automatically mean higher priority.

Define the release goal
Once the contents and the business values were clear, define the release goal. This is one sentence that describes what will be achieved with this release.

Identify CSF and PSF
Once the release goal is clear, identify the critical items that have to be done in order to reach the release goal. Everything else will be a PSF.

Prioritize
Prioritize the PSFs and CSFs based on the available information like business value, CSF/PSF status, etc. We only used small sticky notes with the priorities on them combined with a discussion, but any other means of prioritization such as dot-voting, "buy a feature" or similar works as well. This will show that high business value does not automatically mean high priority. In the picture above you can see that the item with the highest business value, 3000, only had priority 5.

Identify the NCs
Identify all the requirements and work items a CSF or PSF has. You can group NCs below another NC.


The result

The Agile Strategy Map near the end of the release

Creating the Agile Strategy Map for release planning had several benefits. First off, since we put the map up on a whiteboard accessible to everyone, everyone including stakeholders could see what we were up to. In this case it actually led to a stakeholder causing a priority shift, since she saw that an item that was a very important customer request only had priority 8. It also helped us remove certain subparts (NCs) of a feature (CSF/PSF) we had thought had to be in the release, the most common reason being that we didn't have enough time to complete all the NCs for a feature and they weren't crucial for the release anyhow (which we didn't know beforehand). Having a hierarchical backlog made it easy to remove only certain parts of a feature instead of dropping the whole feature. Last but not least, we could track the throughput of PSFs, CSFs and NCs, and because of that we were able to make pretty good predictions of how many items would fit in the next release (example: for the next release we only prioritized down to 10, knowing that anything below most likely wouldn't make it anyhow).
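The throughput-based prediction works roughly like this; the item counts below are illustrative, not our real numbers:

```python
def forecast_cutoff(items_completed_per_release):
    """Estimate how many prioritized items fit into the next release,
    based on the average throughput of past releases."""
    return round(sum(items_completed_per_release) / len(items_completed_per_release))

# e.g. 9, 11 and 10 items (CSFs/PSFs) finished in the last three releases
cutoff = forecast_cutoff([9, 11, 10])
print(cutoff)  # 10 -> only prioritize items 1..10 for the next release
```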


P.S. One side note: there is no rule for how many user stories come out of a PSF/CSF or an NC. In our case we had PSFs that were only one user story and NCs that were 3 user stories.

Tuesday, July 14, 2015

User Story Taboo

For my agile workshops I created a little game called "User Story Taboo" which I'm using to teach attendees how to write user stories. Here's the game description.


Participants

4 - 16

Duration

  • about 60 - 90 minutes (normal version)
  • about 90 - 120 minutes (extended version)

Goal and description

The goal of the game is to make participants understand how they can write good user stories.

User Story Taboo is based on the board game "Tabu" where teams have to describe words and terms without using certain forbidden words.

The participants will write user stories, but they will be forbidden to use certain words. This shows participants that no one needs user stories like "As a server I want to...". They will learn the benefits of defining not only the functionality but also who it is for and how he/she will benefit from it.

We do this by writing stories for an extended version of battleship. The rules of battleship are well known, so we can concentrate on writing the stories instead of discussing the rules.

Forbidden words

User
better
faster
end-user
operator
client
customer
server
to look at
work
company
stakeholder
program
system
product owner
game producer
easy

Preparation

  • Print requirements
  • Write forbidden words on flipchart or print them

Setup

  • Divide participants into groups of max. 4 people
  • Give each group a part of the requirements. Ideally each group has the same amount of requirements

Game

Round 1: Write user stories

Explain to the participants that good user stories describe who wants to have something, what he/she wants and why he/she wants it. In my experience providing the common template "As <role> I want to have <functionality> so that I have <value>" helps the participants in writing their first user stories. Show and explain the forbidden words.

Now let the participants write their user stories. Sometimes this takes 2-3 iterations until the stories meet all the requirements (who, what, why) and omit all the forbidden words.

Round 2: Acceptance criteria

Explain acceptance criteria.

Let groups select 2 user stories from round 1 and have them write acceptance criteria for them (encourage participants to be creative here :)

Extended Version: Groups don't select the stories but each group writes the acceptance criteria for all their stories from round 1.

Round 3: Identify epics

Explain epics and the lifecycle of a requirement (Requirement -> Epic -> User Story -> Task, etc.). The rules from the ebook "5 Rules for Writing Effective User Stories" are a good basis and orientation.

Let participants take a look at the written stories and make them identify epics and user stories.


Extended version

Round 4: Split stories or epics

Explain how to identify stories that are too large (e.g. words like and, or, but, etc.)

Let groups select 2 user stories or epics and make them split these stories/epics.

Backlog-Management

Show and explain to participants possibilities to manage the backlog.


You can also find all the materials in German and English on Github.

Friday, May 29, 2015

How to measure productivity

A few days ago I was having a discussion about how to measure productivity (I will not elaborate on whether you should measure it at all; that is for another blogpost). We came up with a few metrics that might be useful indicators.

Hit rate

That is the actual story points achieved in the sprint divided by the committed/forecasted story points.
Example: You committed 52 story points, but you only achieved 37. Your hit rate is 71%.
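As a sketch, the calculation from the example:

```python
def hit_rate(achieved_points, committed_points):
    """Share of the committed/forecasted story points actually achieved."""
    return achieved_points / committed_points

print(f"{hit_rate(37, 52):.0%}")  # 71%
```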

Bugs per sprint + their average lead time

Track how many bugs are opened for your team/product per sprint. The idea is: the fewer bugs arise, the higher the quality of your software and the less you are sidetracked by bugfixing. Naturally this indicator works best if you fix bugs instantly and don't collect them in a bugtracker.
Also: track the average lead time it takes to fix bugs. The less time it consumes, the less you are sidetracked. Try adding a policy for this (for example: "We try to fix every bug within 24 hours").
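Average lead time is easy to compute from open/fix timestamps. A small sketch with hypothetical data:

```python
from datetime import datetime

def average_lead_time_hours(bugs):
    """Average time from opening a bug to fixing it, in hours."""
    hours = [(fixed - opened).total_seconds() / 3600 for opened, fixed in bugs]
    return sum(hours) / len(hours)

# Hypothetical (opened, fixed) pairs for one sprint
bugs = [
    (datetime(2015, 5, 4, 9, 0), datetime(2015, 5, 4, 15, 0)),   # 6 h
    (datetime(2015, 5, 6, 10, 0), datetime(2015, 5, 7, 10, 0)),  # 24 h
]
print(average_lead_time_hours(bugs))  # 15.0 -> meets a "24 hours" policy on average
```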

Impediments per sprint + their average lead time

Track how many impediments arise per sprint and their average lead time. The less things you have, that impede you, the more productive you should be. Also: the faster you can remove these impediments, the higher your productivity should be.

Amount of overtime / crunch time

We were a bit unsure about this one. How much does it really say about productivity? In my opinion you should only do overtime in absolutely exceptional situations; if you need overtime (or crunch time), something is fundamentally flawed in the way you do your work. My theory is that overtime results from bad planning, either because you (constantly) plan too much and/or there are far too few people for the work you want done. If you want to track this, make sure that people can report their overtime hours anonymously.

Reuse rate

One thing we were completely unsure about is the reuse rate (how much of your code is getting reused?). The idea was that the less you reinvent the wheel over and over again, the more productive you should be. But how to track this? The only thing we came up with was to run copy-and-paste/duplicate-code detection. Is this a valid metric in this case? What if you have multiple projects? If you have any ideas for this one, please let me know in the comments.
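A toy sketch of what such a detection could look like; real clone detectors (e.g. PMD's CPD) are far more sophisticated, so treat this only as an illustration of the idea:

```python
from collections import Counter

def duplicate_line_ratio(source, min_length=10):
    """Fraction of substantial lines that occur more than once,
    after stripping whitespace. A very rough copy-and-paste indicator."""
    lines = [line.strip() for line in source.splitlines()]
    lines = [line for line in lines if len(line) >= min_length]
    counts = Counter(lines)
    duplicated = sum(n for n in counts.values() if n > 1)
    return duplicated / len(lines) if lines else 0.0

code = """
total = compute_total(items)
print_report(total)
total = compute_total(items)
"""
print(round(duplicate_line_ratio(code), 2))  # 0.67 -> two of three lines are duplicates
```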

Don't: Velocity

Don't use your velocity as an indicator for productivity. First, it is very easy to manipulate, and chances are about 99% that it will be. Second, every team has its own velocity, meaning there is no comparable information about productivity to be found here.


So far we haven't tested these metrics as indicators for productivity. If we do, I will gladly let you know about the outcomes. If you have any more ideas on how to measure productivity, please let me know in the comments below.

Thursday, July 18, 2013

Active learning cycle

Many teams seem to struggle with keeping track of their improvements from the retrospective. One really useful tool for that is the active learning cycle.

Take a sheet of flipchart paper and divide it into 4 areas: Keep, Try, Brakes and Accelerators. The most common form looks like this, but you can always use a different form if it suits you better:
Active Learning Cycle
At the end of the retrospective you put the actions/improvements you decided on into "Try". Those are things that you want to try out. Remember to put the active learning cycle afterwards in a place where everybody can see it; near the team board would be a good place.

No later than in the next retrospective, you use the active learning cycle to decide what you want to do with the actions on it.

  • Did you like it and you want to continue doing it? Put it in "Keep" and keep on doing it
  • Did you think it rather impeded you and you want to stop doing it? Put it in "Brakes". This could be things like "Standup at 2pm", "Digital team board", etc. And, more importantly: stop doing it ;-)
  • Was it something that helped you but which is nothing you can really keep on doing all the time? Put it in Accelerators. This could be things like "2-day team offsite" (It was an accelerator for the team, but you can't do a 2-day offsite every week).
You don't have to wait, though; the active learning cycle is supposed to be a "living" artifact, so you can always move post-its around when you feel it's time to do so. Of course you can also move things from "Keep" to "Brakes" or "Accelerators" if at some point they aren't helping you anymore. Since your active learning cycle will be very full at some point, you might have to remove post-its someday. When you remove something is totally up to you, but from my experience it's best to only remove items once they've become second nature to the team.

Wednesday, July 3, 2013

Why is the 4 week sprint still the literary default?

I’ve been wondering for a long time why the 4-week sprint still seems to be the default in Scrum literature. Even the State of Scrum Report states that a 38% majority uses 2-week sprints while 29% use 3-4 week sprints (page 25). Given that 3 and 4 week sprints have been merged in the statistics, the actual percentage of teams using 4-week sprints must be even lower than 29%. Yet in the same report, insight #2 states that “a Sprint is one iteration of a month or less that is of consistent length throughout a development effort”, completely ignoring its own results (page 38). Also, why isn’t the book “Software in 30 days”, released in 2012, called “Software in 14 days”?

Part of Scrum, and agile in general, is to generate feedback as quickly and as often as possible; with 30-day sprints you leave a whole lot of time between two feedback cycles. In addition, 4 weeks is so much time that it’s really hard to look back on it when sitting in a retrospective. Can Scrum literature please inspect & adapt and use the 2-week sprint as the new default?

Friday, June 21, 2013

How to improve your retrospective

Marc Löffler did a session about "How to improve your retrospective" last weekend at agile coach camp 2013. The result was a list of possibilities for improving your retrospective and occasionally varying the usual format as described in "Agile Retrospectives".

Since the results speak for themselves, I will just post the photos of the results here:


Tuesday, June 18, 2013

Sprint Burndown Chart: Yes or no?

I don't believe in sprint burndown charts. So far, in every team I've been Scrum Master for, not one developer really wanted to update the burndown by himself, meaning I was either the burndown monkey or I found myself asking the team to update the burndown regularly. And since I don't see the real advantages of burndown charts, I struggle to explain to the team why it's important to maintain one. I guess that's what you call a chicken-and-egg problem.

The burndown chart is supposed to show the current status of the team and indicate whether the team is likely to get everything done in the sprint. But in my opinion you don't need a burndown for that, because all the information can be read from the sprintboard. Nowadays sprints are mostly 2 weeks long (I haven't heard of anyone using longer sprints in a long time), so it's relatively easy to keep an overview of that timespan. While I think a burndown can be useful with 4-week sprints, in sprints of up to 2 weeks it basically just duplicates information that is already on the sprintboard.

I've been trying to solve this dilemma for quite some time now but haven't found a real solution yet. First I tried to find the reasons for a burndown chart. Having found none that satisfied me, I thought: well, maybe there's another good, intuitive way to visualize the current status, one that would maybe integrate directly into the sprintboard. So far, I haven't found one.

At agile coach camp 2013 last weekend, I took the chance and conducted a session called "Burndown Chart 2.0", hoping to find this intuitive, sprintboard-integrated way.

Although we didn't find one, the session helped me a lot to remind myself what the burndown chart is about, whom it is for and whether it is useful. First of all, the burndown chart is a tool by the team for the team and no one else: no manager, no stakeholder. Second, its main purpose is transparency: transparency about where we are and about what has happened.

Deriving from the fact that it's a tool by the team for the team, the burndown chart should be used when the team wants to use it. If you feel that a burndown chart could help you in your current situation, then use it; otherwise there's probably no real reason to. Using a burndown chart is not essential.
Apart from that, it doesn't always have to be a chart. Depending on what you want to visualize, it can also be a traffic light (showing whether the team thinks the sprint will be successful) or any other visualization you can think of.

Monday, June 17, 2013

How to praise your devs

There was a quite controversial but very fruitful and inspiring conversation about "How to praise your devs" from the viewpoint of a Scrum Master. Here are the few notes I took for myself:

  • Sentences like "Thank you that you brought up that topic, it really helped the team move forward" basically are another way of saying "You did well"
  • "You did well" sentences are a judgement of the work the developer does and are almost always perceived as violent. Least of all is the Scrum Master the one to judge the work of a developer
  • When appreciating someone tell them how you feel (NVC) instead of judging the work they did
  • Try to use appreciation instead of praise
  • Base your feedback on your relationship with the other person
  • Align your appreciation with the purpose of the team. If the team does not have a purpose or doesn't know its purpose give the team exactly that
  • Developers do communicate and they communicate a lot. But they do it in their own ways and other people are sometimes seen as intruders

Solution Focus

Last weekend I attended agile coach camp 2013, and one of the sessions I went to was by Klaus Schenck about "Solution Focus".

Solution Focus is a coaching technique (and in more general a mindset) where you set your focus on solutions and solving problems when coaching and talking to people.

Solution Focus Matrix

One of the tools helpful for understanding Solution Focus is the SF Matrix. The vertical axis represents the degree of happiness, the horizontal axis time.

Solution Focus Matrix

This leads to 4 quadrants:
  • On the upper left are things from the past that we're feeling happy about
  • On the lower left are things from the past that we're feeling sad or angry about
  • On the upper right are things in the future that we're looking forward to full of expectation
  • On the lower right are things in the future that we're worried about
The goal of Solution Focus is to reach the upper right quadrant; we want to be optimistic about the future. In order to do that, we first need to identify which quadrant a given statement falls into. Some examples:
Before Scrum we had no problems. Now everything is bad!
The way management talked to us a few years back was absolutely horrible.
Hey, we want to start with Kanban. Could you help us please? 
As you may have realized while reading the examples, the real world isn't always as easy as a model, and words and sentences can easily fit into more than one quadrant. The whole point of Solution Focus is to lead the coachee/dialog partner to focus on an optimistic future, and we can achieve that by asking the right questions.

  • If you realize your coachee is happy about the past your questions should be about how the past experience could be repeated
  • If you realize your coachee is unhappy about the past your questions should be about how she survived and what she learned from that experience
  • If you realize your coachee is worried about the future your questions should be about what she would like instead or how she would want to do things differently
  • If you realize your coachee is already optimistic about the future your questions should make her want to kick off right away
Naturally, asking the right questions isn't that easy, and questions like "What did you learn from that experience?" can feel rather odd to the coachee (as they would to everyone). This means your focus is on getting that question answered, but not necessarily on asking it directly. One possible way is to ask about the feelings and go on from there. But that is only one possibility.

Solution Focus Scaling

A second tool for Solution Focus is a ladder / scaling technique:
Solution Focus Scaling
SF Scaling is used to show where you are right now. You could use a flipchart, draw a ladder and ask your coachee to mark where he thinks he stands. However, using a room (or something similar) as Klaus did with us in his session seemed more impressive to me. The length of the room represents the scale from 0 to 10.
  1. Ask your coachee to stand where she thinks she is with her problem right now, her sight towards 10.
  2. Ask her to turn around and ask how she feels (this will most likely be something like "I can see that I'm no longer at the beginning", "I'm farther than I thought", etc.)
  3. Ask her to stand at 10, sight towards 0 and ask how she feels now (answers will be most likely something like "this is really far away" or "I feel uncomfortable standing here"). 
  4. Ask her to stand where "good enough" would be and ask her how she feels
  5. Ask her to return to the original position and ask her how she feels now (the answer will probably include that she's seen the "perfect state" and that things look differently for her now)
  6. As a last thing ask her to take a step forward and then turn around. Ask her how it feels like to have taken a step forward.
When we did this exercise, Klaus asked us to pick something that we care about and want to improve for ourselves; I chose my rhetorical skills. Seeing where I am right now and where I can be with only one step forward inspired me to finally take that rhetoric workshop I've been wanting to do for 2 years now. Thank you, Klaus!

Friday, January 25, 2013

The evilness of bugtrackers

It is quite common for software projects to use bugtrackers. Personally, I don't really like them and usually advise teams to stop using them. Let me tell you why.

Back in 2008/2009 when I had my first experiences with Scrum and agile software development, I was taught to "fix bugs immediately", meaning: If a bug comes up, you fix it.

We adopted this point of view, and as soon as a bug was found we would write an index card for it and put it in the todo column of the lane of a story we were currently working on. No later than at the next standup, someone from the team took the bug and fixed it (admittedly this worked most of the time; occasionally a bug stayed in the todo lane for a day or two longer). We kept the bugtracker, all right, but the only reason we did so was to enable users to easily submit bugs and be informed when we had closed them. If a new bug came up in the bugtracker, we also wrote an index card and put it on the board with the bugtracker issue number as additional information.

As putting bugs in a lane they had nothing to do with was rather confusing, we introduced a bug lane at the top of our sprintboard, thereby also visualizing: bugs that occur have top priority! Basically this meant that every bug caused a disturbance.

What effects did all this have on us?
First off, let's face it: nobody really likes fixing bugs; it's annoying. The immediate disturbance added even more annoyance, meaning every time a bug came up, we were really annoyed. Annoyed with ourselves that the quality of our software was so low. To put a number on it: in 2008 we had an average of 25 bugs per month.

What happened to us was that we started looking for ways to produce less bugs. Obviously, we succeeded as we could cut the defect rate in half in the first year:
Average amount of bugs per month
All we did was implement one simple agile value: transparency.

Let's get back to bugtrackers.
Before Scrum we also used bugtrackers and kept collecting bugs (which I now believe is sometimes the most important use case of a bugtracker). Once in a while we would organize a "bug-fixing day" where we tried to fix as many bugs as possible in one day, only to realize at the end of the day that there were still a lot of bugs left; we felt like Sisyphus. Bug-fixing day soon became a dread, and we tried to do it as seldom as possible.

I have also seen projects where 150 bugs were accumulated over the course of three quarters of a year, only to spend a whole month at some point doing almost nothing but fixing bugs, and at the end still having 50 open bugs despite having fixed 200. If it hadn't been for the approaching release date, probably even more bugs would have been accumulated. The team did the opposite of implementing transparency by hiding the bugs.

In this context the "fridge effect" also applies: you have a vague idea of what's in your fridge, but as long as you don't open it, you don't know exactly what's in it. Moreover, you have to actively open it to see what's inside.

Situations like the two mentioned have a few impacts:
  • The developers are annoyed and demotivated and try everything to postpone the bugfixing as long as possible.
  • Any measured velocity of the team has no value at all, since there is a lot of work that has to be done some time later.
  • Any release planning has no value at all, since there is no actual velocity to begin with.
  • Any calculated or tracked development costs have no value at all, since there is a lot of work that has to be done some time later and there is no usable velocity.
  • Any calculated ROI has no value at all, since there are hidden costs some time later.
What you can calculate with stunning accuracy, though, is the average cost of a bugfix. Way to go!

Let's continue with response times.
Imagine you're spending your winter holidays in Prague and you're staying at a really nice hotel. Your room is nice and all and you feel comfortable, but unfortunately the heating isn't working and your room is as cold as it is outside. You decide to go to the check-in desk:
"Hi. The heating in my room is broken. Can you please fix it?"
"Of course. I will just write it down on our needs-to-be-repaired list and we'll get back to you asap! In the meantime, use this warm blanket as a workaround."
The next day nothing has changed, so you decide to cancel your stay at the hotel (after all, you paid a lot of money for it) and go to another hotel. 2 years later you get a call from that first hotel: "Hello Sir, we just wanted to inform you that we fixed the heating in that hotel room of yours".

Sounds silly? In software projects this actually happens all the time. I've had bugs filed for Mozilla Firefox that took 2 years until they even got any kind of response, and bugs for Netbeans (mostly for the PHP components) that took 3 months. Needless to say, I don't use either product anymore. Although it's not the only reason, all these cases made me feel like I wasn't taken seriously as a customer. And when that happens, I start looking for alternatives, which are usually easy to find. And since software always has users, there is always someone who wants to be taken seriously.

All in all, my experience taught me that bugtrackers support the laziness of developers and a tendency toward low-quality software, where customers are not taken seriously. These are the reasons why I like to advise teams to stop using them.

Update:
As Volker Dusch correctly stated, I left out the part about "don't spend time managing lists of bugs. Just fix them". That was intentional and is covered by the first link in "Additional reading".

Additional reading:

Sunday, December 2, 2012

A christmas retrospective

This is a quite nice end-of-the-year retrospective method I first read about last year on Boris Gloger's blog. Since the original post is in German and doesn't tell you how to prepare, here is a detailed preparation instruction and the how-to itself, both in English. One warning ahead: preparation for this one takes a lot of time. But it's totally worth it.

Time needed
Preparation: A few hours
Playing: 60 - 120 minutes, depending on your team size

Preparation
First of all, take a flipchart paper (or something of at least similar size; depending on your team size you might need a larger sheet, you will understand why later on) and draw a large Christmas tree. Draw it as large as you can and don't forget the star on top; after all, you want the tree to look nice:


Now draw some small candles and Christmas baubles in 2 colors. You need one candle per team member and, since every member will later write 2 baubles about each colleague, 2 baubles per member for every other team member. If you're lazy you can draw one candle and one bauble in each color and then copy them a few times (that's what I did ;-)



Once you have finished your candles and baubles, draw a present and a Christmas sock for every team member and write their names on them. In order to keep the presents and socks individual, you might want to not copy them ;-)


Cut out the candles, baubles, presents and socks. After that, create a matrix by putting the socks on the left of the tree and the presents under the tree:



The last part of the preparation is to buy some cookies for the retrospective itself :-)

How to play
Give everyone a candle. Set the timer to 5 minutes and ask "What motivates me to get up and go to work?". Everyone should write a few words answering the question on their candle (answers could be: I love our team, the current project is awesome, etc.). When the time is up, everyone in turn glues her/his candle onto the tree at the point where her/his sock row and present column cross and says a few words about it. A glue stick might come in handy ;-)

Give everyone 2 baubles for every other team member. Set the timer to 10 minutes and ask them to write something about every team member. On one bauble they should answer the question "What did I like most about working with you?" (an answer could be something like "I really liked pair programming with you"), on the other "What do I hope we'll be able to do together next year?" (an answer could be something like "I know you're really good at test-driven development, so I hope you can teach me how to do it, ideally while pair programming").

When everyone is done filling out the baubles, everyone in turn receives their presents. Let's assume you are the first to get yours: everyone in the team tells you what they wrote about you on their baubles and glues them at the corresponding positions in the tree matrix (there are 2 for every combination). Once you have received all your presents, say a few words about how you feel and continue with the next person in your team.

Once you're finished, your tree should look like this:

Friday, November 2, 2012

Definition of done traffic light

This is a quite nice retrospective method I learned from agile42. Prepare your team's definition of done by writing each point on a single line and put three columns beside every point. Paint the columns red, yellow and green. The colored columns stand for "We did that well" (green), "We did that OK" (yellow) and "We didn't do that very well" (red).

In the retrospective every team member puts a dot in the colored column expressing his or her view on how well the team applied this point of the DoD. The result looks like this:

Now you can pick the 3 points with the lowest overall rating, discuss them (20-minute timebox each) and develop optimisations for them.
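Picking the lowest-rated points can be sketched as a small tally. This is only an illustration with made-up DoD items and vote counts; the weights (green = 2, yellow = 1, red = 0) are my own choice, not part of the method as I learned it.

```python
# Hypothetical example: tally traffic-light dots per DoD point
# and return the n lowest-rated points. Weights are assumptions.
def lowest_rated(dod_votes, n=3):
    """dod_votes: list of (point, (green, yellow, red)) tuples."""
    def score(votes):
        green, yellow, red = votes
        return 2 * green + 1 * yellow + 0 * red
    return sorted(dod_votes, key=lambda item: score(item[1]))[:n]

votes = [
    ("Code reviewed", (5, 1, 0)),        # score 11
    ("Tests written", (1, 2, 3)),        # score 4
    ("Docs updated", (0, 2, 4)),         # score 2
    ("Deployed to staging", (2, 3, 1)),  # score 7
]
for point, _ in lowest_rated(votes):
    print(point)
```

With this data the discussion candidates would be "Docs updated", "Tests written" and "Deployed to staging", in that order.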

You can do the same for your working agreements.

Tuesday, August 28, 2012

The Definition of done workshop

Recently one of the teams I take care of as a Scrum Master needed a Definition of Done. Since I couldn't find anything on the internet about a corresponding workshop or a good method for creating a definition, I created a workshop based on the "Challenge Cards" method from the book "Gamestorming" (page 158). Here's how you do it:

Time needed:
About 30 - 60 minutes

Preparation:
Prepare green and red post-its and a blank flip chart (or similar).

How to play:
Divide the team into two groups. One group is the problem group; they get the red post-its. The other group is the DoD group; they get the green post-its. Now give both groups 10 minutes.

Ask the problem group to think of common problems the team encounters during development which could be fixed by a definition of done. Examples would be "a lot of bugs", "released version doesn't work" or "duplicate data appears". Important: point out that it's not about problems the team can't be held responsible for (a kernel panic on a server, for instance), but about problems that occur due to errors made during development.

Ask the DoD group to come up with possible items for a definition of done. This can be anything they can think of.

After the 10 minutes, ask the problem group to post their first problem on the left side of the flip chart. Now ask the DoD group to post one or more items that can solve this problem on the right side of the flip chart. Let them draw an arrow going from the DoD item(s) to the problem post-it. Notice that there can be more than one DoD item solving a problem and that one DoD item can solve multiple problems. It is OK and desirable to draw arrows from already posted DoD items to a new problem.
Continue until all the problems have been posted.

Now there are 3 possible outcomes:

You still have DoD items left.
Ask the DoD group to post them on the flip chart. After that, ask the whole team if they can think of more items they would like to include on the definition of done.

You still have problems that don't have a possible solution.
Ask the whole team if they can think of anything solving it. After that, ask the whole team if they can think of more items they would like to include on the definition of done.

Every problem has at least one possible solution.
Ask the whole team if they can think of more items they would like to include on the definition of done.

As a last step, go over all the DoD items that have been posted on the flip chart during the workshop and ask the team for each item whether it makes sense on a definition of done; if they agree, the item lands on the definition of done.


Example:
Our result looked like this:

Tuesday, August 14, 2012

A different kind of retrospective: Story workshop

Yesterday I tried a different kind of retrospective: the story workshop. Instead of writing post-its answering "What was good?"/"What went wrong?" or Mad/Sad/Glad, you ask the team to do the following:


"Write a short story, a movie, a play or a comic about the last sprint"

The feedback from the attendees was very positive, the main statement being that you get a quite different perspective on the past sprint when you try to draw it (everyone in the retrospective drew a comic).

One of the comics looked like this:

Saturday, June 9, 2012

Different estimation methods compared

A while ago I conducted a workshop about agile estimation. In this workshop I showed the participants three (of many) different estimation methods: Planning Poker, Magic Estimation and the Team Estimation Game.


Planning Poker [Interaction level: high]
Probably the classic estimation method. The product owner reads a user story and the team estimates using the planning poker cards. If the estimates differ, the team starts a short discussion about everyone's view of the story at hand. If after about 3 estimation rounds there is still no consensus, the story is obviously not clear enough and everyone tries to gather new information until the next estimation session. Planning Poker usually uses story points, although I've already seen cards with animals on them (quite a nice idea actually). The estimation session ends when either all the stories are estimated or time is up.
At the workshop I let the participants estimate what it takes to clean a kitchen. During estimation the team realized that stories #1 and #4 were too large and split them into 2 and 3 stories respectively.


Magic Estimation [Interaction level: low - medium]
Probably the second classic estimation method. Prepare a flipchart (or similar) on which you draw an arrow going up (symbolizing increasing complexity) and write your references beside the arrow, be it user stories, T-shirt sizes or whatever you feel most comfortable with. The product owner reads a user story, writes it on a post-it and puts the post-it on the flipchart. This procedure is repeated for about 5 stories. The team then gets 2 minutes to stand in front of the flipchart and silently move the post-its to where they think they belong. If during the 2 minutes you notice that a story is wandering around a lot, you might want to talk about some details once the 2 minutes are over. The whole procedure is repeated until there are no stories left or the time is up.
At the workshop the result looked like this:


Team Estimation Game [Interaction level: low]
An estimation method I found a while ago here (the page is in German) and used at the workshop myself for the first time. This method is quite similar to Magic Estimation; as with Magic Estimation you need a prepared flipchart. The product owner brings all the stories on index cards to the estimation and lays them face down on a table. Someone from the team (whoever wants to go first) takes the first story card, reads it out loud and then puts it on the flipchart wherever he/she thinks it belongs. The second person pulling a card can put it beside, above or below the first story. From the third story on, every team member has 3 possibilities for his or her move:

  • Read the next story card and put it on the flipchart
  • Reassign a story already on the flipchart, explaining why
  • Skip
When reassigning a story, you might also mark it with a dot. The more dots a story gets, the more opinions about it differ, and you should discuss it in detail. As with the two other estimation methods, the Team Estimation Game ends when there are no more stories left or when time is up.
At the workshop the result of the team estimation game looked similar to magic estimation:


Conclusion:
In my time as a developer I only used planning poker (apart from workshops and similar events I attended). After having conducted this workshop, I still prefer planning poker over magic estimation and team estimation, since the interaction with planning poker is way higher than with the other two (which was also the conclusion of the workshop participants). That doesn't render magic estimation and team estimation totally useless; there are still valid use cases for them. They are actually quite useful when you need a lot of stories estimated in a short time, e.g. at the beginning of a project. Other than that, my favorite remains planning poker.

Friday, May 4, 2012

Histogram: Team communication

Based on the satisfaction histogram from "Agile Retrospectives - Making Good Teams Great" (pp. 64-67), I did a communication satisfaction histogram in our last retrospective.
To measure how the team feels about the communication within the team, each member could choose from 5 states:

"The communication in our team is ..."
  • 5: ... outstanding
  • 4: ... mostly very good
  • 3: ... satisfactory, but needs to be improved
  • 2: ... bad and needs to be improved badly
  • 1: ... a disaster
Team members vote anonymously; for scoring you can use dots:

I also left some space for further comments (written on post-its and put on the flipchart).
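For anyone who wants to tally the dots digitally afterwards, here is a tiny sketch of turning the anonymous votes into a text histogram. The vote data is made up; the five states correspond to the scale above.

```python
# Illustrative only: render anonymous 1-5 votes as a dot histogram.
from collections import Counter

def histogram(votes):
    """votes: list of integers from 1 (a disaster) to 5 (outstanding)."""
    counts = Counter(votes)
    # List the best state first, one row of dots per state.
    return "\n".join(
        f"{state}: " + "o" * counts[state] for state in range(5, 0, -1)
    )

print(histogram([4, 3, 4, 5, 3, 3, 2]))
```

With this sample data the output has one dot for state 5, two for state 4, three for state 3, one for state 2 and none for state 1, which makes the team's center of gravity visible at a glance.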