Monday, July 25, 2016

The downside of frequent releases

Nowadays we tell people to release often, to do continuous delivery or continuous deployment. Facebook, for example, does a release at least once a day (for a presentation about the Facebook release process, see here), Netflix at least once a week. The advantages are well known and obvious, one of the most famous being reduced risk. But there are also disadvantages, which are rarely talked about. Since continuous delivery and deployment mostly focus on the advantages for us, the developers, we tend to forget to check whether customers actually like it. And I have to say: as a customer, I don't always like it.
I don't mind CD (continuous delivery/deployment) on websites like Facebook or Netflix, because as a customer I mostly don't even realize that there has been a release. But it drives me crazy when it comes to apps on my mobile devices. I know that I can activate automatic updates, but I don't, mostly for security reasons. I have often removed apps because at some point they requested additional permissions I did not agree with (example: an app that suddenly wants access to my contacts but is not contacts-related at all).

Since I had a constant feeling of "update spam" regarding my apps, I decided to track the data for almost 2 months (March 16 - May 7). I did so for my 2 mobile devices, an iPhone 5 running iOS 9 and a Nexus 7 tablet running Android 6, writing down the number of updates, the installed apps and the OS version, usually every two days (I did not always manage to keep exactly to that interval). The Nexus 7 had 94 apps installed the whole time; the iPhone 5 had 57, with 58 for a few days. These are the results:

  • On average there were 2 app updates on iOS and 3 on Android per day
  • Relative to the number of installed apps, that is roughly 3% of the apps updated per day
  • In total there were 104 app updates on iOS and 164 on Android
  • Over the whole period that comes out to almost 2 updates per app in 2 months
You can find the spreadsheet with the full collected data here.
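
For reference, the arithmetic behind the bullets above boils down to a few lines. Here is a minimal Python sketch; the totals and app counts are the ones from my spreadsheet, and the tracking period (March 16 - May 7) spans roughly 53 days:

```python
# Reconstructing the averages from the totals in my tracking spreadsheet.
TRACKING_DAYS = 53  # March 16 - May 7

devices = {
    "iPhone 5 (iOS 9)":    {"updates": 104, "apps": 57},
    "Nexus 7 (Android 6)": {"updates": 164, "apps": 94},
}

for name, d in devices.items():
    per_day = d["updates"] / TRACKING_DAYS   # average updates per day
    share = per_day / d["apps"] * 100        # share of installed apps updated per day
    per_app = d["updates"] / d["apps"]       # updates per app over the whole period
    print(f"{name}: {per_day:.1f} updates/day, "
          f"{share:.1f}% of installed apps updated per day, "
          f"{per_app:.1f} updates per app in ~2 months")
```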

You could argue that 2 updates per app in 2 months is not that much, and you would be right. I have not collected data on this, but the updates are not spread evenly across all apps: some of my installed apps were last updated in December 2015 or even March 2015.

I have a similar experience with the program FileZilla. I use it once in a while (maybe once every 1-2 weeks) and almost every time I open it, it asks me whether I wish to install the latest update.

For me, as a customer, 2-3 visible(!) updates per day are really unbelievably annoying. Therefore my advice would be: if you plan on releasing often, also take into consideration what kind of product you have. Last but not least, try to make your updates as silent as possible.

Wednesday, April 27, 2016

The effects of team size

When I started at Infineon back in May 2015, one of my teams consisted of roughly 8 developers - a team size which in my opinion is already too large. The Scrum Guide proposes the following:

Fewer than three Development Team members decrease interaction and results in smaller productivity gains.  Having more than nine members requires too much coordination.

My own experience is that starting from around 6 people you already have too much coordination overhead. People in the company were complaining that the output of my team was low. One of my suggestions was clear: split the team - a suggestion my predecessor had also already pushed for a year before I started. It took me around half a year to finally get the team to split into two teams of 4. This is what happened to the velocity (the graph shows roughly one year):


In the retrospective directly following the team split (Sprint 100), as well as in later sprints, I asked the team why they think the velocity went up by almost 100% and how they felt about the split. This is what they replied:

  • Higher identification with stories due to smaller teams
  • More focused daily
  • More pair programming
  • No more "unpopular" stories
  • Less time wasted (e.g. in meetings you cannot really contribute to)
Now, I'm certain this will not happen every time. It is useful to know that in my case the stories in the initial team were quite diverse and spanned several products that have to be integrated into the SDK the team is building.

Wednesday, March 30, 2016

Release planning using the Agile Strategy Map

The situation

One of my teams provides the SDK for a range of hardware products, of which some are still in development. The hardware product development is working with milestones, at which certain new features of the product are available or changes to the hardware layout are made. The SDK is expected to support these features and layout changes with the release of the milestone. Although many features are known upfront, requirements do change in the course of the milestone completion. Example: The hardware layout has a flaw and needs to be changed.

The SDK gets a major version update (e.g. from 2.5.0 to 2.6.0) with each milestone release. The team was struggling to plan these releases. The scope of the features was unclear. Work was started on features that were not needed for the release, only to be stopped one sprint later, because a major feature request had been forgotten until the last sprint before the milestone release. The team has 2 product owners, of which one is more of a consulting product owner, and at the moment one main stakeholder (apart from many others).


The approach

My approach was to use a hierarchical backlog to plan the contents of a release. There are quite a few ways to achieve that, like User Story Maps, Impact Mapping or, my preferred approach in this case, the Agile Strategy Map. I helped the product owners and the main stakeholder create the map using sticky notes of different sizes and colors. Our Agile Strategy Map consists of the following elements:

  • Release goal (large sticky note): The goal of the release in one sentence
  • Critical Success Factor (red sticky notes): A CSF is a feature/item/etc. that has to be done in order to reach the release goal
  • Possible Success Factor (yellow sticky notes): A PSF is a nice-to-have. It does not have to be done in order to reach the release goal
  • Necessary Condition (orange sticky notes): A sub-work-item of a CSF/PSF. In order to complete a CSF/PSF, every one of its NCs has to be completed.
Our first map draft looked like this:
The Agile Strategy Map at the start of the release planning

In order to create the map, we took the following steps in a series of sessions:

Collect contents for the release
Everyone writes down what they think and/or know has to be in the release

Weight contents using Business Value Poker
Similar to Planning Poker, everyone estimates the business value of the collected contents. A later step will nicely show that high business value does not automatically mean high priority.

Define the release goal
Once the contents and the business values are clear, define the release goal: one sentence that describes what will be achieved with this release.

Identify CSF and PSF
Once the release goal is clear, identify the critical items that have to be done in order to reach the release goal. Everything else will be a PSF.

Prioritize
Prioritize the CSFs and PSFs based on the available information, such as business value and the CSF/PSF classification. We only used small sticky notes with the priorities on them combined with a discussion, but any other means of prioritization such as dot voting, "buy a feature" or the like is fine as well. This will show that high business value does not automatically mean high priority. In the picture above you can see that the item with the highest business value (3000) only got priority 5.

Identify the NCs
Identify all the requirements and work items a CSF or PSF has. You can group NCs below another NC.
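
To make the resulting hierarchy concrete, here is a small, purely illustrative Python sketch. The release goal, item names and most numbers are invented, not our real map (only the "business value 3000, priority 5" constellation mirrors the example mentioned above); it models the goal → CSF/PSF → NC tree and pulls out everything above a priority cut-off:

```python
from dataclasses import dataclass, field

# Hypothetical, simplified model of an Agile Strategy Map.
# Item names, business values, priorities and NCs are invented for illustration.

@dataclass
class NC:                      # Necessary Condition
    name: str

@dataclass
class SuccessFactor:           # CSF (critical=True) or PSF (critical=False)
    name: str
    critical: bool
    business_value: int        # from Business Value Poker
    priority: int              # 1 = highest
    ncs: list = field(default_factory=list)

release_goal = "SDK 2.6.0 supports all features of the next hardware milestone"

items = [
    SuccessFactor("New sensor API", True, 1000, 1,
                  [NC("Driver update"), NC("API documentation")]),
    SuccessFactor("Hardware layout change support", True, 500, 2),
    SuccessFactor("Sample applications", False, 3000, 5),
    SuccessFactor("Extended logging", False, 200, 8),
]

def in_scope(items, priority_cutoff):
    """Everything at or above the priority cut-off is planned for the release."""
    return [sf for sf in items if sf.priority <= priority_cutoff]

print(release_goal)
for sf in in_scope(items, priority_cutoff=5):
    kind = "CSF" if sf.critical else "PSF"
    print(f"  [{kind}] prio {sf.priority}, business value {sf.business_value}: "
          f"{sf.name} ({len(sf.ncs)} NCs)")
```

The point of the hierarchy shows in in_scope: dropping a single NC or a low-priority PSF removes only a part of the tree, not the whole feature.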


The result

The Agile Strategy Map near the end of the release

Creating the Agile Strategy Map for release planning had several benefits. First off, since we put the map up on a whiteboard accessible to everyone, everyone including stakeholders could see what we were up to. In this case it actually led to a stakeholder causing a priority shift, since she saw that an item that was a very important customer request only had priority 8. It also helped us to remove certain subparts (NCs) of a feature (CSF/PSF) we thought had to be in the release, the most common reason being that we didn't have enough time to complete all the NCs for a feature and they weren't crucial for the release anyhow (which we didn't know beforehand). Having a hierarchical backlog made it easy to remove only certain parts of a feature instead of dropping the whole feature. Last but not least, we could track the throughput of CSFs, PSFs and NCs, and because of that we were able to make pretty good predictions about how many items would fit into the next release (example: for the next release we only prioritized down to priority 10, knowing that anything below would most likely not make it anyway).
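
As a rough illustration of that last point (the throughput numbers here are invented, not our real data): once you track how many CSFs/PSFs get done per sprint, a priority cut-off for the next release falls out of a few lines.

```python
# Hypothetical throughput history: completed CSFs/PSFs per sprint (invented numbers).
completed_per_sprint = [2, 3, 2, 2, 3]
sprints_until_milestone = 4

avg_throughput = sum(completed_per_sprint) / len(completed_per_sprint)
forecast = avg_throughput * sprints_until_milestone

print(f"Average throughput: {avg_throughput:.1f} items per sprint")
print(f"Expected to fit into the next release: ~{forecast:.0f} items")
# With roughly 10 items expected to fit, prioritizing only down to
# priority 10 is a reasonable cut-off.
```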


P.S. One side note: there is no rule for how many user stories come out of a CSF/PSF or an NC. In our case we had PSFs that were only one user story and NCs that turned into 3 user stories.

Tuesday, July 14, 2015

User Story Taboo

For my agile workshops I created a little game called "User Story Taboo" which I'm using to teach attendees how to write user stories. Here's the game description.


Participants

4 - 16

Duration

  • about 60 - 90 minutes (normal version)
  • about 90 - 120 minutes (extended version)

Goal and description

The goal of the game is to make participants understand how they can write good user stories.

User Story Taboo is based on the board game "Tabu" where teams have to describe words and terms without using certain forbidden words.

The participants will write user stories, but they will be forbidden to use certain words. This is to show participants that no one needs user stories like "As a server I want to...". They shall learn the benefits of defining not only the functionality but also who it is for and how he/she will benefit from it.

We do this by writing stories for an extended version of Battleship. The rules of Battleship are well known, so we can concentrate on writing the stories instead of discussing the rules of the game.

Forbidden words

User
better
faster
end-user
operator
client
customer
server
to look at
work
company
stakeholder
program
system
product owner
game producer
easy

Preparation

  • Print requirements
  • Write forbidden words on flipchart or print them

Setup

  • Divide participants into groups of max. 4 people
  • Give each group a part of the requirements. Ideally each group gets the same number of requirements

Game

Round 1: Write user stories

Explain to the participants that good user stories describe who wants to have something, what he/she wants and why he/she wants it. In my experience providing the common template "As <role> I want to have <functionality> so that I have <value>" helps the participants in writing their first user stories. Show and explain the forbidden words.

Now let the participants write their user stories. Sometimes this takes 2-3 iterations until the stories meet all the requirements (who, what, why) and omit all the forbidden words.

Round 2: Acceptance criteria

Explain acceptance criteria.

Let groups select 2 user stories from round 1 and have them write acceptance criteria for these stories (encourage participants to be creative here :-))

Extended Version: Groups don't select the stories but each group writes the acceptance criteria for all their stories from round 1.

Round 3: Identify epics

Explain epics and the lifecycle of a requirement (Requirement -> Epic -> User Story -> Task, etc.). The rules from the ebook "5 Rules for Writing Effective User Stories" are a good basis and orientation.

Let participants take a look at the written stories and make them identify epics and user stories.


Extended version

Round 4: Split stories or epics

Explain how to identify stories that are too large (e.g. by words like "and", "or", "but").

Let groups select 2 user stories or epics and make them split these stories/epics.

Backlog-Management

Show and explain to the participants different possibilities for managing the backlog.


You can also find all the materials in German and English on GitHub.

Thursday, June 25, 2015

Why the Shenmue 3 kickstarter campaign is a smart move

About a week ago, during Sony's press conference at E3, the Kickstarter campaign for Shenmue 3 was revealed. There has been a lot of criticism ever since, mostly stating that this is the new "pre-order bonus" in the gaming industry. Being a gamer myself, I am not really fond of all the pre-order bonuses, nor of the DLC strategies applied to many games. Let me tell you why I think the Shenmue 3 Kickstarter campaign was actually quite a smart move.

The risk of Shenmue 3

Despite high critical acclaim, Shenmue 1 and 2 were not too successful in terms of sales according to various statistics (see here, here and here). The first installment is even considered a commercial failure and had a total cost of about 70 million US$. Remember that this was back in 1999. Final Fantasy XIII, which came out 10 years later, had a total cost of about 65 million US$. Furthermore, it is now already 14 years since Shenmue 2 came out for the Dreamcast and Xbox. In the meantime we've seen the rise of Call of Duty, Grand Theft Auto III, IV and V, Dota 2, League of Legends, World of Warcraft and Final Fantasy XI - XIV. In other words, there are a lot of new games and game series competing for gamers' time. These factors add up to a high risk for a possible third installment of the series. Former Sega producer Stephen Forst confirmed this in a few tweets from 2013, stating that the risk was too high and that the brand awareness is probably rather low.

The Kickstarter campaign as MVP

So when they (meaning Sony and Suzuki Yu) started the Kickstarter campaign, they actually created a minimum viable product. Unlike a simple community vote, it found actual customers who are willing to pay money in order to be able to play Shenmue 3. For Sony, who were in from the start (although stating otherwise in the beginning), the successful funding of the 2 million US$ Kickstarter campaign within the first 2 days was reason enough to count the MVP as successful and to officially support the game as publisher, partner and stakeholder. Given the "official" definition of an MVP, they have done everything right:
Once the MVP is established, a startup can work on tuning the engine. This will involve measurement and learning and must include actionable metrics that can demonstrate cause and effect.

The pros for the gamers

Even though there are voices saying that the campaign is just another kind of pre-order madness in the gaming industry, I think this is not the case. First, unlike when pre-ordering a game, I don't have to pay the full price. The lowest pledge that contains a digital copy of the game is $29:


Furthermore, not only will gamers get frequent updates on the game's progress, but even from the lowest pledge on, supporters already have the possibility to influence the direction of the game:


Conclusion

I understand that gamers are fed up with all the pre-order madness and the DLC strategies of today's publishers. Nevertheless, I think the Kickstarter campaign is a good thing. For Sony it is definitive proof that, despite the risks, there seem to be enough gamers willing to pay. For us gamers it is a possibility to get a game we want and even influence parts of it.

Friday, May 29, 2015

How to measure productivity

A few days ago I was having a discussion on how to measure productivity (I will not elaborate on whether you should measure it at all; that is for another blog post). We came up with a few metrics that might be useful indicators.

Hit rate

That is the actual story points achieved in the sprint divided by the committed/forecasted story points.
Example: You committed 52 story points, but you only achieved 37. Your hit rate is 71%.

Bugs per sprint + their average lead time

Track how many bugs are opened for your team/product per sprint. The idea is: the fewer bugs arise, the higher the quality of your software and the less you are sidetracked by bugfixing. Naturally this indicator works best if you fix bugs right away and don't collect them in a bug tracker.
Also: track the average lead time it takes to fix bugs. The less time it consumes, the less you are sidetracked. Try adding a policy for this (for example: "We try to fix every bug within 24 hours").
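
A minimal Python sketch of how these two metrics could be computed; the story points are taken from the hit rate example above, while the bug timestamps are invented for illustration.

```python
from datetime import datetime

# Hit rate: achieved story points divided by committed/forecasted story points.
committed, achieved = 52, 37
print(f"Hit rate: {achieved / committed:.0%}")  # -> 71%

# Average bug lead time: time from opening a bug until it is fixed.
# The timestamps below are invented for illustration.
bugs = [
    (datetime(2015, 5, 4, 9, 0),  datetime(2015, 5, 4, 17, 30)),
    (datetime(2015, 5, 6, 10, 0), datetime(2015, 5, 7, 9, 0)),
    (datetime(2015, 5, 8, 14, 0), datetime(2015, 5, 8, 16, 0)),
]
lead_times_h = [(closed - opened).total_seconds() / 3600 for opened, closed in bugs]
print(f"Bugs this sprint: {len(bugs)}, "
      f"average lead time: {sum(lead_times_h) / len(lead_times_h):.1f} h")
```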

Impediments per sprint + their average lead time

Track how many impediments arise per sprint and their average lead time. The fewer things there are that impede you, the more productive you should be. Also: the faster you can remove these impediments, the higher your productivity should be.

Amount of overtime / crunch time

We were a bit unsure about this one. How much does it really say about productivity? In my opinion you should only do overtime in absolutely exceptional situations; if you need overtime (or crunch time), there is something fundamentally flawed in the way you do your work. My theory is that overtime happens when planning goes badly. This can be either because you (constantly) plan too much and/or because there are way too few people for the work you want done. If you want to track this, make sure that people are able to report their overtime hours anonymously.

Reuse rate

One thing we were completely unsure about is the reuse rate (how much of your code is getting reused?). The idea was that the less you reinvent the wheel over and over again, the more productive you should be. But how do you track this? The only thing we came up with was to run copy-and-paste/duplicate-code detection. Is this a valid metric in this case? What if you have multiple projects? If you have any ideas for this one, please let me know in the comments.

Don't: Velocity

Don't use your velocity as an indicator for productivity. First, it is very easy to manipulate, and chances are about 99% that it will be. Second, every team has its own velocity, meaning it carries no comparable information about productivity across teams.


So far we haven't tested these metrics as indicators for productivity. If we start to do so, I will gladly let you know about the outcomes. If you have any more ideas on how to measure productivity, please let me know in the comments below.

Thursday, July 18, 2013

Active learning cycle

Many teams seem to struggle with keeping track of their improvements from the retrospective. One really useful tool for that is the active learning cycle.

Take a sheet of flipchart paper and divide it into 4 areas: Keep, Try, Brakes and Accelerators. The most common form looks like this, but you can always use a different form if it suits you better:
Active Learning Cycle
At the end of the retrospective you put the actions/improvements you decided on into "Try". Those are things that you want to try out. Afterwards, remember to put the active learning cycle in a place where everybody can see it; near the team board would be a good place.

At the latest in the next retrospective, you use the active learning cycle to decide what you want to do with the actions that are on it.

  • Did you like it and you want to continue doing it? Put it in "Keep" and keep on doing it
  • Did you think it rather impeded you and you want to stop doing it? Put it in "Brakes". This could be things like "Standup at 2pm", "Digital team board", etc. And, more importantly: stop doing it ;-)
  • Was it something that helped you but which is nothing you can really keep on doing all the time? Put it in Accelerators. This could be things like "2-day team offsite" (It was an accelerator for the team, but you can't do a 2-day offsite every week).
You don't have to wait, though; the active learning cycle is supposed to be a "living" artifact, so you can always move post-its around when you feel it's time to do so. Of course you can also move things from "Keep" to "Brakes" or "Accelerators" if at some point they aren't helping you anymore. Since your active learning cycle will be very full at some point, you might have to remove post-its someday. When you remove something is totally up to you, but from my experience it's best to only remove post-its once they've become second nature to the team.