Bloatware is a law of nature. Understanding it can help you avoid it

Today software can be churned out at impressive speed, but few stop to ask whether all the features being built were really necessary in the first place. Lean Startup, Agile, DevOps, automated testing and similar practices have made it possible to develop quality software remarkably fast. But are all the features they produce really used by real users, or were they just clever ideas and suggestions? Not much research exists, but the Standish Group's CHAOS Manifesto from 2013 has an interesting observation on the point.

“Our analysis suggests that 20% of features are used often and 50% of features are hardly ever or never used. The gray area is about 30%, where features and functions get used sometimes or infrequently. The task of requirements gathering, selecting, and implementing is the most difficult in developing custom applications. In summary, there is no doubt that focusing on the 20% of the features that give you 80% of the value will maximize the investment in software development and improve overall user satisfaction. After all, there is never enough time or money to do everything. The natural expectation is for executives and stakeholders to want it all and want it all now. Therefore, reducing scope and not doing 100% of the features and functions is not only a valid strategy, but a prudent one.”

CHAOS Manifesto 2013 

So 20% of the features are the ones most often used. It looks like the Pareto principle is at work here. The Pareto principle states that 80% of the effect comes from 20% of the causes. Many things have been described with it, from the size of cities to wealth distribution to word frequencies in languages. A whole industry has even grown up around it, based on the bestselling book “The 80/20 Principle: The Secret of Achieving More with Less” by Richard Koch. Other titles expand on the theme: “The 80/20 Manager”, “80/20 Sales and Marketing” and “The 80/20 Diet”.

This could seem a bit superficial, and you would be forgiven for wondering whether there is any reality to the 80/20 distribution at all. It could just as well be a figment of our imagination, an effect of confirmation bias: we only look for confirming evidence. Nevertheless, there seems to be solid scientific ground when you dig a bit deeper.

The basis for the 80/20 principle
The Pareto principle is a specific formulation of Zipf's law. George Kingsley Zipf (1902-1950) was an American linguist who noticed a regularity in the distribution of words in a language. Looking at a corpus of English text, he observed that the frequency of a word is inversely proportional to its rank order. In English the word “the” is the most frequent and thus has rank order 1; it accounts for around 7% of all words. “Of” is the second most frequent word and accounts for about 3.5%. If you plot a graph with rank order on the x-axis and frequency on the y-axis, you get the familiar long-tail distribution that Chris Anderson has popularised.
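If you want to see the rank-frequency rule in action, here is a minimal Python sketch; the 7% figure for “the” is just the rough number quoted above, and everything else follows from dividing it by the rank:

```python
# A minimal sketch of Zipf's rank-frequency rule: frequency ~ f(1) / rank.
# The 7% share for "the" is the rough figure quoted above, not a measured value.

top_frequency = 0.07  # share of all words taken by the rank-1 word, "the"

for rank in range(1, 6):
    predicted_share = top_frequency / rank
    print(f"rank {rank}: ~{predicted_share:.1%} of all words")

# Rank 1 gives ~7.0% ("the") and rank 2 gives ~3.5% ("of"),
# matching the proportions described above.
```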

One thing to notice at this point is that the 80/20 split is relatively arbitrary. It might as well be 95/10 or 70/15. What is important is the observation that a disproportionately large effect comes from a small number of causes.

While Chris Anderson's point was that the internet opened up business opportunities in the tail, that is, for products that sell relatively infrequently, the point for software development is the opposite: do as little as possible in the tail.

Optimizing product development
We can recast the problem by applying Zipf's law. Take your planned product and line up all the features you intend to build. If usage follows Zipf's law, the most frequently used feature will be used twice as much as the second most used, and three times as much as the third.
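As a rough illustration, assume a hypothetical backlog of 50 features whose use falls off exactly as 1/rank (real products will only approximate this). A few lines of Python show how lopsided the cumulative usage becomes:

```python
# A rough illustration, not a measurement: a hypothetical backlog of 50
# features whose usage falls off as 1/rank (a pure Zipf distribution).

n_features = 50
usage = [1 / rank for rank in range(1, n_features + 1)]
total = sum(usage)

top_fifth = usage[: n_features // 5]  # the 10 highest-ranked features
share = sum(top_fifth) / total

print(f"Top 20% of features account for ~{share:.0%} of all use")
# Under these assumptions the top 10 features carry roughly two thirds of all
# use. The exact split depends on the assumptions, but the skew is always pronounced.
```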

In principle you could save a huge part of your development effort if you were able to find the 20% of features that would be used the most by your customers. How would you do that? One way is the Lean Startup approach, which is reaching the mainstream. The idea is to build some minimal version of the intended feature set of your product, either by actually building it or by giving the semblance of it being there, and then monitoring whether that stimulates use by the intended users.

This is a solid and worthwhile first choice. There are, however, reasons why it is not always preferable. Even with a Lean Startup approach you have to do some work in order to test all the proposed features, and that amount of work need not be small. Remember that a Minimum Viable Product is only minimal with regard to the hypotheses about its viability; it is not necessarily a small job.

The Minimum Viable Product could be a huge effort in itself. Take for example the company Planet Labs: their MVP was a satellite! It is therefore worthwhile to consider, even before building your MVP, what exactly is minimal.

Ideally you want a list of the most important features to put into your MVP, so you do not waste effort on features that are not necessary for it. Typically a product manager, product owner or the CEO dictates what should go into the MVP. That is not necessarily the best way, since their views could be idiosyncratic.

A better way
A better way is to collect input on possible features from all relevant stakeholders. This will constitute your backlog. Make sure each feature is well enough described or illustrated to be used to elicit feedback. Here you have to consider your target group, the language they use and the mental models they operate with.

Once you have a gross list of proposed features, the next step is to find a suitable group of respondents to test whether these features really are good. This group should serve as a proxy for your users; if you are working with personas, find subjects that resemble your core personas. Then make a short description (or even an illustration) of each proposed feature and ask the subjects, in a survey or similar fashion: “If this feature were included in the product, how likely is it that you would use it, on a scale from 1 to 5?”

Once you have all the responses, calculate a score for every feature by adding up the ratings it got. Then you can follow Zipf's lead and rank the features from top to bottom. If you also calculate the total of all scores, you can find the top features: start with the highest scoring and continue until the cumulative score approaches 20% of the total. It is still a good idea to do a sanity check, so you don't forget the login function or similar (you can trust algorithms too much).
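Here is a minimal Python sketch of that scoring step. The feature names and ratings are made up purely for illustration, and the cutoff is the 20%-of-total-score rule suggested above; adjust it to taste:

```python
# A minimal sketch with made-up feature names and survey ratings on a 1-5 scale.
# Cutoff rule from the text: keep adding the highest-scoring features until
# their cumulative score approaches 20% of the total score.

ratings = {
    "search":        [5, 5, 4, 5, 4],
    "login":         [4, 5, 5, 4, 4],
    "export_to_pdf": [2, 3, 1, 2, 2],
    "dark_mode":     [3, 2, 2, 3, 1],
    "emoji_support": [1, 2, 1, 1, 2],
}

scores = {feature: sum(r) for feature, r in ratings.items()}
total = sum(scores.values())
cutoff = 0.20 * total

shortlist, cumulative = [], 0
for feature, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    if cumulative >= cutoff:
        break
    shortlist.append(feature)
    cumulative += score

print("Candidate MVP features:", shortlist)
# With a real backlog of dozens of features the shortlist would be longer.
# And remember the sanity check: if something essential (like login) falls
# outside the cutoff, add it back by hand rather than trusting the numbers blindly.
```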

What to do
Now that you have saved 80% of your development time and cost, you could use the freed-up effort to increase the quality of the software. You could work on technical debt to make it more robust while you wait for results.

You could also use this insight in your product intelligence and look at the top 20% most frequently used features of your product. Once you have identified them, optimize them so they work even better; that would be a shortcut to happier customers. You could optimize response times for these particular features so the most important ones work faster. You could improve their visibility in the user interface, so they are even easier to see and get to. Or you could use the insight in marketing, to help focus the positioning of your product and to communicate what it does best.

To sum up, product utilization seems to follow Zipf's law. Knowing the top 20% of features could help you focus development effort, but it could also help you focus marketing, user interface design and technical architecture.


References:

Richard Koch: “The 80/20 Principle: The Secret of Achieving More with Less”

Chris Anderson: “The Long Tail”

http://www.quora.com/What-is-the-deeper-physical-or-mathematical-logic-behind-the-pareto-principle-of-an-80-20-distribution

http://www.quora.com/Statistics-academic-discipline/What-is-an-intuitive-example-of-the-Pareto-Distribution

http://www.quora.com/Pareto-Principle/In-what-conditions-would-you-expect-a-power-law-distribution-curve-to-emerge

https://en.wikipedia.org/wiki/Feature_creep

https://en.wikipedia.org/wiki/Software_bloat

Photo by flickr user mahalie stackpole under CC license


Is the Apple Watch a telegraph?

The coming of the Apple Watch is the buzz of the moment. Apple is the champion of making things simpler, but have they gone too far with the Apple Watch and made it too simple?

One-click bonanza

The received wisdom in new product development is that you should take out steps and continually simplify the product. This is what Amazon did with One-Click and what Apple did with [insert your favorite Apple product here]. The reason is that it increases usability.

But sometimes simplification reaches a point where it doesn't improve usability any more. Any product has some measure of complexity. Complexity is conventionally conceived as the number of possible states the system can have. So, roughly, a measure of complexity is the number of variables a user can choose between and the number of states those variables can assume.

Some products are the antithesis of Amazon's One-Click. Microsoft's Office suite has heaps of functions that are never used. Other products, like SAP, have a lot of different screens with a lot of functions, which makes them difficult to use. But these functions are often there because some users actually need them, so for those users they are necessary. If you take away that functionality you make the user interface simpler, but the complexity of the task remains, and now, because of the too-simple interface, the task is even harder than before. This is what we could call residual complexity: the complexity of a task that is not supported by the tool.

Let me give you an example of high residual complexity. We bought a dishwasher called something with “one-touch” (perhaps inspired by Amazon?), and indeed there was only one button. On the face of it, good thinking: simplify to the core of the problem. What do I want to do with a dishwasher? Make it wash my dishes. That works very well under normal circumstances. That is, until I discovered, after it had been installed, that it just didn't work. Not much you can do with one button then. I called the store and they had me push some sequences on the button to run diagnostics. Suddenly I found that the dishwasher was stuck in Turkish, a language I am not intimately familiar with. What to do when you have only one button?

Finally it was back in the original language, a technician came on site to fix it, and it worked. Now we were happy, until the dishwasher finished its washing cycle. For some reason the product manager, or whoever was in charge, thought it would be nice if the dishwasher played “Ode an die Freude” from Beethoven's 9th symphony. I love that piece, and especially the ode, but not when it is played as a 15-second melody on a clunky 8-bit sound generator and repeated three times. Now I wanted to turn it off, but what to do with only one button?

One-click communication

To illustrate further, let's take the simplification to its extreme. Take a computer keyboard. It has about 50-60 keys, each of which can be up or down. That leaves us with a product with 100-120 different possible states (not counting combinations, since a keyboard registers only one stroke at a time). If you wanted to simplify this maximally, you could introduce a One-Click concept where the keyboard has only one key that can be up or down. We have just reduced the complexity of the user interface by a factor of 50-60; more popularly, we made it over fifty times simpler.
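As a quick back-of-the-envelope check, using nothing more than the rough key count above:

```python
# Back-of-the-envelope state count, using the rough figures from the text.
# Model: each key contributes two states (up or down), and combinations are
# ignored since the keyboard registers one stroke at a time.

keys = 55                          # roughly 50-60 keys on a typical keyboard
states_full_keyboard = keys * 2    # about 100-120 states
states_one_key = 1 * 2             # a single key, up or down

reduction = states_full_keyboard / states_one_key
print(f"{states_full_keyboard} states -> {states_one_key} states, "
      f"roughly a {reduction:.0f}x reduction in interface complexity")
# The task (communicating text) has not become any simpler, though; that
# complexity is left over for the user as residual complexity.
```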

That, however, was done well over a century ago. It's called a telegraph. The telegraph clearly illustrates the problem of residual complexity. To carry out the necessary task (communication) with only one button, the complexity shifts from the user interface to the task itself: you need to learn Morse code in order to use it!
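To see where the complexity ends up, here is a tiny Python sketch with only a handful of Morse letters (purely for illustration): the interface is a single key, but the code table the operator has to memorize is the real workload:

```python
# A tiny illustration of residual complexity: the telegraph's interface is a
# single key, but the user must internalize the code table. Only a handful of
# letters are included here, purely for illustration.

MORSE = {"E": ".", "H": "....", "L": ".-..", "O": "---", "S": "..."}

def to_morse(text: str) -> str:
    return " ".join(MORSE[letter] for letter in text.upper())

print(to_morse("hello"))  # .... . .-.. .-.. ---
print(to_morse("sos"))    # ... --- ...
```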

That means that when a task has inherent complexity, you cannot simplify the user interface beyond a certain point if the goal is to increase usability.

Residual complexity and the Apple Watch

Now let's return to the Apple Watch. Compared to an ordinary watch, the Apple Watch is not simpler. Quite the contrary. Compared to a smartphone, on the other hand, it is simpler, and many, including Apple, compare it to exactly that. You can do many of the same things on the Apple Watch as you can on the iPhone.

For example, you can read and reply to messages, but there is no keyboard. So if you want to reply, you have to choose a preconfigured reply or dictate one.

You can read an email, but if it is longer than a short message you will have to scroll incessantly. You can listen to music, but what if you want to search for a song? You can look at your calendar, but what if an entry is more than 15 characters or you want to move an appointment?

All of these are examples of residual complexity. Could it be that Apple made it too simple? Could it be that Apple just built a new telegraph for your iPhone?


Photo by Clif1066 @flickr under CC license