How to come up with a product that is truly unique

How do you come up with a product idea that the whole world is not already selling? This is an interesting question that I think every entrepreneur asks themselves regularly. I don’t have the answer, but I can tell you something about how to arrive at one.

Ban TechCrunch 
The first step is to stop reading start-up media. Any start-up media! That’s over, period. It just promotes groupthink and turns your attention to products and services that everybody is already building. This is why the world is flooded with instant messaging apps, photo apps and to-do lists.

Think of it as an entrepreneur information detox. You need to get it out of your system. If you absolutely need to read something, read something that nobody else reads. I can recommend Kafka’s short stories, Thomas Tranströmer’s poems or Mike Tyson’s biography.

If you have special knowledge…
Do you know something that most other people don’t? Have you worked in a niche? If so, think hard about how to leverage that knowledge for a product or service. Is there some frequent problem in this special area you know, preferably one that someone would pay to get rid of? If there is, you have your first lead.

If, for example, you work in a cinema, you may have noticed that it is a problem to clean chairs quickly enough between showings if somebody has spilled something. Maybe the solution is a special coating for the chairs, maybe a cover that can be changed.

A good example of a company that did this is Zendesk. Zendesk started from the founders’ work with customer support systems, which they found to be too complex and difficult to implement and use.

If you have no special knowledge…
If you don’t have any specialised knowledge, which is often the case if you are fresh out of school or have spent most of your youth playing FIFA, there are several options. Think about stuff that you absolutely wouldn’t like to work with. Stuff that would be really boring, disgusting or socially awkward. It should be something you would lie about on a first date.

Think along the lines of condoms for dogs, reading stories to senior citizens, avoiding sewage blockages or code review. Now come up with a product or service that would make one of these things easier.

“But why would I do something I don’t want to do?” you may ask. The thing is that your reluctance is usually a good indicator of what other people feel as well, and that is where the opportunity lies.

One of my favourites in this area is the company The Specialists, which employs people with autism to do tasks that others find tedious, like testing. What is incredibly boring or difficult for other people is something they like to do. Another example is Coloplast, which makes products for continence care. Essentially they just make plastic bags, but for a special purpose.

Go data-driven
Another option is to find a way to pick up on a demand that is currently not well served. It could be selling niche stuff on Amazon, which can be amazingly lucrative (see this thread on Quora). There are even tools for discovering such opportunities, like Jungle Scout. There are also more general SEO tools that can give you the same effect, like Moz.

Get out into the world…
Now that you have some vague directions, you have to go out into the world to find out how to build a business model around them. This takes research about the users and customers, but also about competitors and suppliers. Strategyzer’s Business Model Canvas is a good shorthand for figuring out what to think about and where to go.

Lean start up, MVP etc…
I’m not going to go into more detail about this here. A quick search will flood you with quality material on how to build a product from an initial idea and turn it into a success.

Building a Product Strategy for a Backend Product

When you learn and read about product management, you quickly learn how important it is to engage with your customers, be agile and run experiments. But when your product is a back-end system with no end users, only other applications, and it is considered key infrastructure that others depend on to work in a predictable way, it is not so easy to be agile, run A/B tests, do lean-startup-style experiments and carry out user testing.
This is a classic problem, and one very often ignored in the product management literature, which always seems to be about products with users you can sit down and talk to. There are, however, a few things you can do if you are the product manager of a back-end product and need to build a product strategy.


Align Strategy

It is necessary to sit down and look at all the consumers of your product. They are essentially your customers. That means identifying all other products that depend on, or will depend on, your product. Unfortunately, the product managers of those products don’t always have a strategy, so you may need to look at other artefacts like roadmaps, visions, even marketing material. It is also a good idea to talk to them to understand where they are moving. Here you actually don’t need to concern yourself with the end users.

Once this is done, find out what the strategy is for each of their products. Doing this may uncover contradictory demands. One product may want you to focus on microservices, another on batch deliveries, and another on a message-based architecture. Some may prefer REST/JSON services, others SOAP/XML, and others just FTP/CSV in a scheduled batch. Welcome to the world of agile development, where teams decide inside their own bubble what would be most agile for them.

Unfortunately it is your problem to reconcile these differences with the different consumers. In order to do this you need stakeholder management.

Manage Stakeholders

It is necessary for you to chart the different stakeholders, weigh their importance and do a typical stakeholder analysis, where you find out what their interests are and how you should communicate with them. Unfortunately, most product managers leave it at that and forget the art of stakeholder management. In the best case they will fill out a stakeholder analysis and store it on their hard drive, never to be opened again. But stakeholder management is more like politics. Watch Game of Thrones or House of Cards for inspiration.
You have to understand the different factions and their power base. Understand the different people and their cultures. You have to lobby ideas, be the diplomat, explain the positions of other stakeholders. Look at key persons’ social network profiles to find out what type they are, where they live, what they do in their spare time. Understand their concerns, apply pressure when needed and yield when necessary. Remember, politics is all about compromise. But you can only do that once you have a plan.

Draft a plan

All the input you have gathered from the above points now has to be integrated with your own knowledge about the product. What are the possibilities, the technical limitations, the technical debt? Given the status of your product, the possibilities and the available resources, you have to plan how it should change. Draft a plan under a few headlines. Focus, for example, on capabilities you would like to develop, data you want to capture or ways of working with consuming products. Settle on only a few key goals, but keep suggestions for more in reserve.

Reiterate

Now, start over again, because a product strategy, like any strategy, takes time, and you need to form a coalition behind it if it is to succeed. You are not finished until you have that coalition behind you. Only then will you have a proper product strategy.

Wyldstyle or Emmet? Lego lessons for product managers

This holiday season offered a chance for me to see The Lego Movie once again. Since I had seen it once already, my mind, no longer tied up with following the action and intricate plot, was free to see the deeper perspectives in the film and put them into a product management context.
At its core the movie is about two different ways of building with Lego. On the one hand we have Emmet, the super-ordinary construction worker, and his friends, who always build according to issued plans. On the other hand we have Wyldstyle and the master builders, who build innovative new creations from whatever is available.
The master builders are the renegades, “the cool kids”, those who fight the evil President Business. They are extremely creative and anarchistic. The prophecy of Vitruvius states that the chosen one, a master builder, will save the universe.
When Emmet becomes the chosen one, a certain friction arises, because he definitely does not have much in the way of creativity or innovation potential. But he redeems himself in the end, because he is able to make plans and get everyone to work as a team. He gets the master builders to work together to infiltrate the corporate offices.

Working as a team
So, what does this mean? We could generalise Lego building to any kind of building, and therefore also to building software. There are two modes of creation: the heroic genius way of the master builder, or the dull, plan-based way of the team. Just as in the movie, we in the tech industry celebrate the master builders: we cheer the work of the lone geniuses: Steve Wozniak, Linus Torvalds, Mark Zuckerberg and so on.
But as Walter Isaacson’s latest and highly recommendable book “The Innovators” shows, the geniuses NEVER made anything entirely by themselves. It was always part of some sort of team effort.
Further, every day the vast majority of software out there is built by lifeless ordinaries like Emmet, who are just following plans. Maybe it is time for their vindication, and time to take seriously that software development is a team effort. It is never the result of the mythical master builder, and there is no prophecy that a chosen one will save the universe. The ability to work together is just as important as being a genius.

Worth keeping in mind for the product manager
In practice there are three lessons we can learn from The Lego Movie:
1) Don’t frown upon a plan. Even if it might change along the way, a plan is not a bad thing in itself. Agile development, for example, is often pitted against plan-based development. There can be different kinds of plans, like roadmaps, specifications or project plans. Following your gut and just jumping from sprint to sprint entirely on inspiration and the spur of the moment will not suffice. It will, metaphorically, only let you charge towards the front door, while a plan may take you all the way to the top.
2) There is an I in team – it’s hidden right in the “A” hole. A team effort is a team effort, and if you can’t control your ego you are an A-hole. It is important to keep egos in check, because the power of a team will always be superior to that of any individual. Most people are not geniuses, but that doesn’t mean their effort is worth less. The entire team may lose motivation, and coordination will suffer, if egos prevail.
3) Master builders are great and necessary. It is from the individuals who dare to think differently that new impulses come. Prototypes, drafts and wild ideas are the domain of the master builder. He or she is not sufficient, but is a crucial source of innovation. It is therefore necessary to allow room for the innovators in a team: not so much that their egos take over, but enough that they don’t wither and die.
As a product manager or any type of manager it is therefore important to keep these three lessons in mind: have a plan, keep egos in check and give room for the innovators.

Bloatware is a law of nature. Understanding it can help you avoid it

Today software can be churned out at an impressive speed. Lean startup, agile, DevOps, automated testing and similar frameworks have made it possible to develop quality software very quickly. But few have stopped to ask whether all the features being built were really necessary in the first place. Are they actually used by real users, or were they just clever ideas and suggestions? Not much research exists, but the Standish Group’s CHAOS Manifesto from 2013 has an interesting observation on the point.

“Our analysis suggests that 20% of features are used often and 50% of features are hardly ever or never used. The gray area is about 30%, where features and functions get used sometimes or infrequently. The task of requirements gathering, selecting, and implementing is the most difficult in developing custom applications. In summary, there is no doubt that focusing on the 20% of the features that give you 80% of the value will maximize the investment in software development and improve overall user satisfaction. After all, there is never enough time or money to do everything. The natural expectation is for executives and stakeholders to want it all and want it all now. Therefore, reducing scope and not doing 100% of the features and functions is not only a valid strategy, but a prudent one.”

CHAOS Manifesto 2013 

20% of features are the most often used. It looks like the Pareto principle is at work here. The Pareto principle states that 80% of the effect comes from 20% of the causes. Many things have been described with it, from the size of cities to wealth distribution to word frequencies in languages. An industry has even grown up around it, based on the bestselling book “The 80/20 Principle: The Secret of Achieving More with Less” by Richard Koch. Other titles expand on the theme: “The 80/20 Manager”, “80/20 Sales and Marketing” and “The 80/20 Diet”.

This could seem a bit superficial, and you would be forgiven for wondering whether there really is any reality to the 80/20 distribution. It could just as well be a figment of our imagination, an effect of confirmation bias: we only look for confirming evidence. Nevertheless, there seems to be solid scientific ground when you dig a bit deeper.

The basis for the 80/20 principle
The Pareto principle is a specific formulation of a Zipf law. George Kingsley Zipf (1902–1950) was an American linguist who noticed a regularity in the distribution of words in a language. Looking at a corpus of English text, he noted that the frequency of a word is inversely proportional to its rank order. In English the word “the” is the most frequent and thus has rank order 1; it accounts for around 7% of all words. “Of” is the second most frequent word and accounts for 3.5%. If you plot a graph with the rank order on the x-axis and the frequency on the y-axis, you get the familiar long-tail distribution that Chris Anderson has popularised.
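To make the rank–frequency relationship concrete, here is a small sketch of the shares an idealized Zipf distribution predicts. The numbers are illustrative only, not measured from a real corpus:

```python
def zipf_shares(n_items: int) -> list[float]:
    """Predicted share of each rank 1..n_items when frequency ~ 1/rank."""
    weights = [1.0 / rank for rank in range(1, n_items + 1)]
    total = sum(weights)
    return [w / total for w in weights]

shares = zipf_shares(10)
# Rank 1 gets exactly twice the share of rank 2 and three times that of rank 3.
print(f"rank 1: {shares[0]:.1%}, rank 2: {shares[1]:.1%}, rank 3: {shares[2]:.1%}")
```

Note that the real English figures quoted above (“the” at about 7%, “of” at about 3.5%) show the same 2:1 ratio between ranks 1 and 2 that the idealized model predicts.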

One thing to notice at this point is that the 80/20 split is relatively arbitrary. It might as well be 95/10 or 70/15. What is important is the observation that a disproportionately large effect comes from a small number of causes.

While Chris Anderson’s point was that the internet opened up business opportunities in the tail, that is, for products that sell relatively infrequently, the point for software development is the opposite: to do as little as possible in the tail.

Optimizing product development
We can recast the problem by applying Zipf’s law. Take your planned product and line up all the features you intend to build. If usage follows a Zipf distribution, the most frequently used feature will be used roughly twice as much as the second most used, and three times as much as the third.

In principle you could save a huge part of your development effort if you were able to find the 20% of features that would be used the most by your customers. How would you do that? One way is the lean startup way, which is now reaching the mainstream. The idea is that you build some minimal version of the intended feature set of your product, either by actually building a version of it, or by giving the semblance of it being there and monitoring whether that stimulates use by the intended users.

This is a solid and worthwhile first choice. There are, however, reasons why it is not always preferable. Even with a lean startup approach, you have to do some work to test all the proposed features, and that amount of work need not be small. Remember, the idea of a minimal viable product is just that it is minimal with regard to the hypotheses about its viability. That is not necessarily a small job.

The minimal viable product could be a huge effort in itself. Take, for example, the company Planet Labs: their MVP was a satellite! It is therefore worthwhile to consider, even before building your minimal viable product, what exactly is minimal.

Ideally you want a list of the most important features to put into your MVP, so that you do not waste any effort on features that are not necessary. Typically, a product manager, product owner or the CEO dictates what should go into the MVP. That is not necessarily the best way, since their views could be idiosyncratic.

A better way
A better way to do this is to collect input on possible features from all relevant stakeholders. This will constitute your backlog. Make sure each item is well enough described or illustrated to be used to elicit feedback. Here you have to consider your target group, the language they use and the mental models they operate with.

Once you have a gross list of proposed features, the next step is to find a suitable group of respondents to test whether these features really are good. This group should serve as a proxy for your users; if you are working with personas, find subjects that resemble your core personas. Then simply make a short description (or illustration) of each intended product feature, list the proposed features, and ask the subjects in a survey or similar: “If this feature were included in the product, how likely is it that you would use it, on a scale from 1 to 5?”

Once you have all the responses, calculate each feature’s score by adding up all the ratings it got. Then you can follow Zipf’s lead and rank the features from top to bottom. If you calculate the total of all scores, you can find the top 20% of features: simply start with the highest-scoring feature and continue until the cumulative score approaches 20% of the total. It is still a good idea to do a sanity check, so you don’t forget the login function or similar (you can trust algorithms too much).
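The ranking step just described can be sketched in a few lines of code. The feature names and scores below are invented for illustration:

```python
def top_features(scores: dict[str, int], share: float = 0.20) -> list[str]:
    """Rank features by total survey score, then keep the highest-scoring
    features until their cumulative score reaches `share` of the grand total."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    threshold = share * sum(scores.values())
    selected, cumulative = [], 0
    for name, score in ranked:
        if cumulative >= threshold:
            break
        selected.append(name)
        cumulative += score
    return selected

survey_scores = {"search": 120, "login": 90, "sync": 60, "export": 35, "themes": 20}
print(top_features(survey_scores))  # → ['search']
```

With Zipf-like scores, a single top feature can already carry 20% of the total score, which is exactly the point of the article. The manual sanity check (don’t forget login!) still applies on top of any such calculation.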

What to do
Now that you have saved 80% of your development time and cost, you can use the freed-up effort to increase the quality of the software. You could work on technical debt, making the product more robust, while you wait for results.

You could also use this insight in your product intelligence and look at the top 20% most frequently used features of your existing product. Once you have identified them, optimise them so they work even better; that is a shortcut to happier customers. You could optimise response times for these particular features so the most important ones work faster. You could improve their visibility in the user interface, so they are even easier to see and get to. Or you could use the insight in marketing, to help focus the positioning of your product and communicate what it does best.

To sum up, product utilization seems to follow a Zipf law. Knowing the top 20% features could help you focus development effort, but it could also help you focus marketing effort, user interface design and technical architecture.

 

References:

Richard Koch: “The 80/20 Principle: The Secret of Achieving More with Less”

Chris Anderson: “The Long Tail”

http://www.quora.com/What-is-the-deeper-physical-or-mathematical-logic-behind-the-pareto-principle-of-an-80-20-distribution

http://www.quora.com/Statistics-academic-discipline/What-is-an-intuitive-example-of-the-Pareto-Distribution

http://www.quora.com/Pareto-Principle/In-what-conditions-would-you-expect-a-power-law-distribution-curve-to-emerge

https://en.wikipedia.org/wiki/Feature_creep

https://en.wikipedia.org/wiki/Software_bloat

Photo by flickr user mahalie stackpole under CC license

 

 

Product Management Maturity And Tool Support

A recent report on product management tools by Sirius Decisions has revealed that 50% of Product Managers are looking for product management specific tools.

There are a number of dedicated product management tools, such as those surveyed by Sirius Decisions, yet when you ask product managers, only 13% seem to use such tools. What can be gleaned from another survey, by Product Hunt, is that no dedicated product management tool seems to be on product managers’ radar. At first I thought it was a mistake, so I contacted Product Hunt to verify. The method by which they arrived at their list was the following:

“We came up with the PM tools list by polling leading product managers in the industry and that’s what they selected.”

As the supplier of one such dedicated product management tool, we wanted to dig deeper into why there are such discrepancies in the market. Looking at the list by Product Hunt, the tools are all either generic or single-purpose; none are used to support a coherent product management process, at least not as described in the reference models of AIPMM, ISPMA or similar industry standards.


In the ERP space there are numerous tools that cover industry-standard processes like procure-to-pay or campaign management. So why don’t we see more tools that support a best-practice product management process, rather than only tools for very generic purposes (Trello, Evernote or Excel) or very specific purposes (like KISSmetrics, Streak or Do)?

Maturity

I believe the reason has to do with maturity. The maturity level of a company is a fairly good indication of what tools will work. If you want to implement SAP in a CMMi level 1 company, it is going to be a tough ride, since SAP is wonderful for repeatable processes, and at level 1 you don’t have those. Conversely, if you want to implement project management in a CMMi level 5 company with only Trello, it may also be a hard sell.

The CMMi model is loved, hated and misunderstood. Still, given the right understanding and application, I think it is a good framework for conceptualizing maturity. We have to remember that it is not about any particular process; it is a metamodel that stipulates something about the process that you should follow. It is therefore not a competitor to the ISPMA syllabus or the AIPMM ProdBOK; rather, these are particular ways of executing a product management process.

Product management is covered by the development version of the CMMi model, called CMMi-DEV, so it should be possible to single out process areas and look at what sort of tool support fits. In the following I will go through the five maturity levels of the CMMi model, describe key process areas and give recommendations for optimal tool support.

Level 1 – Initial (Chaotic)

It is characteristic of processes at this level that they are (typically) undocumented and in a state of dynamic change, tending to be driven in an ad hoc, uncontrolled and reactive manner by users or events. This provides a chaotic or unstable environment for the processes. As the CMMi-DEV says:

“Maturity level 1 organizations are characterized by a tendency to overcommit, abandon their processes in a time of crisis, and be unable to repeat their successes.”

There are no particular process areas pertaining to Level 1.

Tool use: Eclectic; usually the Microsoft Office suite (Excel, Word, PowerPoint).

Recommendation: Select one key part of the product development process to support with a tool (idea management, bug fixing, development, planning). Find one central place and tool for documentation. The tool should be tactical, lightweight and easily customizable.

Examples: Trello is lightweight and will fit almost any work process where work items (i.e. tasks, features, user stories) go through phases. Podio is another popular tool whose strength is its customizability; with plenty of apps available, one is guaranteed to come close to your needs, and you can then adapt it. UserVoice is good if you want to manage the ideation process. Zendesk is for support and will be great if your primary pain is addressing and fixing users’ problems.

Level 2 – Repeatable

It is characteristic of processes at this level that some processes are repeatable, possibly with consistent results. Process discipline is unlikely to be rigorous, but where it exists it may help to ensure that existing processes are maintained during times of stress.

Here is what the CMMi writes about Level 2:

“Also at maturity level 2, the status of the work products are visible to management at defined points (e.g., at major milestones, at the completion of major tasks). Commitments are established among relevant stakeholders and are revised as needed. Work products are appropriately controlled. The work products and services satisfy their specified process descriptions, standards, and procedures.”

Key Process Areas:

  • PP – Project Planning
  • PPQA – Process and Product Quality Assurance
  • REQM – Requirements Management

Tool Use: Usually one tool is used for part of the process, but often you will see differing tools across different departments in the organisation.

Recommendation: Converge on a common tool to use and focus on lowest common denominator across the people involved in the process. The most important here is that it should be possible to see the status of work products.

Examples: Jira is already used by millions and is very good for ensuring clarity about what is committed and the status of work products. Rally and VersionOne are similar and flexible. These tools are all good for the process areas mentioned above.

Level 3 – Defined

It is characteristic of processes at this level that there are sets of defined and documented standard processes established and subject to some degree of improvement over time. These standard processes are in place (i.e., they are the AS-IS processes) and used to establish consistency of process performance across the organization.

“A critical distinction between maturity levels 2 and 3 is the scope of standards, process descriptions, and procedures. At maturity level 2, the standards, process descriptions, and procedures can be quite different in each specific instance of the process (e.g., on a particular project). At maturity level 3, the standards, process descriptions, and procedures for a project are tailored from the organization’s set of standard processes to suit a particular project or organizational unit and therefore are more consistent except for the differences allowed by the tailoring guidelines.”

Key Process Areas:

  • DAR – Decision Analysis and Resolution
  • PI – Product Integration
  • RD – Requirements Development
  • RSKM – Risk Management

Tool use: Usually a suite is used for part of the process, and its use is consistent across departments.

Recommendation: Make sure the tool you select is a suite that is tightly integrated with upstream and downstream processes, because when you begin to reap the benefits of being at level 3, you will usually want to expand the reach of the process. This is easiest if you already have a suite.

Examples: Focal Point is often used for RD and RSKM and is very customizable. Sensor Six is aimed at DAR and therefore worth considering if you want to focus on that process area. HP Quality Center and Rational Suite are all-rounders with extensive functionality to support most processes.

Level 4 – Quantitatively Managed

It is characteristic of processes at this level that, using process metrics, management can effectively control the AS-IS process (e.g., for software development). In particular, management can identify ways to adjust and adapt the process to particular projects without measurable losses of quality or deviations from specifications. Process capability is established from this level.

“A critical distinction between maturity levels 3 and 4 is the predictability of process performance. At maturity level 4, the performance of projects and selected subprocesses is controlled using statistical and other quantitative techniques, and predictions are based, in part, on a statistical analysis of fine-grained process data.”

Key Process Areas:

  • OPP – Organizational Process Performance
  • QPM – Quantitative Project Management

Tool Use: Consistent and mandatory use of a suite for the entire process

Recommendation: Make sure the tool supplies full-fledged reporting facilities that are both out-of-the-box and customizable. Visualization is key to success here, because metrics that are not easily visualized are not going to help management.

Examples: The same products as level 3, but it will probably be necessary to boost reporting: QlikView, Mixpanel and Geckoboard are good for visualizing process trends, but if you need more sophisticated statistical analysis, SPSS, SAS or RapidMiner (to mention an open source alternative) are good options.

Level 5 – Optimizing

It is characteristic of processes at this level that the focus is on continually improving process performance through both incremental and innovative technological changes and improvements.

“ A critical distinction between maturity levels 4 and 5 is the focus on managing and improving organizational performance. At maturity level 4, the organization and projects focus on understanding and controlling performance at the subprocess level and using the results to manage projects. At maturity level 5, the organization is concerned with overall organizational performance using data collected from multiple projects.”

Key Process Areas:

  • CAR – Causal Analysis and Resolution
  • OPM – Organizational Performance Management

Tool use: The requirement for a tool at this level is that it is “intelligent” and supplies the process with transformative input not realized at any earlier level. It could be intelligent estimation or market analysis.

Recommendation: There are hardly any dedicated tools at this level yet, so either integrate general AI systems or look to dedicated niche players.

Examples: IBM’s Watson is an interesting new general-purpose AI that could probably be used here. Another example is Qmarkets, which supplies prediction markets for improving project delivery by using market dynamics: employees can “gamble” on which projects or products will succeed.

Conclusion

There are many options for tool use and many options for process improvement. The best approach is to be very selective and start from the process side. Tools without a process are like hammers without a nail: they can make a lot of noise. When you know which process areas to focus on, try to find a tool that suits both that area and the maturity level you are aiming for. The tools are all good, but they are built for particular purposes, so if you use one for something different, the results may fall short.

 

 

When Choice Is A Bad Thing – The Marginal Utility of Choice

Being able to choose between different options is a good thing for the user, right? But when you can choose between 65 different kinds of blue, 1,122 different fonts, and whether a display should only work on Sundays between 11 and 12 for a special group, giving MORE choices to users starts to be not so good or, to put it bluntly: bad!

Most of us are brought up in democratic states, where expecting to have a choice is as basic as eating or breathing. This is why choice, in all its guises, has a positive ring to it. But there are actually situations where limiting choice is the best strategy. It has worked for artists and musicians as a way to enhance creativity, and it also works for ordinary people. This is important to consider when you are designing and building new products.

 

The marginal utility of choice

In order to understand why and when more choice stops being a good thing, I will introduce the concept of the marginal utility of choice: “the marginal utility of choice is the perceived benefit that the option of one more choice will offer”.

Let us look at an example. If your product is a car rental service, then increasing the choice of car colors from 1 to 2 may be a significant increase in utility for the user: now you can suddenly choose whether you want the car in black or white. Adding blue will also offer great utility. Continuing like this will keep adding some utility to the car rental service as a whole. But when you already have, say, 35 different colors to choose from, how much will adding color number 36 improve the utility of the service as a whole?

This example shows that it is not a linear function: adding one more choice will not indefinitely result in the same increase in utility. In the following I will offer a possible explanation, based on human psychology, of when having more choice becomes a bad thing.

 

The optimal number of choices

In general the value of an extra choice increases sharply in the beginning and then quickly drops off. Being given the choice between apples, oranges, pears, carrots and bananas is great, but when you can also choose between 3 different sorts of each, the value of being offered yet another type of apple may even be negative. The reason for this phenomenon has to do with the limits of human consciousness.

1-7 

In order to make a conscious choice there is one fact we know with Cartesian certainty: you need to be conscious of it. From half a century of psychological research, starting with George Miller's seminal 1956 article “The Magical Number Seven, Plus or Minus Two”, we know that our consciousness has some severe constraints on how many things it can work with. It seems to be able to hold at most 4 to 7 items at the same time (depending on the type of test and the person's training). When the number of choices exceeds 4 to 7 items you can't hold them in your consciousness anymore, and the choices can't be evaluated against each other. Therefore the marginal utility of choice drops off quickly on the other side of 7.

I once got into a discussion with Chris Anderson (author of “The Long Tail”) about this observation. I argued that companies that offered only a very limited number of choices of products, and of functions within their products, would be more successful, contrary to Anderson's argument that the long tail and infinite choice was the way to go. We never really reached consensus, but consider the following:

How many different phones does the world's leading phone manufacturer, Apple, produce? Four: iPhone 6, iPhone 6 Plus, iPhone 5s and iPhone 5c.

How many different choices does the world's best restaurant offer its customers? Three: one menu, a wine menu and a juice menu.

How many car models does the world's most hyped car maker, Tesla Motors, offer its customers? Two: Model S and Model X.

All of these leave the consumer with a number of choices that is below the threshold of our consciousness.

Expressed more formally, we can stipulate that “the marginal utility of choice rises sharply between 1 and 7 choices and then decays”.

7-49 

Now the question is: what happens after that? When the choices can no longer all be held in consciousness, we try to group them into different segments. That can work up to a point, probably around 50 (7 chunks times 7 choices). In this interval a new choice can still be grouped with others, but it will take mental effort to understand it and compare it to all the other choices. This is why adding a new choice becomes a bad thing: it is cognitively costly.

In this range the marginal utility of choice is slightly negative, because the added mental burden of yet another choice detracts more than the choice itself adds.

More than 50

After about 50, everything will just be a blur, because the possibility of comparing a choice to all the others has broken down in our consciousness. Adding another choice will not make any difference; the marginal utility of choice is zero.

This means that the aggregate utility might even fall below zero at some point and then stabilize. In other words, if you keep adding choices there may come a time when the utility is lower than having no choice at all.

 

The marginal utility of choice curve

Please keep in mind that this is a hypothesis, that is based on theoretical observation and some, albeit anecdotal, empirical evidence. I think it aligns well with a lot of observations, but it should be tested more rigorously.

We can now draw a function for the marginal utility of choice; it looks something like the curve above.

We see that the total utility rises sharply in the beginning; we can call this “the climb to enlightenment”. This is the section where adding another choice gives the most marginal utility, until it approaches zero. The curve then evens out and decays in what we could call “the slope of attrition”, where adding another choice reduces the perceived benefit. Here the marginal utility of choice is below zero, which means that every new choice added decreases the aggregate utility of the service, because each new choice adds cognitive friction. Finally it stabilizes in what we can call “the plateau of indifference”. This could be above or below zero, because having a lot of choices could very well be more frustrating than having none at all.
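The hypothesized curve can be sketched as a simple piecewise function. This is only an illustration of the hypothesis, not a fitted model: the breakpoints at 7 and 50 and the exact values are assumptions taken from the discussion above.

```python
def marginal_utility(n):
    """Illustrative marginal utility of adding choice number n.

    Piecewise sketch of the hypothesized curve:
    - 1..7:   "the climb to enlightenment"  - sharply positive, decaying
    - 8..50:  "the slope of attrition"      - slightly negative (friction)
    - 50+:    "the plateau of indifference" - roughly zero
    """
    if n <= 7:
        return 10.0 / n   # sharply positive, decaying toward zero
    elif n <= 50:
        return -0.5       # each extra choice adds cognitive friction
    else:
        return 0.0        # choices blur together; no effect either way


def aggregate_utility(n_choices):
    """Total perceived utility of offering n_choices options."""
    return sum(marginal_utility(n) for n in range(1, n_choices + 1))
```

With these assumed numbers, a service with 7 options scores higher in aggregate than one with 30 or 50, and beyond 50 the total stops moving at all.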

 

Case study – SaaS pricing models

It would be interesting to see whether this shows up in real life. Let's examine the case of Software as a Service pricing. SaaS companies live from selling subscriptions, and usually the user can choose between several different plans. This is obviously a case where the number of choices is important. If it were true that there is an optimum number of choices between 1 and 7, we should be able to see this in how many tiers are actually offered.

In an excellent report by Price Intelligently, “The SaaS Pricing Page Blueprint”, the authors studied the pricing pages of 270 SaaS companies. If we look at how many plans the companies offer, it is overwhelmingly evident that most companies (88%) let their customers choose between fewer than 7 options. About half of all companies offer three or four choices (55%). So SaaS companies clearly favor offering fewer than 7 choices.

This seems to be an indication that there could be evidence for the stipulated rule that the marginal benefit is positive between 1 and 7 choices.

Limiting cases

Now, this curve holds under the assumption that the user is making the choice unaided by anything other than his or her own consciousness. It is important to note that this assumption doesn't hold in all cases. Today we can often use AI to help us cut down the number of choices. When we look at a book on Amazon, a number of books are presented below it. This is not a list of all the books Amazon has, but a subset based on what the algorithm thinks is relevant. The utility in this case depends on the precision of the algorithm, which is a completely different problem.

Another condition that should hold is that the choice is comparative, that is, a choice where the alternatives are compared rather than evaluated one by one. An example is finding a movie on Netflix or iTunes: you may go through a long list, but you are not usually comparing every single movie to every other. Either you will create a short list (which will probably not be much longer than 4 to 7 movies) or you will just choose movie by movie: “do I want to see this now?” (a binary choice).

 

So, if you have a situation where the user should be given a comparative choice and there is no way to support that choice with AI, then the marginal utility curve stipulates that about 4 to 7 choices is best from a general point of view.

 

 

A Practical Guide To Doing Cost Of Delay Based Prioritisation

It is often very difficult to prioritise what to build and when. One of the most efficient methods of prioritising features is prioritising according to cost of delay.

Originally invented by Don Reinertsen in “Managing the Design Factory” as a new way of looking at how to build stuff, it has inspired many agile teams to apply this thinking in their sprint planning efforts.

The problem is always how exactly to find out what the cost of delay is. It is notoriously difficult to put a price tag on a feature. This is probably what has discouraged most people from doing it. But the fact that you can’t put a precise price tag on the cost of delaying a feature shouldn’t keep you from applying this kind of thinking.

One very good solution to how you might do this is provided by Dean Leffingwell in his book: “Agile Software Requirements”. He argues that cost of delay can be broken down into three components: User value, Time value and Risk reduction.

How To Break Down Cost of Delay

User value is the potential value of the feature in the eyes of the user. Product managers or product owners usually have a good feeling for this, but other parts of the company, like consultants or salespeople who spend a lot of time with customers, will have a pretty good understanding as well. One should not forget that asking the users themselves is the most obvious solution. The reason most people don't do this is probably that it is relatively cumbersome at scale. At Sensor Six, however, we have seen customers use our product with great success to engage directly with their customers to get input on user value. Often a company will have a customer panel or a mailing list where it is natural to ask. It doesn't have to be a precise value; something as simple as a 10-point scale can easily be used. A more sophisticated approach we have seen among our customers is forced ranking, which is an excellent way of measuring if you don't have too many features to rank.

Time value is based on how user value decays over time. Many features are more valuable if they are delivered to the marketplace early, so they can provide differentiation. This depends very much on an analysis of competitors' current state and what they are assumed to be working on. This is why it is usually business analysts or product managers who will be able to rate this. Again, a simple 10-point scale can be used to measure it.

Risk reduction/opportunity enablement describes the degree to which a feature helps us mitigate risks or exploit new opportunities. We live in a world with many unknowns, and therefore it is important to guard yourself against the unknowns that are threatening (risks), but also to remain open to the ones that could help you (opportunities). This evaluation will always be very subjective and dependent on the person doing the rating. Since people have very different perceptions of risk, it is a good idea to invite several people to ascertain risks.

I would say that these are really good suggestions, but there could be other approaches as well. Leffingwell argues for rating these on a scale from 1 to 10, but that can also depend on the context. If you have very few features and want a very precise measurement, you should use a relative method such as ranking or pairwise comparisons. In the end, though, it is more important to have some sort of indication, so an easy method like a 10-point scale is a good place to start.
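For the relative methods mentioned above, pairwise comparison reduces to a simple counting scheme: show the rater every pair of features, award one point per comparison won, and rank by total wins. A minimal sketch; the feature names and the preference function are made up for illustration:

```python
from itertools import combinations
from collections import defaultdict

def rank_by_pairwise(items, prefer):
    """Rank items by pairwise comparisons.

    `prefer(a, b)` returns whichever item the rater prefers; each win
    scores one point, and items are ranked by total wins.
    """
    wins = defaultdict(int)
    for a, b in combinations(items, 2):
        wins[prefer(a, b)] += 1
    return sorted(items, key=lambda item: wins[item], reverse=True)

# Hypothetical example: a rater who always prefers the shorter name.
features = ["single sign-on", "export", "audit log"]
print(rank_by_pairwise(features, lambda a, b: a if len(a) < len(b) else b))
# -> ['export', 'audit log', 'single sign-on']
```

In practice `prefer` would be a human judgment rather than a function, which is why the method gets cumbersome as the number of pairs grows.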

How to Use the Cost of Delay Calculation For Prioritisation

The cost of delay of a feature is the same as the value it has if it is not delayed, i.e. if it is built. Now that you have this value, the next thing you would want to do is hold it up against the effort it would take to produce the feature. Effort can be estimated in the same way with a 10-point scale; if you work in an agile context you may as well use story points if possible. This is where you get your engineers to look at the features and give some sort of estimate.

Once you have some data on the effort, there are several ways you could attack the problem. According to Reinertsen in “The Principles of Product Development Flow”, there are three ways:

Shortest job first is the method to use if you want to look at minimising effort. You simply start with the features that are smallest and work your way through them regardless of the cost of delay. If features all have the same cost of delay, this is the best way to do it.

High delay cost first is where you simply start with the features that have the highest cost of delay, regardless of the effort. This is efficient in producing a high economic output. If all features require the same effort, this is the best way to do it.

Weighted shortest job first is where you weigh the value against the effort. If features have different effort and value, which is most often the case, this is the most efficient method to use.
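Weighted shortest job first reduces to a simple calculation: score the three cost-of-delay components, sum them, divide by effort, and schedule the highest ratio first. A minimal sketch, with hypothetical features and made-up scores:

```python
# Hypothetical features scored on Leffingwell's three cost-of-delay
# components (10-point scales) plus an effort estimate (story points).
features = [
    # (name,          user value, time value, risk reduction, effort)
    ("Export to PDF", 8, 6, 2, 5),
    ("SSO login",     5, 3, 9, 8),
    ("Dark mode",     4, 2, 1, 2),
]

def cost_of_delay(user_value, time_value, risk_reduction):
    """Cost of delay approximated as the sum of the three rated components."""
    return user_value + time_value + risk_reduction

def wsjf(feature):
    """Weighted shortest job first: cost of delay divided by effort."""
    name, uv, tv, rr, effort = feature
    return cost_of_delay(uv, tv, rr) / effort

# Highest WSJF ratio first gives the most economically efficient order.
prioritised = sorted(features, key=wsjf, reverse=True)
print([name for name, *_ in prioritised])
# -> ['Dark mode', 'Export to PDF', 'SSO login']
```

Note how the small "Dark mode" feature jumps ahead of "SSO login" despite a much lower cost of delay: dividing by effort is exactly what distinguishes this method from high delay cost first.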

How To Do It In Practice

Strangely, given the popularity of this thinking, no tools seem to support this particular way of prioritising, so it is always something that needs to be done in spreadsheets. To our knowledge Sensor Six is the only product management tool that does all of the above out of the box and even makes it possible to engage different stakeholders directly. In the following I will show how you can do exactly what Leffingwell recommends in Sensor Six.

You can see how the above would be set up in Sensor Six by going to our website and logging in with the credentials below or you can follow the step by step guide to prioritise your own features.

https://sensorsix.com/login
 CODdemo
7uJI%1JK

Doing Cost of Delay Prioritisation In Sensor Six

First you set up the different criteria: User Value, Time and Risk Reduction. You configure them as benefits and a 10 point scale. Supply them with a description.

Now you can rate them solo and evaluate everything by yourself. If this is fine and sufficient for your needs you may skip the next section.

If you want the product manager, sales rep or any other stakeholder group fit to act as a proxy for the user to evaluate user value, simply go to the collaborate section. Here you can configure a workspace that will allow them to give input on the user value. Simply copy the link to the workspace and send it to a list of people who you think could give you this input. Have the competitive intelligence function, whether it is located in marketing or some other division, rate the time value to get input on the time cost of delay per feature. Let the business analysts give input on the risk reduction/opportunity dimension. Finally, you need to know something about the effort; this is what you invite your engineering department to work on.

The actual persons or roles can be cut differently, so maybe you will ask the same persons to rate the time and risk reduction domains (typically business analysts), engineering will estimate the effort while sales or support may evaluate user value.

When you have input on the cost of delay, you can plot the total cost of delay on the y axis and the effort on the x axis. If your features have varying effort and cost of delay, you should then choose those with the best cost of delay/effort trade-offs first in order to arrive at the most efficient development plan.

So this simple process can save you hundreds of hours and deliver more value to the market place quicker.

 

 

 

 

From the Super Bowl to Super Products

Later this evening Super Bowl XLIX will be played. One of the teams playing has reached it more frequently than any other team in recent decades. The Patriots are a remarkable team that we could learn a lot from about product development and winning against the competition in a highly competitive market (disclaimer: I always hope the Patriots lose, so this is not a fan post).

Rarely has a team performed at such a high level for so long. That fact alone proves that it cannot just be explained as luck or random variation. There must be something fundamentally right about what they do, and I think it is to be found in their culture. I think that culture could be transplanted into product development as well. After all, some companies, like the Patriots, also consistently produce amazing products.

The secret sauce

I read too many sports books, I know, but I excuse myself with the slim possibility that what I read could potentially be transferable to real life. Some years ago I read “Patriot Reign” by Michael Holley, who was the first to gain continuous access to Bill Belichick and the Patriots over a period of two years. And Belichick is the key to understanding the Patriots' success and their secret sauce.

When Bill Belichick came to the Patriots they were nothing special. They had a losing record and no one expected anything from them. One of his first controversial moves was benching all-star quarterback Drew Bledsoe and replacing him with a 6th-round draft pick rookie quarterback: Tom Brady. This move, in my opinion, describes what is truly unique about the Patriots.

What this move shows is that the organization is a rigid meritocracy. It doesn't matter what you did last year or even five minutes ago; it is what you are doing now that counts. It also means that you don't hire the rock stars, and if you do, you must be ready to fire them the minute they don't perform. A meritocracy is built around rewarding performance continuously, as opposed to rewarding friendship with management, good looks, past achievements, loyalty or even a good sense of humour (although these things can be very good).

In a meritocracy you trust potential when it is demonstrated and give inexperienced employees a chance if they have proven worthy. This means that you also have to have a good idea of what merit is, that is, what counts as good and what counts as bad. If you are developing new products, a 40-yard dash time may not be the relevant measure the way it is for a football team.

 

Establishing a framework for merit

What you need to do is to establish a framework for merit. This framework doesn’t have to be very formal. You actually just have to have an idea of what good performance is and then measure employees up against this.

These performance goals should, however, be tightly aligned with the strategy of the company and the industry it is in. The Patriots are not necessarily looking at the same KPIs as everyone else; they have found their own KPIs based on their analysis of the game. This is interestingly visible in the recent “Deflategate”: it was discovered that the balls the Patriots played with were not properly inflated. Puzzling as that information is, it is tied very closely to strategy and performance goals. A recent article by Warren Sharp uncovers the reason.

The article starts from a peculiar fact: the Patriots had been freakishly outperforming all other teams in number of plays per fumble since 2006, when a new rule allowed teams to bring their own balls. Why are fumbles important? It turns out that you significantly increase your chance of winning a game if you have fewer fumbles than your opponent. The more plays you have per fumble, the better.

Part of the Patriots strategy was to improve plays per fumble, since this would increase their chance of winning. One way of doing this is to deflate the balls, but I am sure that fumble ratio is a very important parameter in player evaluations as well.

 

From Super Bowls to Super products

This is an example of how an analysis of the game (or the market, if you are a tech company) can help you spot a weakness to exploit. The challenge is then to find a KPI, establish a framework of merit for evaluating performance, and implement a rigid meritocracy around these insights. Doing that will help you consistently outperform the competition, because your organization will consistently produce super products.

Product Idea Triage

When I was training to become a fire fighter in my younger days, we also had training in wartime disaster relief. This is where you learn how to set up emergency hospitals in tents and rescue injured people from collapsed and collapsing buildings while everything is on fire around you in the middle of a firestorm. Good stuff to know.
But one of the more relevant and well-known techniques we learned, for once you actually locate and rescue someone, is triage. In general, triage is a way of sorting people into groups of priority, that is, by who needs the most immediate medical attention. There are several different schemes practiced around the world, but we learned to use three groups:
A) Those that were not critically injured and who needed medical attention at some point, but could wait
B) Those that were seriously injured, but were able to survive if they were given immediate medical attention
C) Those that were mortally wounded and who, given the circumstances, would not be able to be saved
This is a tough decision to make, but we were training for wartime or serious catastrophes. (I have actually been through triage myself at an American hospital. That was a very pleasant experience, because I was apparently put into the B category and skipped the long line in the emergency room. My serious, though not yet mortal, wound was being bitten on the nose by a dog. Yes, I know, I should have known better since I have also trained sheep dogs, but I feel we are drifting a bit off topic here…)

Idea triage
I often encounter the Gordian knot of prioritisation in software companies: it is impossible to find out which ideas to build first. That leads to prioritisation paralysis. There are too many ideas to work on, and you don't know for sure which ones will be of most value. You need to specify an idea in more detail before you can ask your engineers to build it. It may be necessary to do a mock-up, describe some business rules or do graphical design. You can immediately see that some ideas are well shaped and don't need much work, while others are just too complicated. Some ideas are urgent and need to be built immediately (they could be bugs), while others can easily wait. How do you get started? You probably don't just start from idea number one.
In this case you can use the same triage technique as I mentioned above. You just have to tweak the meaning of the triage categories a bit:

A) Ideas that have some potential but are in no particular rush to be made. They may need a bit of polishing and precision, but there is no need for immediate action. Ideas in category “A” can just be queued for when you have a moment to work on them. They could be customer requests, infrastructure features, annoying but non-critical bugs etc. These should be treated as buffer tasks that can be picked up by engineers when they are idle.

B) Ideas that have some potential and need to be made immediately. Ideas in category “B” are critical bugs, responses to competitors, or anything else with a tight time-to-market constraint. They should be put into the most immediate sprints or releases. If needed, other ideas should be taken out of the work in progress and parked.

C) Ideas that have very limited potential of immediately becoming fruitful. Category “C” ideas are typically too complex to build quickly: it will take a long time to conceive, design and build them. However, there may lie a huge potential here. The trick is therefore to selectively choose some that have a huge potential benefit. They could be big bets or the like. Find a percentage of work effort that can be dedicated to following these unlikely successes, so you don't miss out on great opportunities.


The process
Ideally every idea should be scored according to the strategic parameters of the company, but I realise that this is often not feasible, either because it is too cumbersome or because there are simply too many ideas. This is why we suggest this faster but cruder feature triage. The product manager can either do it alone or together with the engineering team. It is a good idea to have both market knowledge and technical knowledge present, since you will have to gauge both the feasibility and the market urgency of each idea.
One way is to get three signs (for example blue, white and cyan). For each idea, each participant holds up the triage sign he or she thinks it belongs to. If all signs are the same color, the feature is simply given that triage category. In case of disagreement, the reasons for the disagreement should be discussed. If after 5 minutes the disagreement has not been settled, you go to a vote. It is important to have an odd number of participants; if this is not possible and the votes are equally distributed, the triage is decided with a coin flip.
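The decision rule above can be sketched in a few lines. This is an illustration, not a prescribed tool: the unanimity, majority and coin-flip steps follow the process described, and the random tie-break stands in for the coin flip.

```python
import random
from collections import Counter

def triage_vote(votes, rng=random):
    """Decide a triage category ('A', 'B' or 'C') from participants' signs.

    - unanimous: that category wins outright
    - otherwise: majority vote (the discussion step is assumed to
      have already happened and failed to settle it)
    - exact tie: decided by "coin flip" (random choice among the tied)
    """
    counts = Counter(votes)
    if len(counts) == 1:            # all signs show the same color
        return votes[0]
    top = max(counts.values())
    tied = [cat for cat, n in counts.items() if n == top]
    if len(tied) == 1:              # clear majority after discussion
        return tied[0]
    return rng.choice(tied)         # tie: coin flip

# Example: three participants, majority decides.
print(triage_vote(["B", "B", "C"]))  # prints "B"
```

With an odd number of participants the coin-flip branch is never reached for a two-way split, which is exactly why the process recommends it.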
This way you will quickly have escaped prioritisation paralysis and be ready to do some actual and sensible work.

What is a successful product?

Every company wants to be a success. One key ingredient in that is successful products. A successful product will look different to different companies, but usually it can be tracked by KPIs. For any business it is very important to find the right KPI and be wary of so called vanity metrics.

What’s your “On Base Percentage”
In Michael Lewis' “Moneyball”, the Oakland A's changed their scouting and talent acquisition philosophy to focus on only one metric. In the world of baseball, with its hundreds of possible metrics, that sounds like insanity. Nevertheless, the philosophy started from the premise that getting on base is what wins games. Not the number of home runs, how handsome the batter looks or stolen bases. It all comes down to one thing: “on base percentage”, how often the player gets on base, never mind how he does it or how he looks. This insight allowed the Oakland A's to perform above average given their limited budget and to exploit some loopholes in the market.

This is a powerful reminder that focusing on one thing is very powerful. It eliminates discussions about which KPI is more important when one goes up and another goes down. It also ensures that everyone is on the same page and no one is in doubt about what success looks like. So the first step towards product success is to find your “on base percentage”.

Different types of KPI
There are two main types of KPIs: objective and subjective. They have different properties and can be used in different circumstances.

Objective KPIs are the best, because they pick out a measure that can be verified regardless of human perception and interpretation. They cover things that can be measured directly, like frequency, volume, amount and duration. Most classical web analytics and economic parameters fall into this class. A good example of an objective KPI is downloads per day for an app; there is no way you can argue with that. The same goes for units sold per day.

Subjective KPIs measure things that depend on subjective human assessment, like interpretation and feeling. That does not mean they are less quantifiable. When you measure satisfaction, there are many good frameworks for doing so: Facebook likes, customer satisfaction ratings, retweets, Net Promoter Score, shares etc. These are all quantifiable measures of a subjective quality, whether that is satisfaction, interest, pride or something else.

The business model determines the KPI
The central KPI will always be monetary, like revenue, margin or equity, and the success of a product will in most cases be measured directly by the amount of value it generates. The typical product will be measured by the revenue it generates. It could be margins as well, but that is a bit more complicated to use, since the company's cost structure doesn't necessarily have any connection with the success of the product. Equity in that sense is even more complicated to track (but more about that below).

Some people may object to money being the central purpose of a product, especially if they are of an idealist, anti-capitalist inclination. But since money pays for salaries and electricity, the success of the product must somehow be traceable to it. Even a non-profit charity organisation needs to pay rent, electricity and taxes. Some products are offered for free and are viewed as successes even though they generate little or no revenue, such as Twitter, Facebook and Snapchat. But the reason they are valuable in the eyes of investors is the promise of revenue, a kind of deferred revenue. Typically the KPI is number of users, but this figure is usually recalculated into revenue by a value you think you can generate from them (e.g. $10 per user). In that sense you are back at money as the measure of product success.

Revenue
The most straightforward measure of product success is the revenue it generates, because it is the cleanest and most intuitive measure. Nobody can argue with that figure: money in the bank is money in the bank. Sometimes that is more important than anything else, since you can't argue with liquidity. Examples are SaaS products, where it is common to track monthly recurring revenue per product; that is an accepted standard. More traditional software companies may track revenue from licenses sold. Transaction-based companies, like credit card companies or other types of infrastructure companies that charge by transaction, may track the number of transactions.

It may however happen that revenue is not directly attributable to the product itself. That is the case for many free products (not freemium products). A good example is open source software. Take the database Cassandra: it is free and can be downloaded by anyone. It is built and maintained by Datastax, which recently received $45 million in funding ($83.7 million in total). You could track Cassandra's success by its installed base. Then again, most people need support, since it is after all a complicated product, and guess who is selling SLAs, support and implementation consultancy for Cassandra? Datastax. So again the KPI is a proxy for revenue. (More interestingly, this creates a drive towards making the product more dependent on consultancy, so what looks like a successful product to the company is not necessarily the same to the customer in this case.)

Revenue is a good all-round indicator if the business model dictates that you earn money on all the units shifted.

Margins
Margins are a more precise way of tracking product success, because they directly support business success. Revenue may not, since you could be selling products for less than the production cost; more sales would then not lead to success for the company. Margins almost always will.

It is a good idea to track margins when cost per unit is easy to calculate. That is usually the case for hardware and physical products, where cost per unit is often already calculated. So in the manufacturing industry it would be a good measure. The same goes for retail, where the cost per unit is the purchase price plus transport and storage. Sometimes the margin is just calculated as the sales price minus the purchase price. That is a good rough measure of success.
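The rough retail calculation described above fits in a couple of lines. The numbers in the example are hypothetical, chosen only to show the arithmetic:

```python
def unit_margin(sales_price, purchase_price, transport=0.0, storage=0.0):
    """Rough retail margin per unit: sales price minus cost per unit,
    where cost per unit is purchase price plus transport and storage."""
    cost_per_unit = purchase_price + transport + storage
    return sales_price - cost_per_unit

def margin_percentage(sales_price, purchase_price, transport=0.0, storage=0.0):
    """Margin expressed as a share of the sales price."""
    return unit_margin(sales_price, purchase_price, transport, storage) / sales_price

# Hypothetical example: an item bought for 60, shipped for 5, stored for 3
# and sold for 100 yields a margin of 32, i.e. 32% of the sales price.
print(unit_margin(100, 60, transport=5, storage=3))        # prints 32
print(margin_percentage(100, 60, transport=5, storage=3))  # prints 0.32
```

Dropping the transport and storage terms gives the even cruder "purchase price vs. sales price" version mentioned above.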

In other industries, like the service industries, it is a bit more difficult. Take a hotel: the cost per unit is somewhat more difficult to calculate, since heating, cleaning and electricity are not typically calculated per room. The same goes for professional services, where one hour sold carries the cost of the consultant for that hour, but also a share of all the other hours he or she didn't bill to any customer, and that amount varies a great deal.

Margins are a good KPI for industries where the cost per unit is easy to calculate and intuitive to understand.

Equity
In some cases revenue and margins may fall short as measures of success. Measuring product success in terms of equity is more prevalent in isolated fields. In accounting terms, equity is assets minus liabilities. For a product, that would mean the market value minus how much you owe (for buying, producing, building etc.). That doesn't make much sense for serially produced products like cell phones.
For real estate it makes a lot of sense. If the product is a house or an apartment, the ability of investments in that apartment to raise the market value is a very good measure of success. Something similar goes for incubators, where the start-up itself is the product. Here you may want to track the investment against the market value in order to see whether the product is successful. I would guess that Y Combinator, Rocket Internet or 500 Startups track the equity of each individual start-up rather than its revenue or margins.
One problem with equity, though, is that market value may be very hard to assess before you sell the product. That makes it hard to use as a proactive KPI. But usually there are other proxy indicators. In consumer technologies, the standard ones are number of users, downloads, visitors etc., because they can be used to calculate predicted revenue and therefore market value.
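For the real-estate case, the equity KPI boils down to simple arithmetic. A minimal sketch, with invented figures, and the market value taken from an appraisal rather than an actual sale:

```python
# Equity KPI for a renovated apartment (hypothetical numbers).
purchase_price = 200_000
renovation = 50_000
market_value = 310_000  # appraised value, not realised until a sale

invested = purchase_price + renovation
equity = market_value - invested      # value created by the investment
roi = equity / invested * 100         # return on the money put in

print(f"Equity gained: {equity}")   # 60000
print(f"Return:        {roi:.1f}%") # 24.0%
```

The same calculation works for an incubator tracking a start-up: replace the purchase and renovation costs with the investment, and the appraisal with the latest valuation.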

Equity is a good measure if the product is unique.

Non-monetary measures
There are still instances where you would measure product success by some other, non-monetary KPI. One example is satisfaction. Many products from government agencies are offered as a service to citizens. They are still products, but they do not generate any revenue, margins or equity; indeed, they were never meant to. Instead they deliver a service, the success of which can be measured by satisfaction in one sense or another.
Satisfaction is somewhat more difficult to measure, because it relies on subjective input. A rating scale is a typical solution, but it could also be sentiment analysis of social media, or clicks on icons such as likes and smileys. The number of issues or complaints registered for a product is another way of tracking satisfaction. Sharing on social media is usually a good indication too, but you can't tell whether something is shared because of a positive or a negative experience. Some public sector products, however, are not meant to have high user satisfaction ratings: jails will not be measured by user satisfaction, since the users are not meant to derive great satisfaction from them.
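Turning rating-scale responses into a trackable KPI is straightforward. A minimal sketch, with invented responses on a 1-5 scale:

```python
# Satisfaction KPI from rating-scale responses (invented data).
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]  # one rating per respondent, 1-5

average = sum(responses) / len(responses)
# Share of "satisfied" respondents (4 or 5), a common way to report the KPI.
satisfied_share = sum(1 for r in responses if r >= 4) / len(responses) * 100

print(f"Average rating:  {average:.1f}")      # 3.9
print(f"Satisfied users: {satisfied_share:.0f}%")  # 70%
```

Both numbers can be tracked continuously as new responses arrive, which makes satisfaction usable as a steering KPI despite its subjective basis.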

Foundations are atypical, but they will usually have a KPI that can be derived from their particular purpose. If that purpose is investing in alternative energies, or saving the world like the Bill and Melinda Gates Foundation, a KPI can somehow be found for it. But then again, foundations typically don't have that many products.

Non-monetary measures are good when the business does not generate its operating budget from the product.

Recommendations
Product success is key to the success of the entire enterprise. You should carefully study the business model of the enterprise and pick just one KPI as the measure of product success. The KPI should be quantifiable and possible to track continuously; otherwise you don't have anything to steer by.

You could, and should, still track other KPIs if they in some way influence the central KPI. That way you can learn the best ways to optimise your product and get early warnings of problems.