A Citywide Mesh Network – Science Fiction or Future Fact?

I recently finished Neal Stephenson’s excellent “Seveneves”. The plot is that the moon blows up due to an unknown force. Initially people marvel at the now fragmented moon, but thanks to the intelligent analysis of one of the protagonists it becomes clear that these fragments will keep fragmenting and eventually rain down on Earth. The lunar debris turns into comets that start making the Earth a less than pleasant and very hot place to live. In order to survive, the human race decides to build a space station composed of a number of individual pods (designed by the architects!). This design is chosen so that the station can evade incoming debris the way a shoal of fish evades a shark.

Naturally there is no Internet in space, but the drive towards having a social network (called Spacebook) forces the always inventive human race to find another way to implement one. The resulting solution is a mesh network.

A mesh network:

“is a local network topology in which the infrastructure nodes (i.e. bridges, switches and other infrastructure devices) connect directly, dynamically and non-hierarchically to as many other nodes as possible and cooperate with one another to efficiently route data from/to clients”

The good thing about mesh networks is that every node can serve as a router, so even if one or a few nodes fail (as they might in an orbit filled with lunar debris) the network still works. Contrast this with a network topology where one or a few pods hold central routers, or with our present day Internet, where name resolution depends on the hierarchical Domain Name System and its handful of root servers. If those central points were all taken out, the network would stop working for most practical purposes. With a wireless mesh network, the network continues to work as long as there are nodes that can reach each other. But enough of the science fiction, let’s get back to the real world.

The Citywide Mesh Network

New York City, where I work, has had its own share of calamities. Not quite on the scale of the moon blowing up, but September 11, 2001, was still a significant disaster. One effect was that the cell network broke down due to overload, which greatly reduced first responders’ ability to communicate. To keep that from happening again, NYC built its own wireless network, NYCWiN. For years this network has served the City well, but the cost of maintaining a dedicated citywide wireless network is high compared to the price and quality of modern commercial cell networks.

However, the cellular network is also patchy in some parts of the city, as most New Yorkers have noticed. It is also expensive if we want to supply each IoT device in the City with its own cellular subscription. Typically a cellular connection will have a lot more bandwidth than most devices will ever use anyway. So, might it be possible to rethink the whole network structure and gain some additional benefits in the process? What if we created a citywide mesh network instead? It could function in the following way:

A number of routers would be set up around the city. Each would be close enough to reach at least one other router. When one router fails there are others nearby to take over the network traffic. These routers would form the fabric of the citywide mesh network.

Some of these primary routers would be connected to Internet routers, either through cables or cellular connections, and would serve as gateways to the Internet. In this way the mesh would effectively be connected to the Internet and we would have a mesh Internet. This is actually not new; it already exists! It has been implemented by a private group called NYC Mesh, which has set up its own routers for the purpose. But wouldn’t it be cool if the City scaled a similar solution for use by all New Yorkers and visitors, free of charge, like the LinkNYC stands? And couldn’t the LinkNYC stands themselves serve as the Internet gateways we imagined above? Think about it: what if wifi was simply pervasive in the air of the City, for everyone to tap into?
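To make the resilience argument concrete, here is a minimal sketch (in Python, with entirely made-up router names and links) that models the mesh as a graph, marks a couple of routers as Internet gateways, and uses a breadth-first search to see which routers still have an Internet path after a node fails:

```python
from collections import deque

# Hypothetical city mesh: each router lists the neighbouring routers it can reach wirelessly.
mesh = {
    "A": {"B", "C"}, "B": {"A", "C", "D"}, "C": {"A", "B", "E"},
    "D": {"B", "E"},  "E": {"C", "D", "F"}, "F": {"E"},
}
gateways = {"A", "F"}  # routers with a wired or cellular uplink to the Internet

def routers_with_internet(mesh, gateways, failed=frozenset()):
    """Return the routers that can still reach at least one gateway after 'failed' nodes go down."""
    reachable = set()
    queue = deque(g for g in gateways if g not in failed)
    reachable.update(queue)
    while queue:
        node = queue.popleft()
        for neighbour in mesh[node]:
            if neighbour not in failed and neighbour not in reachable:
                reachable.add(neighbour)
                queue.append(neighbour)
    return reachable

print(routers_with_internet(mesh, gateways))                # all six routers are online
print(routers_with_internet(mesh, gateways, failed={"E"}))  # losing E only takes E itself offline
```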

Better than LTE

The beauty of this is that the mesh may even be better than the cellular network, since it can be extended more easily to parts of the city that have patchy coverage from cell towers. We would just have to set up routers in those areas and make sure there was a line of connection to nodes in the existing network or to an Internet gateway. It would even be possible to extend the network indoors, even into the subway.

With thousands of IoT devices coming online in the coming years, costs for Smart City solutions will increase significantly. Today it is not cheap to connect a device to the Internet through a cellular carrier: since it is essentially a cell phone connection, it typically costs about the same. That may make economic sense compared to the alternatives for the number of devices connected today, but scaled towards millions of devices the approach is untenable in the long run. The citywide mesh network could be a scalable, low cost alternative for all of the City’s IoT devices to connect to the Internet.

Building and maintaining the network

It is quite an effort to implement and maintain such a network, but there is a way around that too. Today commercial carriers can put up cellular antennas on City property if permission is granted. What if we made every permission contingent on setting up a number of mesh routers for the citywide mesh network? Then every time a cellular or other antenna was set up, the citywide mesh network would be strengthened.

It could simply be made an obligation for carriers that are granted use of City property for commercial purposes to maintain their part of a free citywide mesh network. The good thing about a mesh network is that there is no central control; making it operational would just entail following some standards and adding and replacing network nodes. The City would have to decide on the standards to put in place: what equipment, what protocols and so on. Not an easy task perhaps, but not impossible either.

In order to maintain the health and operation of the network, monitoring would have to be in place. We could see in real time which nodes were failing and replace them. It would also be possible to provision nodes elastically when traffic patterns and utilization make it necessary.
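As a minimal illustration of what that monitoring could look like (the node names, heartbeat data and five-minute threshold are all invented for the example), a health check can be as simple as flagging routers whose last heartbeat is too old:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical last-heartbeat timestamps reported by mesh routers.
last_seen = {
    "bronx-0417": datetime.now(timezone.utc) - timedelta(seconds=20),
    "queens-1102": datetime.now(timezone.utc) - timedelta(minutes=12),
    "harlem-0033": datetime.now(timezone.utc) - timedelta(seconds=45),
}

def failing_nodes(last_seen, max_silence=timedelta(minutes=5)):
    """Return routers that have not sent a heartbeat within the allowed window."""
    now = datetime.now(timezone.utc)
    return sorted(node for node, seen in last_seen.items() if now - seen > max_silence)

print(failing_nodes(last_seen))  # ['queens-1102'] -> dispatch a crew or provision a replacement
```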

World Wide Standard

Now here is where it could get interesting, because the issue today in mesh networking, as in most other IoT, is that there are no common standards. Vendors have their own proprietary standards and no interest in making them compatible. History has shown us that the only way to impose standards on an industry is through governmental mandate. New York could of course not mandate a standard, but what if the City required all vendors who wanted to sell to the New York citywide mesh network to comply with a given standard? The industry would have to develop its products to this common standard. Since New York is big enough to create a critical mass, this could be the start of a new mesh network standard.

New York works together with a lot of other cities that often take inspiration from us in matters of technology. An example is open data, which New York helped pioneer and which has now spread to virtually every city of notable size. The same could happen for the citywide mesh network design and the standards used. That way, cities would have a blueprint for bringing pervasive, low cost wifi to all citizens and visitors.

Fiction or Fact?

If a catastrophe similar to 9/11 were ever to happen again, the mesh network would adapt and, through its healthy nodes, still be able to route data around, possibly more slowly, but it would not fail. Only the particular nodes that were hit would be out; the integrity of the network would be intact. It is, of course, possible that islands without connectivity would appear, but that is to be expected. As long as the integrity of the rest of the network is unaffected, that is acceptable.

It is actually possible to create a robust, low cost, citywide network that would be built and maintained by third parties, with better coverage than the cell networks, all the while helping the world by pushing the industry towards standards that would improve interoperability for IoT devices. This is not necessarily science fiction: everything here is within the realm of possibility.

The Data Deluge, Birds and the Beginning of Memory

One of my heroes is the avant garde artist Laurie Anderson. She is probably best known for the unlikely eighties hit “O Superman” and for being married to Lou Reed, but I think she is an artist of comparable or even greater magnitude. On one of her later albums there is a typical Laurie Anderson song called “The Beginning of Memory”. Being a data guy, this naturally piqued my interest; it was sort of a win-win scenario. The song is an account of a myth from an ancient Greek play by Aristophanes, “The Birds”. Here are the lyrics to the song:

There’s a story in an ancient play about birds called The Birds
And it’s a short story from before the world began
From a time when there was no earth, no land
Only air and birds everywhere

But the thing was there was no place to land
Because there was no land
So they just circled around and around
Because this was before the world began

And the sound was deafening. Songbirds were everywhere
Billions and billions and billions of birds

And one of these birds was a lark and one day her father died
And this was a really big problem because what should they do with the body?
There was no place to put the body because there was no earth

And finally the lark had a solution
She decided to bury her father in the back of her own head
And this was the beginning of memory
Because before this no one could remember a thing
They were just constantly flying in circles
Constantly flying in huge circles

While very few people believe myths to be literal truth, they usually point to some more abstract and deeper truth. It is rarely clear exactly what that is and how it applies. But I think I see a deeper point here that may actually teach us something valuable. Bear with me for a second.

The Data Deluge and The Beginning of Memory

The feeling I got from the song was eerily similar to the feeling I get from working with the Internet of Things. Our phones constantly track our movements; our cars record data on engine and performance. Sensors that monitor us every minute of our lives are silently invading our world. When we walk through the streets of Manhattan we are monitored by the NYPD’s network of surveillance cameras, Alexa is listening in on our conversations and Nest thermostats sense when we are home.

This is what is frequently referred to as the Internet of Things. The analogy to the story about the birds is that until now we have just been flying about in circles, with no real sense of direction or persistence to our movement. What is often overlooked is that the fact that we can now measure the movement and status of things only amplifies the cacophony, the deafening sound of billions and billions of birds, sorry, devices.

This is where the birth of memory comes in. Because not until the beginning of memory do we gain firm ground under our feet. It is only with memory that we provide some persistence to our throngs of devices and their song. We capture signals and persist them in one form of memory or another.

The majority of interest in IoT is currently dedicated to exactly this process: how do we capture the data? What protocols do we use? Is MQTT better, or does AMQP provide a better mechanism? What are the velocity and volume of the data? Do we capture it as a stream or as micro-batches?

We also spend a great deal of time figuring out whether it is better to store in HDFS, MongoDB or HBase, and whether we should use Azure SQL Data Warehouse, Redshift or something else. We read studies about performance benchmarks and guidelines for making these choices (I do at least).

These are all worthwhile and interesting problems that also take up a large part of my time, but they completely miss the point! If we go back to the ancient myth, the lark did not want to remember and persist everything. It merely wanted to persist the death of its father; it only wanted to persist something because it was something that mattered!

What Actually Matters?

And this is where we go wrong. We are persisting the same incessant bird song, frequently without pausing to think about what actually matters. We should heed the advice of the ancient myth and reflect on what is important to persist. I know this goes against most received wisdom in BI and Big Data, where the mantra has been “persist as much as possible, you never know when you are going to need it”.

But the tide is actually turning on that view due to a number of limiting factors around storage, processing and connectivity. Granted, storage is still getting cheaper and cheaper and network bandwidth more and more ample. Even processing is getting cheaper. However, if you look closely at the fine print of the cloud vendors, the services that process and move data are not all that cheap. And you do need to move and process the data in order to do anything with it. Amazon will let you store almost anything at next to no cost in S3, but if you want to process it with Glue or query it with Athena it is not so cheap.

Another emerging constraint is connectivity. Many devices today still connect to the Internet through the cellular network. Cellular networks are operated by carriers that pay good money for the frequencies used, and this cost is passed on to the users. To the carrier, a device is not that different from a cell phone, so naturally you pay something close to the price of a cell phone connection, around $30 to $40. I do get the enthusiasm around billions of devices, but if the majority of these connect to the Internet through the cellular radio spectrum, then the price is also billions of dollars.

Suddenly the bird song is not so pleasant to most ears, and our ornithological enthusiasm is significantly curbed. These trends are enough to warrant thinking about persisting only what actually matters. That can still be a lot: you may well have a feasible use case for storing all your engine data, but the 120 data points per second from your connected toothbrush will probably turn out not to matter that much.

And I have not even started to touch on how you would ever make sense of all the data you persisted to memory. Most solutions do not employ adequate metadata management, data catalogs or anything else that would tell anyone what a piece of data actually “means”. If we don’t know, or have no way of knowing, what a piece of data means, there is no reason to store it. If you have a data feed with 20 variables but you don’t know what they are, how is it ever going to help you?

Store what matters

This can be turned into a rule of thumb about data storage in general: data should be stored only to the extent that someone feels it matters enough to describe what it actually is. If no one can be bothered to pin down a description of a variable, and no one can be bothered to store that description anywhere, it is because it doesn’t matter.
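As a small sketch of how that rule of thumb could be enforced in practice (the catalog, field names and ingestion function are all hypothetical), an ingestion step could simply refuse to persist any field that nobody has bothered to describe:

```python
# Hypothetical data catalog: only fields someone cared enough to describe are listed.
CATALOG = {
    "engine_temp_c": "Engine coolant temperature in degrees Celsius, sampled every second.",
    "vehicle_id":    "Internal fleet identifier for the vehicle sending the reading.",
}

def store_what_matters(reading: dict, catalog: dict = CATALOG) -> dict:
    """Keep only the fields that have a description in the catalog; drop the rest."""
    kept = {k: v for k, v in reading.items() if k in catalog}
    dropped = set(reading) - set(kept)
    if dropped:
        print(f"Dropping undescribed fields: {sorted(dropped)}")
    return kept  # this is what actually gets persisted

# Example: the toothbrush-style noise ('x17', 'flags') never makes it to storage.
print(store_what_matters({"engine_temp_c": 92.4, "vehicle_id": "NYC-1138", "x17": 3, "flags": 255}))
```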

https://en.wikipedia.org/wiki/The_Birds_(play)

 

Pragmatic Idealism in Enterprise Architecture

Being an enterprise architect, I am not insensitive to the skyward gazes of project managers or developers when they are “assigned” an architect. The architect is frequently perceived as living in an ivory tower of abstraction, in perfect disjunction from the real world. At best he is a distraction, at worst a liability.

The architect frequently lives in a completely idealized world and is tasked with implementing those ideals. However, this often fails precisely because the ideals rarely conform to reality. The architect fails to appreciate what, in military parlance, is sometimes referred to as “the facts on the ground”. He is too often the desktop general.

Symptoms of an idealist regime are:

  • There is a guideline for that
  • Templates for any occasion
  • We have it documented in our Enterprise Architecture tool, any other questions?
  • More than 3% of the IT organization are Architects
  • CMMI level 5 is viewed as the minimum requirement for doing any kind of serious work

Now consider the architect’s counterparts: project managers, developers or sys admins that just want to get the job done in a predictable way. These guys live “the facts on the ground”. They know all the peculiarities of the environment or system being worked on.

Symptoms of a pragmatist regime are the following:

  • If something breaks we fix it and get back to our coffee break
  • Upgrade what is already in place when it runs out of support (urgency promotes action)
  • Enhance existing functionality, it already works
  • New technology is like the flu, it will pass, no need to get it
  • A big pot of Status Quo (not the band) with a dash of Not-invented-here

The pragmatist will never fundamentally transform the situation, because he always wanders from compromise to compromise, from battle to battle. That will rarely win the war.

It seems we are caught between a rock and a hard place. One, the idealist, will never move anything but has a sense of direction. The other, the pragmatist, will move plenty but has no sense of direction, so the movement will mainly be in circles. Let us turn our attention to a possible way out of this conundrum. The answer is a philosophical stance first attributed to John Dewey at the start of the previous century: Pragmatic Idealism. Well, duh. Was that obvious?

It is just as obvious as it is rare, in my experience. Pragmatic Idealism is a term most often used in international policy, but is enterprise architecture not often similar to just that? It posits that it is imperative to implement ideals of virtue (think perfect TOGAF governance and templates for every possible architectural artifact), but also that it is wrong never to set these ideals aside and compromise in the name of expediency.

What does this mean in practice? Here are a number of principles to help you live by the ideals of pragmatic idealism (if that makes sense).

 

Have ideals and communicate them frequently. If we become too pragmatic we lose the purpose of being an architect. We have to remember that the direction has to be set by us, and we need everyone to know about it, even if it is not immediately clear how we will get there. We need to provide input on whether we should go all in on open source or whether Microsoft is a preferred vendor. One caveat is that we have to be very sure about the ideal, because once we have started to communicate it there is no way back. You will lose all credibility as a visionary if you stand up one day and say open source is the way forward, and the next you sign an enterprise license agreement with Oracle.

This means that you have to bring a very good knowledge of where your organization is and where it wants to go. Without a solid understanding of both, you are better off playing it safe and going with the flow. That said, it should quickly be possible to pick up one or two key ideals.

Ideals are ideally expressed as architecture principles. I often use TOGAF’s formula of Name, Statement, Rationale and Implications:

  • Name – Should be easy to remember and represent the essence of the rule
  • Statement – Should clearly and precisely state the rule. It should also be non-trivial (“don’t be evil” does not pass the test)
  • Rationale – Provides a reason for the rule and highlights the benefits of it
  • Implications – Spells out the real-world consequences of the rule

The first thing you should do, then, is flesh out these ideals and create a process through which you can build buy-in for them. Chances are the organization already has some principles you can work from, but make sure they also align with where you feel the organization should be going.

Oh, and don’t have too many ideals, that is, don’t have too many principles. We are shooting for something around “the magical number seven, plus or minus two”, as the title of George Miller’s groundbreaking article had it. In that article Miller showed that the number of items of information people can reliably hold in mind is about seven, plus or minus two. While later research suggests it is probably even lower, this is still a good rule of thumb. Ideally you would want to be able to remember the principles yourself, but, more importantly, you want everyone else to remember them as well.

 

Approach every problem with the minimum amount of energy and structure necessary

Say what? Is he advertising laziness now? Not quite; there is actually hard science behind this. We know from the second law of thermodynamics that disorder is the only thing in the universe that comes for free and automatically. Conversely, order requires energy. Any person or organization has only a limited amount of energy. This means that the net effect of your architecture endeavors will be maximized with the minimum amount of order necessary. Consequently, the more thoughtfully you use that energy, the more effective you will be.

In practical terms this means that you should not develop 25-item templates for nine different types of meeting minutes if you are the sole architect in a nine-person start-up. You are clearly spending too much energy. It may be your ideal to have a template for every purpose, but maybe it can wait until the purpose actually arises. Similarly, you should not do all your architectural documentation in your code if you are building an application with 100 million lines of code, even if Lightweight Architecture Decision Records, as Thoughtworks advocates, are your highest ideal.

Every problem is different. The architectural skill you have to develop is to find out how important each one is. The more important a problem is, the more structure and energy it deserves. This is why documentation requirements are higher in regulated industries like pharma and banking; there it is simply a necessity for staying in business.

There are different ways to gauge importance. First of all, if something is recurring frequently, chances are that it is important. At least from the perspective of efficiency it is worthwhile to bring structure to frequently recurring events. This is why many people took the time to structure an email signature with their name and phone number. That way they do not have to write it every time someone needs it.

Secondly, important stuff is tied to the business model. If you are in banking, data management, access control and auditing are important. In that case you want to bring as much structure and predictability to them as possible.

 

Make every compromise count. You have to make sure that the ideals you are following are known, and that every compromise you make is registered as such by the people on the ground. If no one knows the direction, we are back to basic pragmatism, where everything is just another step in a random direction. You have to make sure that every compromise somehow leads towards a larger goal.

Say you want to move to a cloud-first strategy and a given project has reservations about the cloud and wants to implement the solution on a local VM. Don’t just say OK, even if you think it is fine for this particular project. Make sure you spell out the advantages and agree on non-trivial reasons why this particular project does not have to go to Azure or AWS. Sometimes a compromise can also be used as leverage for other architectural decisions, since people know you are there to implement ideals. This can even work doubly to your advantage: you are seen as pragmatic and possible to work with, and they will feel they owe you, or at least be on friendly terms. But beware, because it may just as well be perceived as weakness if there isn’t a good reason for the compromise.

 

The path forward

The world is divided into idealists in ivory towers, watching and shouting, and pragmatists scurrying about in their trenches like rats in mazes, but only the pragmatic idealists can effect real change for the better. If we lean toward one we should try to be aware of the merits of the other. I have given a couple of principles I have found helpful: have ideals and communicate them, approach every problem with the minimum amount of energy and structure necessary, and make every compromise count. There are many more ways to effect change, but it all starts with having a pragmatic idealist spirit.

 

Architecture – Turning Fiction into Fact

I am an admirer of my compatriot Bjarke Ingels, who is a real architect. His buildings always stretch the boundaries of the possible. For example, can you create an idyllic ski slope in a flat country like Denmark and put it in the center of a city with more than a million inhabitants? Sure, just put it on top of an old power plant that needs rebuilding, and oh, maybe you could have the power plant’s chimney puff smoke rings. As Ingels puts it, “Architecture is the fiction of the real world”, and this is what he did.

But buildings rarely exist in isolation; they are usually part of a city. Ingels continues: “The city is never complete. It has a beginning but no end. It’s a work in progress always waiting for new scenes to be added and new characters to move in”. While Ingels is talking about real world brick and mortar buildings and other constructions, there is no reason why this would not apply to IT architecture as well.

This quote also applies to any modern enterprise. There will always be an IT landscape and it is always a work in progress. You will never finish. The only thing you can do is manage the change in a more or less efficient way. When we create the IT architectures of the future we are in essence turning the fiction of user stories and personas into new scenes and characters of this ever evolving city. Our architectures will be evaluated on how real characters inhabit the structures we create. Will our designs turn out like the Chinese ghost town of Ordos, or like the smooth coordination of the Tokyo subway? Just as buildings and towns are only successful if they become liveable for the people they are meant for, the IT systems we build have to fulfill the needs of their users and the surrounding systems.

The Frontier of Imagination

Typically we will ask people what they want and document this as requirements, user stories or use cases. This is all well and good, but if Ingels had gone out and asked the people of Copenhagen what they wanted, we would have gotten more of the same apartment blocks and villas that are already prevalent. There would be no power plant with a ski slope, simply because people would never have thought of that. If Steve Jobs and Henry Ford had just settled for what people wanted, we would still be talking on Nokia phones, hacking away at clunky black computers and riding in horse-drawn carriages. We cannot expect our users, customers or managers to supply the imagination. That is something we as architects have to bring.

The real frontier for IT architecture is imagination. We need to be able to imagine all the things that the requirements and user stories don’t tell us; we need to be bold and create things that no one ever asked for. This is difficult for several reasons. First, an architect is usually measured on how well the solution he or she designs meets the requirements put in front of them, so there is little incentive to do anything more. Second, it is often difficult to gauge what will be needed in the future. Third, there is a pull towards best practice and existing patterns, which does not further innovative solutions.

An Example

However, these obstacles to imagination can be managed. As Ingels has shown, it is sometimes possible to cover all the basic requirements in a cost-effective way and still do as well as or better than traditional solutions. The same goes for IT architecture.

Let’s look at an example: as part of the continuous evolution of the IT landscape, architects are often asked to re-architect legacy solutions. Some legacy solutions are built on costly message queueing software. These can easily be upgraded or replaced with similar products, and you can usually show some cost reduction and performance improvement by shopping around, but at the end of the day you would still have the same basic functionality and have missed an opportunity for improvement.

Let’s take a step back and consider what this architecture does and what it could do. Basically it moves data between endpoints in a secure way without ever losing messages. Now we have to ask ourselves: can we do this in a better way? One obvious way to never lose a message is simply to store it permanently. So, when a message that would usually be written to a queue comes in, we store it on a persistent medium instead and keep it for as long as it makes sense. We can impose a retention schedule to move the data between different types of storage, that is, from hot to cold storage, and delete messages if we ever get tired of having them around. The first step is therefore to catch all data and just store it.

Once the data is stored we still need to get it to the target endpoints. Instead of sending messages on a queue, we send an event containing a pointer to where we stored the message. This will inevitably be a smaller and more uniform message than the traditional message in a message queue, and when messages are small and uniform it is a lot easier to optimize everything from speed to size.
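Here is a rough sketch of those two steps, with made-up names and simple in-memory stand-ins for whatever persistent store and event channel you would actually use: the full payload is written once, and only a small, uniform event with its ID and location is published.

```python
import json, uuid, hashlib
from datetime import datetime, timezone

message_store = {}   # stand-in for S3, HDFS, a database, etc.
event_channel = []   # stand-in for Kafka, a webhook call, an ODBC insert, etc.

def persist_and_notify(payload: bytes, source: str) -> dict:
    """Store the full payload, then publish a small uniform event pointing at it."""
    message_id = str(uuid.uuid4())
    message_store[message_id] = payload            # durable write; nothing is ever popped off a queue
    event = {
        "message_id": message_id,
        "source": source,
        "stored_at": datetime.now(timezone.utc).isoformat(),
        "size_bytes": len(payload),
        "sha256": hashlib.sha256(payload).hexdigest(),
        "location": f"/messages/{message_id}",     # where a consumer can fetch the payload
    }
    event_channel.append(json.dumps(event))        # every event has the same small, uniform shape
    return event

print(persist_and_notify(b'{"orderId": 42, "status": "shipped"}', source="order-system"))
```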

This move also opens up different ways to consume the event. The target endpoints are no longer forced to run a queue client from the queue software vendor; they could just as well choose a REST interface, ODBC or Kafka. Our event generator just has to be able to connect to these different types of interfaces. That means it has to be able to call a web service, write to a table through ODBC, publish to Kafka or talk to any other type of endpoint that is relevant.

The target endpoints now have much more freedom in how they receive notifications and how they handle them. The center just has to have a number of channel adapters for the different types of endpoints. These should be simple and easily configurable, since the event is always uniform in format and size.

Target endpoints then retrieve the payload from the message store based on the metadata in the event. Easy: we just put an API in front of our message store for them to call with the message ID. This API could be a web service, an ODBC connector or simply a URL to the payload included in the event.
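Continuing the sketch above, the API in front of the message store can be as small as a single endpoint keyed by message ID; the framework, route and sample data below are illustrative, not a prescription.

```python
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Stand-in for the persistent message store from the previous sketch.
message_store = {
    "demo-id-1": b'{"orderId": 42, "status": "shipped"}',
}

@app.route("/messages/<message_id>")
def get_message(message_id):
    """Return the original payload for the ID carried in the event; retries cost nothing since nothing is deleted."""
    payload = message_store.get(message_id)
    if payload is None:
        abort(404)
    return jsonify({"message_id": message_id, "payload": payload.decode("utf-8")})

if __name__ == "__main__":
    app.run(port=8080)  # a consumer would GET /messages/<id> with the ID from the event
```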

This approach lets us build a solution that does the same as a traditional queue: take a message from a source endpoint and ensure guaranteed delivery to a target endpoint. Only now we can do it with lower latency and more cost efficiently, and archiving is a built-in feature. If something fails, the process can be retried any number of times, since the message is never dropped from the store and the API is always available.

This architecture supports the basics but also opens up a number of new possibilities. Everything that has passed through our messaging system is now available through an API. The target endpoints can retrieve any subset of messages on the subscriptions they follow at any point in time, as opposed to the traditional approach where a message sat on the queue and, once it was read, it was gone.

We can even create a search API for the messages in our message store. If we need to find a particular message, we can now let the target endpoints do that automatically. This can be expanded further: we might create a Hive table on top of the message store, so that all the data that went through our pipeline can be accessed through SQL/HiveQL. In the traditional world we would have to set up a separate solution to aggregate messages in a different store, ETL them into a data warehouse and create models exposed through BI tools for end users to gain access.

Turning fiction into fact

This solution turns a simple queuing setup into a cheaper, faster version of queueing that is suddenly also an API and a data mart. It shows that there is no reason to be limited by the mental frameworks of the legacy technologies we are replacing. We need to think about the possibilities available to us today and imagine how we would solve the problems that legacy technologies solved, given current technology and the wider needs beyond this particular use case. We can, in fact, turn fiction into fact; sometimes we just have to be bold and let go of best practice and received wisdom.


How to come up with a product that is truly unique

How do you come up with a product idea that the whole world is not already selling? This is an interesting question that I think every entrepreneur asks themselves regularly. I don’t have the answer, but I can tell you something about how to end up with one.

Ban TechCrunch 
The first step is to stop reading start-up media. Any start-up media! That’s over, period. It just promotes groupthink and turns your attention to products and services that everybody is already building. This is why the world is flooded with instant messaging apps, photo apps and to-do lists.

Think of it as entrepreneur information detox. You need to get it out of your system. If you absolutely need to read something, read something that nobody else reads. I can recommend Kafka’s short stories, Tomas Tranströmer’s poems or Mike Tyson’s biography.

If you have special knowledge…
Do you know something that most other people don’t? Have you worked in a niche? If so, think hard about how to leverage that knowledge for a product or service. Is there some problem that is frequent in this special area you know, preferably one that someone would pay to get rid of? If so, you have your first lead right there.

If for example you work in a cinema you may have noticed that it is a problem to clean chairs quickly enough between showings if somebody spilled something. Maybe the solution is a special coating for the chairs, maybe a cover that can be changed.

A good example of a company that did this is Zendesk. Zendesk started from the founders’ experience working with customer support systems, which they found to be too complex and difficult to implement and use.

If you have no special knowledge…
If you don’t have any specialised knowledge, which is often the case if you are fresh out of school or have spent most of your youth playing FIFA, there are several options. Think about stuff that you absolutely would not like to work with. Stuff that would be really boring, disgusting or socially awkward. It should be something you would lie about on a first date.

Think along the lines of condoms for dogs, reading stories for senior citizens, avoiding sewage blockage or code review. Now come up with a product/service that would make this thing easier.

“But why would I do something I don’t want to do?” you may ask. The thing is that this is usually a good indicator of what other people think as well and that is where you have the opportunity.

One of my favourites in this area is the company The Specialists, which employs people with autism to do tasks that others find tedious, like testing. What is incredibly boring or difficult for other people is something they like to do. Another example is Coloplast, which makes products for continence care. Essentially they just make plastic bags, but for a special purpose.

Go data-driven
Another option is to find some way to pick up on a demand that is currently not well served. It could be selling niche stuff on Amazon, which can be amazingly lucrative (see this thread on Quora). There are even tools for discovering such opportunities, like Jungle Scout, and general SEO tools such as Moz can give you a similar effect.

Get out into the world..
Now that you have some vague directions, you have to go out into the world to find out how to build a business model around them. This takes research about the users and customers, but also about competitors and suppliers. Strategyzer’s Business Model Canvas is a good shorthand for figuring out what to think about and where to go.

Lean start up, MVP etc…
I’m not going to go into more detail about this here. A quick search will flood you with quality material on how to build a product from an initial idea and turn it into a success.

Building a Product Strategy for a Backend Product

When you learn and read about product management, you quickly learn how important it is to engage with your customers, be agile and run experiments. But when your product is a back-end system with no end users, only other applications, and it is considered key infrastructure that others depend on to work in a predictable way, it is not so easy to be agile, do A/B tests, run lean start-up style experiments or do user testing.
This is a classic problem and one very often ignored in product management literature, which always seems to be about products with users you can sit down and talk to in order to learn what to do. There are, however, a few things you can do if you are the product manager of a back-end product and need to build a product strategy.


Align Strategy

It is necessary to sit down and look at all the consumers of your product. They are essentially your customers. That means identifying all other products that depend on, or will depend on, your product. Here you actually don’t need to concern yourself with their end users.

Once this is done, find out what the strategy is for each of those products. Unfortunately their product managers don’t always have one; then you need to look at other artefacts like roadmaps, visions, even marketing material, and it is a good idea to talk to them to understand where they are moving. Doing this may uncover contradictory demands. One product may want you to focus on microservices, another on batch deliveries, and a third wants a message-based architecture. Some may prefer REST/JSON type services, others SOAP/XML, and others just FTP/CSV in a scheduled batch. Welcome to the world of Agile development, where teams decide inside their own bubble what would be most agile for them.

Unfortunately it is your problem to reconcile these differences with the different consumers. In order to do this you need stakeholder management.

Manage Stakeholders

It is necessary to chart the different stakeholders and weigh their importance, to do a typical stakeholder analysis where you find out what their interests are and how you should communicate with them. Unfortunately, most product managers leave it at that and forget the art of stakeholder management. In the best case they fill out a stakeholder analysis and store it on their hard drive, never to be opened again. But stakeholder management is more like politics. Watch Game of Thrones or House of Cards for inspiration.
You have to understand the different factions and their power base. Understand the different people and their culture. You have to lobby ideas, be the diplomat, explain the positions of other stakeholders. Look at key persons’ social network profiles to find out what type they are, where they live, what they do in their spare time. Understand their concerns, apply pressure when needed and yield when necessary. Remember, politics is all about compromise. But you can only do that once you have a plan.

Draft a plan

All the input you have gathered from the above points now has to be integrated with your own knowledge about the product. What are the possibilities, the technical limitations, the technical debt? Given your knowledge of the state of your product, the possibilities and the available resources, you have to plan how it should change. Draft a plan around a few headlines. Focus for example on capabilities you would like to develop, data you want to capture or ways of working with consuming products. Settle on only a few key goals, but have suggestions for more.

Reiterate

Now, start over again, because product strategy, like any strategy, takes time, and you need to form a coalition behind it if it is to succeed. You are not finished until you have that coalition behind you. Only then will you have a proper product strategy.

Wyldstyle or Emmet? Lego lessons for product managers

This holiday season offered a chance for me to see The Lego Movie once again. Since I had seen it once already, my mind, not so tied up with following the action and intricate plot, was free to see the deeper perspectives in the film and put it into a product management context.
At its core the movie is about two different ways of building with Lego. On the one hand we have Emmet, the super ordinary construction worker, and his friends, who always build according to issued plans. On the other hand we have Wyldstyle and the master builders, who build innovative new creations from whatever is available.
The master builders are the renegades, “the cool kids”, those who fight the evil President Business. They are extremely creative and anarchistic. The prophecy of Vitruvius states that the chosen one, a master builder, will save the universe.
When Emmet becomes the chosen one, a certain friction arises, because he definitely does not have much in the way of creativity or innovation potential. But he redeems himself in the end, because he is able to make plans and have everyone work as a team. He gets the master builders to work together to infiltrate the corporate offices and so on.

Working as a team
So, what does this mean? We could generalise Lego building to any kind of building, and therefore also to building software. There are two modes of creation: the heroic genius way of the master builder, or the dull, plan-based way of the team. Just as in the movie, we in the tech industry celebrate the master builders: we cheer the work of the lone geniuses, Steve Wozniak, Linus Torvalds, Mark Zuckerberg and so on.
But as Walter Isaacson’s latest and highly recommendable book “The Innovators” shows, the geniuses NEVER made anything entirely by themselves. It was always part of some sort of team effort.
Further, every day the vast majority of software out there is built by lifeless ordinaries like Emmet, who are just following plans. Maybe it is time for their vindication, and time to take seriously that software development is a team effort. It is never the result of the mythical master builder, and there is no prophecy that a chosen one will save the universe. The ability to work as a team is just as important as being a genius.

Worth keeping in mind for the product manager
In practice there are three lessons we can learn from The Lego Movie:
1) Don’t frown upon a plan. Even if it might change along the way, a plan is not a bad thing in itself. Agile development, for example, is often pitted against plan-based development. There can be different kinds of plans, like roadmaps, specifications or project plans. Following your gut and just jumping from sprint to sprint entirely on inspiration and the spur of the moment will not suffice. It will, metaphorically, only let you charge towards the front door, while a plan may take you all the way to the top.
2) There is an I in team – it’s hidden right in the “A” hole. A team effort is a team effort, and if you can’t control your ego you are an A-hole. It is important to keep egos in check, because the power of a team will always be superior to that of any individual. Most people are not geniuses, but that doesn’t mean their effort is worth less. The entire team may lose motivation, and coordination will suffer, if egos prevail.
3) Master builders are great and necessary. It is from the individuals who dare to think differently that new impulses come. Prototypes, drafts and wild ideas are the domain of the master builder. He or she is not sufficient, but is a crucial source of innovation. It is therefore necessary to allow room for the innovators in a team: not so much that their egos take over, but enough that they don’t wither and die.
As a product manager or any type of manager it is therefore important to keep these three lessons in mind: have a plan, keep egos in check and give room for the innovators.

Bloatware is a law of nature. Understanding it can help you avoid it

Today software can be churned out at impressive speed, but few have stopped to ask whether all the features being built were really necessary in the first place. Lean start-up, Agile, DevOps, automated testing and the like are frameworks that have made it possible to develop quality software quickly. But are all the features really used by real users, or were they just clever ideas and suggestions? Not much research exists, but the Standish Group’s CHAOS Manifesto from 2013 has an interesting observation on the point.

“Our analysis suggests that 20% of features are used often and 50% of features are hardly ever or never used. The gray area is about 30%, where features and functions get used sometimes or infrequently. The task of requirements gathering, selecting, and implementing is the most difficult in developing custom applications. In summary, there is no doubt that focusing on the 20% of the features that give you 80% of the value will maximize the investment in software development and improve overall user satisfaction. After all, there is never enough time or money to do everything. The natural expectation is for executives and stakeholders to want it all and want it all now. Therefore, reducing scope and not doing 100% of the features and functions is not only a valid strategy, but a prudent one.”

CHAOS Manifesto 2013 

So 20% of features are the ones most often used. It looks like the Pareto principle is at work here. The Pareto principle states that 80% of the effect comes from 20% of the causes. Many things have been described with it, from the size of cities to wealth distribution to word frequencies in languages. An entire industry has even grown out of it, based on the bestselling book “The 80/20 Principle: The Secret of Achieving More with Less” by Richard Koch. Other titles expand on this: “The 80/20 Manager”, “80/20 Sales and Marketing” and “The 80/20 Diet”.

This could seem a bit superficial, and you would be forgiven for wondering whether there really is anything to the 80/20 distribution. It could just as well be a figment of our imagination, an effect of our confirmation bias: we only look for confirming evidence. Nevertheless, there seems to be solid scientific ground when you dig a bit deeper.

The basis for the 80/20 principle
The Pareto principle is a specific formulation of Zipf’s law. George Kingsley Zipf (1902-1950) was an American linguist who noticed a regularity in the distribution of words in a language. Looking at a corpus of English text, he noted that the frequency of a word is inversely proportional to its rank order. In English the word “the” is the most frequent and thus has rank order 1; it accounts for around 7% of all words. “Of” is the second most frequent word and accounts for about 3.5%. If you plot a graph with rank order on the x-axis and frequency on the y-axis, you get the familiar long-tail distribution that Chris Anderson has popularised.
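A quick way to see Zipf’s observation for yourself: count the words in a text, sort them by frequency and compare each frequency with the top frequency divided by the rank. The snippet below uses a tiny made-up text, so the fit is rough; a real corpus shows the pattern much more clearly.

```python
from collections import Counter

# Placeholder corpus; a real test of Zipf's law needs a much larger body of text.
text = """the quick brown fox jumps over the lazy dog the dog barks and the fox runs
          over the hill while the lazy dog sleeps in the sun and the fox watches""".lower().split()

counts = Counter(text)
ranked = counts.most_common()          # [(word, frequency), ...] sorted by frequency
top_freq = ranked[0][1]

print(f"{'rank':>4} {'word':>8} {'freq':>5} {'Zipf prediction':>16}")
for rank, (word, freq) in enumerate(ranked[:5], start=1):
    # Zipf's law: the frequency of the word at rank r is roughly top_freq / r
    print(f"{rank:>4} {word:>8} {freq:>5} {top_freq / rank:>16.1f}")
```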

One thing to notice at this point is that the 80/20 split is relatively arbitrary. It might as well be 95/10 or 70/15. What matters is the observation that a disproportionately large share of the effect comes from a small share of the causes.

While Chris Anderson’s point was that the Internet opened up business opportunities in the tail, that is, for products that sell relatively infrequently, the point for software development is the opposite: do as little as possible in the tail.

Optimizing product development
We can recast the problem by applying Zipf’s law. Take your planned product and line up all the features you intend to build. If usage follows Zipf’s law, the most frequently used feature will be used twice as much as the second most used, and three times as much as the third.

In principle you could save a huge part of your development effort if you were able to find the 20% of features that would be used the most by your customers. How would you do that? One way is the lean start-up approach, which is reaching the mainstream. Here the idea is that you build some minimal version of the intended feature set of your product, either by actually building a version of it or by giving the semblance of it being there, and then monitor whether that stimulates use by the intended users.

This is a solid and worthwhile first choice. There are, however, reasons why it is not always preferable. Even working with a lean start-up approach, you have to do some work in order to test all the proposed features, and that amount of work need not be small. Remember, the idea of a Minimum Viable Product is only that it is minimal with regard to the hypotheses about its viability. That is not necessarily a small job.

The Minimum Viable Product can be a huge effort in itself. Take for example the company Planet Labs: their MVP was a satellite! It is therefore worthwhile to consider, even before building your minimum viable product, what exactly is minimal.

Ideally you want a list of the most important features to put into your MVP. That way you will not waste any effort on features that are not necessary for it. Typically a product manager, product owner or the CEO dictates what should go into the MVP. That is not necessarily the best way, since their views could be idiosyncratic.

A better way
A better way to do this is by collecting input on possible features from all relevant stakeholders. This will constitute your backlog. Make sure each item is well enough described or illustrated to be used to elicit feedback. Here you have to consider your target group, the language they use and the mental models they operate with.

Once you have a gross list of proposed features, the next step is to find a suitable group of respondents to test whether these features really are good. This group should serve as a proxy for your users; if you are working with personas, find subjects that are similar to your core personas. Then simply make a short description of each intended product feature, or even illustrate it, list the proposed features and ask the subjects in a survey or some similar fashion: “If this feature were included in the product, how likely is it that you would use it, on a scale from 1 to 5?”

Once you have all the responses, calculate a score for every feature by adding up the ratings it received. Then you can follow Zipf’s lead and rank the features from top to bottom. Start with the highest-scoring feature and work down the list until you have included roughly the top 20% of the features; if the Pareto pattern holds, they should account for the bulk of the total score. It is still a good idea to do a sanity check, though, so you don’t forget the login function or something similar (it is possible to trust algorithms too much).
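A small sketch of that ranking step, with invented feature names and survey totals: sort the features by score, cut the list at the top 20% by count, and check how much of the total score the shortlist covers.

```python
# Hypothetical survey results: total of the 1-5 ratings each proposed feature received.
feature_scores = {
    "login": 480, "search": 455, "export_pdf": 120, "dark_mode": 95,
    "share_link": 310, "comments": 240, "emoji_reactions": 60, "audit_log": 45,
    "bulk_edit": 150, "offline_mode": 70,
}

ranked = sorted(feature_scores.items(), key=lambda kv: kv[1], reverse=True)
total = sum(feature_scores.values())
cutoff = max(1, round(0.2 * len(ranked)))   # top 20% of features by count

shortlist = ranked[:cutoff]
covered = sum(score for _, score in shortlist) / total
print("Build first:", [name for name, _ in shortlist])
print(f"These {cutoff} features cover {covered:.0%} of the total score")
# Sanity-check the shortlist by hand afterwards, e.g. make sure 'login' did not fall off it.
```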

What to do
Now that you have saved 80% of your development time and cost, you can use the freed-up effort to increase the quality of the software. You could work on technical debt to make it more robust while you wait for results.

You could also use this insight in your product intelligence and look at the top 20% most frequently used features of your existing product. Once you have identified them, optimize them so they work even better; that is a shortcut to happier customers. You could optimize response times for these particular features so the most important ones work faster, improve their visibility in the user interface so they are even easier to see and get to, or use the insight in marketing to help focus the positioning of your product and communicate what it does best.

To sum up, product utilization seems to follow a Zipf law. Knowing the top 20% of features can help you focus development effort, but it can also help you focus marketing effort, user interface design and technical architecture.

 

References:

Richard Koch: “The 80/20 Principle: The Secret of Achieving More with Less”

Chris Anderson: “The Long Tail”

http://www.quora.com/What-is-the-deeper-physical-or-mathematical-logic-behind-the-pareto-principle-of-an-80-20-distribution

http://www.quora.com/Statistics-academic-discipline/What-is-an-intuitive-example-of-the-Pareto-Distribution

http://www.quora.com/Pareto-Principle/In-what-conditions-would-you-expect-a-power-law-distribution-curve-to-emerge

https://en.wikipedia.org/wiki/Feature_creep

https://en.wikipedia.org/wiki/Software_bloat

Photo by flickr user mahalie stackpole under CC license


Is the Apple Watch a Telegraph?

The coming of the Apple Watch is the buzz of the moment. Apple is the champion of making things simpler, but have they gone too far with the Apple Watch and made it too simple?

One click bonanza

The received wisdom in new product development is that you should take out steps and continually simplify the product. This is what Amazon did with One-Click and this is what Apple did with [insert your favorite Apple product here]. The reason is that it increases usability.

But sometimes simplification reaches a point where it no longer improves usability. Any product has some measure of complexity. Complexity is conventionally conceived as the number of possible states a system can have, so a rough measure of complexity is the number of variables a user can choose between and the number of states they can assume.

Some products are the antithesis of the Amazon One-Click. Microsoft’s Office suite has heaps of functions that are never used. Other products, like SAP, have a lot of different screens with a lot of functions, which makes them difficult to use. But the reason these functions are there is often that users actually need them, so for those users they are necessary. If you take away that functionality you make the user interface simpler, but the complexity of the task you wish to do remains; only now, because of the too simple interface, the task is even harder than it was before. This is what we could call residual complexity, that is, the complexity of a task that is not supported by the tool.

Let me give you an example of high residual complexity. We bought a dishwasher called something with one-touch (perhaps inspired by Amazon?), where indeed there was only one button. On the face of it, good thinking: simplify to the core of the problem. What do I want to do with a dishwasher? Make it wash my dishes. That works very well, under normal circumstances. That is, until I discovered, after it had been installed, that it just didn’t work. There is not much you can do with one button then. I called the store and they had me push some sequences on the button to run diagnostics. Suddenly I found that the dishwasher was stuck in Turkish, a language I am not intimately familiar with. What do you do when you have only one button?

Finally it was back in the original language, a technician came on site to fix it, and it worked. Now we were happy, until the dishwasher had finished its washing cycle. For some reason, the product manager or whoever was in charge thought it would be nice if the dishwasher played the “Ode an die Freude” from Beethoven’s 9th symphony. I love that piece, and especially the Ode, but not when it is played as a 15-second melody sequence on a clunky 8-bit sound generator and repeated three times. Now I wanted to turn it off, but what do you do with only one button?

One click communication

To illustrate it further, let’s take the simplification to its extreme. Take a computer keyboard. It has about 50-60 keys, each of which can be on or off. That leaves us with a product with 100-120 different possible states (not counting combinations, since a keyboard records only one stroke at a time). If you wanted to simplify this maximally, you could introduce a One-Click concept where the keyboard had only one key that could be on or off. We would have reduced the complexity of the user interface by a factor of 50 to 100, or, more popularly, made it up to 100 times simpler.

That, however, was done nearly two centuries ago. It's called the telegraph. The telegraph clearly illustrates the problem of residual complexity: in order to carry out the necessary task (communication) with only one button, the complexity shifts from the user interface to the task itself. You need to learn Morse code in order to use it!
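
To make that shift tangible, here is a small, hypothetical sketch of a one-button interface: any letter can still be sent, but only because the user has memorized a code table. The interface has two states; the Morse table the user must carry in their head has dozens of entries.

  # Hypothetical one-button "keyboard": each letter becomes a sequence of
  # short and long presses (International Morse code). The interface is
  # maximally simple; the code table is the residual complexity.
  MORSE = {"H": "....", "E": ".", "L": ".-..", "O": "---"}  # excerpt only

  def send(word):
      # Translate a word into the press sequence for the single button.
      return " ".join(MORSE[letter] for letter in word.upper())

  print(send("hello"))  # .... . .-.. .-.. ---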

That means that when there is inherent task complexity, you cannot simplify the user interface beyond a certain point if the goal is to increase usability.

Residual complexity and the Apple Watch

Now let's return to the Apple Watch. Compared to an ordinary watch, the Apple Watch is not simpler. Quite the contrary. Compared to a smartphone, on the other hand, it is simpler, and many, including Apple, compare it to exactly that. You can do many of the same things on the Apple Watch as you can on the iPhone.

For example, you can read and reply to messages, only there is no keyboard. So if you want to reply, you have to choose a preconfigured reply or dictate one.

You can read an email there, but if it is longer than a short message you will have to scroll incessantly. You can listen to music, but what if you want to search for a song? You can look at your calendar, but what if the entry is more than 15 characters or you want to move an appointment?

All of these are examples of residual complexity. Could it be that Apple made it too simple? Could it be that Apple just built a new telegraph for your iPhone?

 

Photo by Clif1066 @flickr under CC license

Product Management Maturity And Tool Support

A recent report on product management tools by Sirius Decisions has revealed that 50% of product managers are looking for product-management-specific tools.

There are a number of dedicated product management tools, such as those surveyed by Sirius Decisions, yet when you ask product managers only 13% seem to use such tools. What can be gleaned from another survey, by Product Hunt, is that no dedicated product management tool seems to be on the radar of product managers. At first I thought it was a mistake, so I contacted Product Hunt to verify. The method by which they arrived at their list was the following:

We came up with the PM tools list by polling leading product managers in the industry and that’s what they selected

Being the supplier of one such dedicated product management tool, we wanted to dig deeper into why there are such discrepancies in the market. Looking at the Product Hunt list, the tools are either generic or built for a single purpose, but none are used to support a coherent product management process. At least not one such as described in the reference models of AIPMM, ISPMA or similar industry standards.


In the ERP space there are numerous tools that cover industry-standard processes such as Procure-to-Pay or campaign management. So why don't we see more tools that support a best-practice product management process, instead of only tools for either very generic purposes (Trello, Evernote or Excel) or very specific purposes (like KISSmetrics, Streak or Do)?

Maturity

I believe the reason has to do with maturity. The maturity level a company has is a fairly good indication of which tools will work. If you want to implement SAP in a CMMi level 1 company, it is going to be a tough ride, since SAP is wonderful for repeatable processes and at level 1 you don't have any. Conversely, if you want to run project management in a CMMi level 5 company with only Trello, it will also be a hard sell.

The CMMi model is loved, hated and misunderstood. Anyhow, given the right understanding and application, I think it is a good framework for conceptualizing maturity. We have to remember that it is not about any particular process; it is a metamodel that stipulates something about the processes you should follow. It is therefore not a competitor to the ISPMA syllabus or the AIPMM ProdBOK; rather, these are particular ways of executing the product management process.

Product management is covered by the development variant of the CMMi model, CMMi-DEV, and it should therefore be possible to single out process areas and look at what sort of tool support fits. In the following I will go through the five maturity levels of the CMMi model, describe key process areas, and give recommendations for optimal tool support.

Level 1 – Initial (Chaotic)

It is characteristic of processes at this level that they are (typically) undocumented and in a state of dynamic change, tending to be driven in an ad hoc, uncontrolled and reactive manner by users or events. This provides a chaotic or unstable environment for the processes. As the CMMi-DEV says:

“Maturity level 1 organizations are characterized by a tendency to overcommit, abandon their processes in a time of crisis, and be unable to repeat their successes.”

There are no particular process areas pertaining to Level 1.

Tool Use: Eclectic; usually the Microsoft Office suite (Excel, Word, PowerPoint).

Recommendation: Select one key part of the product development process to support with a tool (idea management, bug fixing, development, planning). Find one central place and tool for documentation. The tool should be tactical, lightweight and easily customizable.

Examples: Trello is lightweight and fits almost any work process where work items (i.e. tasks, features, user stories) move through phases. Podio is another popular tool whose strength is its customizability; there are plenty of apps, so one is guaranteed to come close to your needs and you can adapt it from there. UserVoice is good if you want to manage the ideation process. Zendesk is for support and will be great if your primary pain is addressing and fixing users' problems.

Level 2 – Repeatable

It is characteristic of processes at this level that some processes are repeatable, possibly with consistent results. Process discipline is unlikely to be rigorous, but where it exists it may help to ensure that existing processes are maintained during times of stress.

Here is what the CMMi writes about Level 2:

“Also at maturity level 2, the status of the work products are visible to management at defined points (e.g., at major milestones, at the completion of major tasks). Commitments are established among relevant stakeholders and are revised as needed. Work products are appropriately controlled. The work products and services satisfy their specified process descriptions, standards, and procedures.”

Key Process Areas:

  • PP – Project Planning
  • PPQA – Process and Product Quality Assurance
  • REQM – Requirements Management

Tool Use: Usually one tool is used for part of the process, but often you will see different tools across different departments in the organization.

Recommendation: Converge on a common tool and focus on the lowest common denominator across the people involved in the process. The most important thing here is that it is possible to see the status of work products.

Examples: Jira is already used by millions and is very good for ensuring clarity about what is committed and the status of work products. Rally and VersionOne are similar and flexible. These tools are all good for the above-mentioned process areas.

Level 3 – Defined

It is characteristic of processes at this level that there are sets of defined and documented standard processes established and subject to some degree of improvement over time. These standard processes are in place (i.e., they are the AS-IS processes) and used to establish consistency of process performance across the organization.

“A critical distinction between maturity levels 2 and 3 is the scope of standards, process descriptions, and procedures. At maturity level 2, the standards, process descriptions, and procedures can be quite different in each specific instance of the process (e.g., on a particular project). At maturity level 3, the standards, process descriptions, and procedures for a project are tailored from the organization’s set of standard processes to suit a particular project or organizational unit and therefore are more consistent except for the differences allowed by the tailoring guidelines.”

Key Process Areas:

  • DAR – Decision Analysis and Resolution
  • PI – Product Integration
  • RD – Requirements Development
  • RSKM – Risk Management

Tool Use: Usually a suite is used for part of the process, and its use is consistent across different departments.

Recommendation: Make sure the tool you have selected is a suite that is tightly integrated with upstream and downstream processes, because when you begin to reap the benefits of being at Level 3 you will usually want to expand the reach of the process. This is easiest if it is already a suite.

Examples: Focal Point is often used for RD and RSKM and is very customizable. Sensor Six is aimed at DAR and therefore worth considering if you want to focus on that process area. HP Quality Center and the Rational suite are all-round tools with extensive functionality to support most processes.

Level 4 – Quantitatively Managed

It is characteristic of processes at this level that, using process metrics, management can effectively control the AS-IS process (e.g., for software development). In particular, management can identify ways to adjust and adapt the process to particular projects without measurable losses of quality or deviations from specifications. Process capability is established from this level.

“A critical distinction between maturity levels 3 and 4 is the predictability of process performance. At maturity level 4, the performance of projects and selected subprocesses is controlled using statistical and other quantitative techniques, and predictions are based, in part, on a statistical analysis of fine-grained process data.”

Key Process Areas:

  • OPP – Organizational Process Performance
  • QPM – Quantitative Project Management

Tool Use: Consistent and mandatory use of a suite for the entire process

Recommendation: Make sure the tool supplies full-fledged, customizable reporting facilities out of the box. Visualization is key to success here, because metrics that are not easily visualized are not going to help management.

Examples: The same products as at Level 3, but it is probably necessary to boost reporting: QlikView, Mixpanel and Geckoboard are good for visualizing process trends, and if you need more sophisticated statistical analysis, SPSS, SAS or RapidMiner (to mention an open-source alternative) are good options.
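
As a minimal illustration of the kind of quantitative control meant at this level, the sketch below derives simple statistical control limits (mean plus/minus three standard deviations) from a baseline of feature cycle times and flags new observations that fall outside them. The data and thresholds are made up; any of the tools above would provide this, and much more, out of the box.

  # A minimal, hypothetical sketch of quantitative process control (Level 4):
  # derive control limits from a stable baseline of cycle times and flag
  # new observations that fall outside mean +/- 3 standard deviations.
  from statistics import mean, stdev

  baseline_days = [4, 5, 6, 5, 7, 4, 6, 5, 6, 5]   # made-up historical cycle times
  new_days = [5, 21, 6]                             # made-up recent observations

  avg, sd = mean(baseline_days), stdev(baseline_days)
  upper, lower = avg + 3 * sd, max(avg - 3 * sd, 0)

  out_of_control = [t for t in new_days if not lower <= t <= upper]
  print(f"limits = ({lower:.1f}, {upper:.1f}) days, out of control: {out_of_control}")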

Level 5 – Optimizing

It is characteristic of processes at this level that the focus is on continually improving process performance through both incremental and innovative technological changes and improvements.

“ A critical distinction between maturity levels 4 and 5 is the focus on managing and improving organizational performance. At maturity level 4, the organization and projects focus on understanding and controlling performance at the subprocess level and using the results to manage projects. At maturity level 5, the organization is concerned with overall organizational performance using data collected from multiple projects.”

Key Process Areas:

  • CAR – Causal Analysis and Resolution
  • OPM – Organizational Performance Management

Tool Use: The requirement for a tool at this level is that it is "intelligent" and supplies the process with transformative input not realized at any earlier level. It could be intelligent estimation or market analysis.

Recommendation: There are no dedicated tools at this level yet, so you should either integrate with general AI systems or with dedicated niche players.

Examples: IBM's Watson is an interesting general-purpose AI that could probably be used here. Another example is Qmarkets, which supplies prediction markets for improving project delivery through market dynamics: employees can "gamble" on which projects or products will succeed.

Conclusion

There are many options for tool use and many options for process improvement. The best approach is to be very selective and to start from the process side. Tools without a process are like hammers without a nail: they can make a lot of noise. When you know which process areas to focus on, try to find a tool that suits that area and the maturity level you are aiming for. The tools are all good, but they are built for a particular purpose, so if you use them for something different the result may fall short.