Architecture – Turning Fiction into Fact

I am an admirer of my compatriot Bjarke Ingels, who is a real architect. His buildings always stretch the boundaries of the possible. For example, can you create an idyllic ski slope in a flat country like Denmark and put it in the center of a city with more than a million inhabitants? Sure, just put it on top of an old power plant that needs rebuilding, and oh, maybe you could have the power plant’s chimney puff smoke rings? As Ingels puts it, “Architecture is the fiction of the real world”, and this is what he did.

But buildings rarely exist in isolation; they are usually part of a city. Ingels continues: “The city is never complete. It has a beginning but no end. It’s a work in progress always waiting for new scenes to be added and new characters to move in”. While Ingels is talking about real-world brick-and-mortar buildings and other constructions, there is no reason why this would not apply to IT architecture as well.

This quote also applies to any modern enterprise. There will always be an IT landscape, and it is always a work in progress. You will never finish. The only thing you can do is manage the change more or less efficiently. When we create the IT architectures of the future, we are in essence turning the fiction of user stories and personas into new scenes and characters of this ever-evolving city. Our architectures will be evaluated on how real characters come to inhabit the structures we create. Will our designs turn out like the Chinese ghost town of Ordos or like the smooth coordination of the Tokyo subway? Just as the buildings and towns we create will only be successful if they become liveable for the people they are meant for, the IT systems we build will only be successful if they serve the needs of their users and the surrounding systems.

The Frontier of Imagination

Typically, we will ask people what they want and document this as requirements, user stories or use cases. This is all well and good, but if Ingels had gone out and asked the people of Copenhagen what they wanted, we would have gotten more of the same building blocks and villas that are already prevalent. There would be no power plant with a ski slope, simply because people would never have thought of that. If Steve Jobs and Henry Ford had just settled for what people wanted, we would still be speaking into Nokia phones, hacking away at clunky black computers and riding in horse-drawn carriages. We cannot expect our users, customers or managers to supply the imagination. That is something we as architects have to supply.

The real frontier for IT architecture is imagination. We need to be able to imagine all the things that the requirements and user stories don’t tell us, and we need to be bold enough to create things that no one ever asked for. This is difficult for multiple reasons. First of all, an architect is usually measured on how well the solution he or she designs meets the requirements at hand, so there is little incentive to do anything more. Second, it is often difficult to gauge what will be needed in the future. Third, there is a tendency towards best practice and existing patterns, which does not further innovative solutions.

An Example

However, these obstacles to imagination can be managed. As Ingels has shown, it is sometimes possible to cover all the basic requirements in a cost-effective way and still do as well as, or better than, traditional solutions. The same is the case for IT architecture.

Let’s look at an example: as part of the continuous evolution of the IT landscape, architects are often asked to re-architect legacy solutions. Some legacy solutions are built on costly message queueing software. These can easily be upgraded or replaced with similar products, and you can usually show some cost reduction and performance improvement by shopping around, but at the end of the day you would still just have the same basic functionality and would have missed an opportunity for improvement.

Let’s take a step back and consider what this architecture does and what it could do. Basically, it moves data between endpoints in a secure way without ever losing messages. Now we have to ask ourselves: can we do this in a better way? One obvious way to never lose a message is to store it permanently. So, when a message that would usually be written to a queue comes in, we now store it on a persistent medium instead and keep it for as long as it makes sense. We can impose a retention schedule to move the data between different types of storage, that is, from hot to cold storage, and delete messages if we ever get tired of having them around. The first step is therefore to catch all data and just store it.
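To make this concrete, here is a minimal sketch of the “catch and store” step in Python. The class name, the directory layout and the thirty-day hot-retention window are my own illustrative assumptions, not a reference implementation:

```python
import time
import uuid
from pathlib import Path

# Illustrative retention threshold (an assumption, tune to your own needs):
# keep messages on fast "hot" storage for 30 days, then move them to "cold"
# storage; nothing is deleted unless a separate policy says so.
HOT_RETENTION_SECONDS = 30 * 24 * 3600


class MessageStore:
    """Persist every incoming message instead of parking it on a queue."""

    def __init__(self, hot_dir: Path, cold_dir: Path):
        self.hot_dir = hot_dir
        self.cold_dir = cold_dir
        hot_dir.mkdir(parents=True, exist_ok=True)
        cold_dir.mkdir(parents=True, exist_ok=True)

    def store(self, payload: bytes) -> str:
        """Write the raw payload to hot storage and return its message id."""
        message_id = str(uuid.uuid4())
        (self.hot_dir / message_id).write_bytes(payload)
        return message_id

    def apply_retention(self) -> None:
        """Move messages older than the hot-retention window to cold storage."""
        cutoff = time.time() - HOT_RETENTION_SECONDS
        for path in self.hot_dir.iterdir():
            if path.stat().st_mtime < cutoff:
                path.rename(self.cold_dir / path.name)
```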

Once the data is stored, we still need to get it to the target endpoints. Instead of sending messages on a queue, we just send a pointer to where we stored the message, as an event. This will inevitably be a smaller and more uniform message than the traditional message in a message queue, and when messages are small and uniform it is a lot easier to optimize everything from speed to size.
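Concretely, the pointer event only needs a handful of fields. Something along these lines would do (the field names below are assumptions for illustration):

```python
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class MessageEvent:
    """Small, uniform notification sent instead of the full message."""
    message_id: str      # key to look the payload up in the message store
    topic: str           # logical channel the source published to
    stored_at: float     # epoch timestamp when the payload was persisted
    size_bytes: int      # lets consumers decide how and when to fetch

    def to_json(self) -> str:
        return json.dumps(asdict(self))


# Example: a few dozen bytes regardless of how large the original payload was.
event = MessageEvent("6f1c...", "orders.incoming", time.time(), 48_211)
print(event.to_json())
```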

This move also opens up different ways to consume the event. The target endpoints are no longer forced to have a queue client from the queue software vendor. They could just as well choose a REST interface, ODBC or Kafka. Our event generator just has to be able to connect to these different types of interfaces. That means it has to be able to call a web service, write to a table through ODBC, publish to Kafka or reach any other type of endpoint that is relevant.
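As a sketch of how the event generator might fan the uniform event out to whichever interface a target prefers, here is one way to do it with commonly available clients such as requests, kafka-python and pyodbc. The adapter registry, the subscriber configuration shape and the table name are assumptions for illustration:

```python
import requests                     # REST endpoints
import pyodbc                       # ODBC targets
from kafka import KafkaProducer     # Kafka topics


def send_via_rest(event_json: str, config: dict) -> None:
    # POST the small pointer event to the target's webhook URL.
    requests.post(config["url"], data=event_json,
                  headers={"Content-Type": "application/json"}, timeout=10)


def send_via_kafka(event_json: str, config: dict) -> None:
    producer = KafkaProducer(bootstrap_servers=config["brokers"])
    producer.send(config["topic"], event_json.encode("utf-8"))
    producer.flush()


def send_via_odbc(event_json: str, config: dict) -> None:
    # Insert the event into a notification table the target polls
    # (connection string and table name are illustrative).
    with pyodbc.connect(config["connection_string"]) as conn:
        conn.execute("INSERT INTO inbound_events (event) VALUES (?)", event_json)


ADAPTERS = {"rest": send_via_rest, "kafka": send_via_kafka, "odbc": send_via_odbc}


def notify(event_json: str, subscriber: dict) -> None:
    """Route the uniform event through the adapter the subscriber asked for."""
    ADAPTERS[subscriber["protocol"]](event_json, subscriber["config"])
```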

The target endpoints now have much more freedom in how they receive notifications and how they handle them. The center just has to have a number of channel adapters for the different types of endpoints. These should be simple and easily configurable, since the message is always uniform in its format and size.

Target endpoints now have to retrieve the payload from the message store based on the metadata in the event. Easy: we just put an API in front of our message store for them to call with the message ID. This API could be a web service, an ODBC connector or just a plain URL that serves the payload.
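A minimal retrieval API could be little more than a few lines of, say, Flask in front of the store. This is purely illustrative; any web framework, an ODBC view or a plain URL would serve the same purpose, and the storage paths are assumed:

```python
from pathlib import Path
from flask import Flask, abort, send_file

app = Flask(__name__)
HOT_DIR = Path("/data/messages/hot")     # assumed storage locations
COLD_DIR = Path("/data/messages/cold")


@app.route("/messages/<message_id>")
def get_message(message_id: str):
    """Return the stored payload for the message id carried in the event."""
    for directory in (HOT_DIR, COLD_DIR):
        path = directory / message_id
        if path.is_file():
            return send_file(path)
    abort(404)   # unknown id: nothing was ever stored under it


if __name__ == "__main__":
    app.run(port=8080)
```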

This approach lets us build a solution that does the same as a traditional queue: take a message from a source endpoint and ensure guaranteed delivery to a target endpoint. Only now we can do it with lower latency and at a lower cost, and archiving is a built-in feature. If something fails, the process can be retried any number of times, since the message is never dropped from the store and the API is always open.
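Retries then become almost trivial, because nothing is lost while we wait. A hedged sketch, with arbitrary backoff numbers:

```python
import time
from typing import Callable


def deliver_with_retries(send: Callable[[], None], max_attempts: int = 5) -> bool:
    """Retry a notification until it succeeds. A failed attempt loses nothing,
    because the payload itself stays safely in the message store."""
    for attempt in range(1, max_attempts + 1):
        try:
            send()                              # e.g. the adapter call from the sketch above
            return True
        except Exception:
            time.sleep(min(2 ** attempt, 60))   # exponential backoff, capped at a minute
    return False                                # give up for now; the message can be redelivered any time
```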

This architecture supports the basics but also opens up a number of new possibilities. Everything that has passed through our messaging system is now available through an API. The target endpoints can retrieve any subset of messages on the subscriptions they follow, at any point in time, as opposed to the traditional approach where a message sat on the queue and, once it was read, it was gone.
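One way to support that kind of ad hoc retrieval is to keep a small metadata index next to the payload store. Here it is sketched with SQLite, though the index could live anywhere; the table layout and names are illustrative assumptions:

```python
import sqlite3

# Assumed metadata index kept alongside the payload store; one row per event.
DDL = """CREATE TABLE IF NOT EXISTS events (
    message_id TEXT PRIMARY KEY,
    topic      TEXT NOT NULL,
    stored_at  REAL NOT NULL
)"""


def messages_for(conn: sqlite3.Connection, topic: str, since: float) -> list[str]:
    """Return the ids of every message on a subscription since a point in time,
    so a target can fetch or re-fetch any subset whenever it likes."""
    rows = conn.execute(
        "SELECT message_id FROM events WHERE topic = ? AND stored_at >= ? "
        "ORDER BY stored_at", (topic, since))
    return [message_id for (message_id,) in rows]


conn = sqlite3.connect("event_index.db")
conn.execute(DDL)
print(messages_for(conn, "orders.incoming", since=0.0))
```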

We can even create a search API for messages in our message store. If we need to find a particular message, we can now let the target endpoints do that automatically. This can be expanded further: if we create a Hive table on the message store, we can access all the data that went through our pipeline through SQL/HiveQL. In the traditional world we would have to set up a separate solution to aggregate messages to a different store, ETL them into a data warehouse and create models exposed through BI tools for end users to gain access.
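Assuming the archived messages sit somewhere Hive can read, in a Hive-friendly layout, querying the pipeline could look roughly like this. The host name, table layout and file format are illustrative assumptions:

```python
from pyhive import hive   # assumes a HiveServer2 endpoint is available

conn = hive.Connection(host="hive.example.internal", port=10000)
cursor = conn.cursor()

# Expose the message store as an external Hive table (layout is illustrative;
# it presupposes the archive is written in a tab-delimited, Hive-readable form).
cursor.execute("""
    CREATE EXTERNAL TABLE IF NOT EXISTS messages (
        message_id STRING,
        topic      STRING,
        stored_at  TIMESTAMP,
        payload    STRING
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\t'
    LOCATION '/data/messages/archive'
""")

# Ad hoc analytics straight off the pipeline, no separate ETL or data mart.
cursor.execute(
    "SELECT topic, COUNT(*) AS message_count "
    "FROM messages GROUP BY topic ORDER BY message_count DESC")
for topic, message_count in cursor.fetchall():
    print(topic, message_count)
```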

Turning Fiction into Fact

This solution turns a simple queueing setup into a cheaper, faster version of queueing that is suddenly also an API and a data mart. It shows that there is no reason to be limited by the mental frameworks of the legacy technologies we are replacing. We need to think about the possibilities available to us today and imagine how we would solve the problems that legacy technologies solved, given our current technology and the wider needs beyond this particular use case. We can, in fact, turn fiction into fact; sometimes we just have to be bold and let go of best practice and received wisdom.