Data Is the New Oil – Building the Data Refinery

“Data Is the New Oil!”

Mathematician and IT architect Clive Humby seems to have been the first to coin the phrase, back in 2006, when he helped Tesco develop from a fledgling UK retail chain into an intercontinental industry titan, rivaled only by the likes of Walmart and Carrefour, through the use of data from the Tesco reward program. Several people have reiterated the concept since. But the realization did not really hit primetime until The Economist, in May 2017, claimed that data had surpassed oil as the world's most valuable resource.

Data, however, is not just out there and up for grabs. Just as you have to get oil out of the ground first, data poses a similar challenge: you need to get it out of computer systems or devices first. And even when you do get the oil out of the ground, it is still virtually useless. Crude oil is just a nondescript blob of black goo. Getting the oil is only a third of the job. This is why we have oil refineries. Oil refineries turn crude oil into valuable and consumable resources like gas, diesel or propane. They split the raw oil into different substances that can be used for a multitude of products like paint, asphalt, nail polish, basketballs, fishing boots, guitar strings and aspirin. This is awesome; can you imagine a world without guitar strings, fishing boots or aspirin? That would be like Harry Potter, just without the magic…

Similarly, even if we can get our hands on it, raw data is completely useless. If you have ever glanced at a web server log, a binary data stream or other machine-generated output, you can relate to the analogy of crude oil as a big useless blob of black goo. All this data does not mean anything in itself. Getting the raw data is of course a challenge in some cases, but making it useful is a completely different story. That is why we need to build data refineries: systems that turn useless raw data into components from which we can build useful data products.

Building the data refinery

For the past year or so, we have worked to design and architect such a data refinery for New York City. The “Data as a Service” program is the effort to build this refinery, turning raw data from the City of New York into valuable and consumable services to be used by City agencies, residents and the rest of the world. We have multiple data sources in systems of record, registers, logs, official filings and applications, inspections and hundreds of thousands of devices. Only a fraction of this data is even available today, and when it is available it is hard to discover and use. The purpose of Data as a Service is to make all the hidden data available and useful. We are turning all this raw data into valuable and consumable data services.

A typical refinery processes crude oil through a series of distinct processes, resulting in distinct products that can be used for different purposes. The purpose of the refinery is to break the crude oil down into distinct, useful products. The Data as a Service refinery has five capability domains we want to manage in order to break raw data down into useful data assets:

  • Quality is about the character and validity of the data assets
  • Movement is how we transfer and transform data assets from one place to another
  • Storage deals with how we retain data assets for later use
  • Discovery has to do with how we locate the data assets we need
  • Access deals with how we allow users and other solutions to interact with data assets

Let us look at each of these in a bit more detail.

Quality

The first capability domain addresses the quality of the data. The raw data is initially of low quality, like crude oil. It may be a stream of bits or characters, telemetry data, logs or CSV files.

The first thing to think about in any data refinery is how to assess and manage the quality of the data. We want to understand and control it: how many data objects there are, whether they are in the right format, and whether any are corrupted. Simple descriptive reports, such as the number of distinct values, type mismatches and nulls, can be very revealing and important when considering how the data can be used by other systems and processes.
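As a rough illustration of what such a descriptive profile can look like, here is a minimal sketch in Python using pandas; the column names and sample values are hypothetical stand-ins for a real extract:

```python
import pandas as pd

# Hypothetical extract; in practice this would come from pd.read_csv or a database pull.
df = pd.DataFrame({
    "permit_id": ["A1", "A2", "A2", None],
    "borough": ["Queens", "Bronx", "Bronx", "Queens"],
    "inspection_date": ["2017-05-01", "not-a-date", "2017-05-03", None],
})

# Simple descriptive profile: populated values, nulls and distinct values per column.
profile = pd.DataFrame({
    "non_null": df.notna().sum(),
    "nulls": df.isna().sum(),
    "distinct": df.nunique(dropna=True),
})

# Basic format check: how many values in a date column fail to parse?
parsed = pd.to_datetime(df["inspection_date"], errors="coerce")
profile.loc["inspection_date", "bad_dates"] = int(parsed.isna().sum() - df["inspection_date"].isna().sum())

print(profile)
```

Even a report this crude immediately tells you whether a data set is fit to feed into downstream systems or needs cleansing first.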

Once we know the quality of the data, we may want to intervene and do something about it. Data preparation formats the data from its initial raw form. It may also validate that the data is not corrupted, and it can delete, insert and transform values according to preconfigured rules. This is the first diagnostic and cleansing step in the DaaS refinery.

Once we have the initial data objects lined up in an appropriate format, Master Data Management (MDM) is what allows us to work proactively and reactively on improving the data. With MDM we can uniquely identify data objects across multiple different solutions and map them into a common semantic model. MDM enables an organization to manage data assets and produce golden records, identify and eliminate duplicates, and control which data entities are valid and invalid.
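To make the golden-record idea concrete, here is a toy sketch, assuming records about the same entity arriving from two hypothetical source systems and matched on a normalized name-and-address key; a real MDM product would use far more sophisticated matching and survivorship rules:

```python
from collections import defaultdict

# Hypothetical records from two source systems; field names are illustrative.
records = [
    {"source": "permits",     "name": "ACME Corp.", "address": "1 Main St",  "phone": None},
    {"source": "inspections", "name": "Acme Corp",  "address": "1 Main St.", "phone": "212-555-0100"},
]

def match_key(rec):
    """Normalize the fields used to decide that two records describe the same entity."""
    name = rec["name"].lower().replace(".", "").replace(",", "").strip()
    addr = rec["address"].lower().replace(".", "").strip()
    return (name, addr)

# Cluster candidate duplicates, then merge each cluster into one golden record,
# keeping the first non-empty value seen for each attribute.
clusters = defaultdict(list)
for rec in records:
    clusters[match_key(rec)].append(rec)

golden = []
for recs in clusters.values():
    merged = {}
    for rec in recs:
        for field, value in rec.items():
            if field != "source" and value and field not in merged:
                merged[field] = value
    merged["sources"] = [r["source"] for r in recs]
    golden.append(merged)

print(golden)  # one record, enriched with the phone number and both source systems
```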

Data movement

Once we have made sure that we can manage the quality of the data, we can proceed to the next phase: moving and transforming the data into more useful formats. Different data may, however, need to be moved differently. Sometimes it is perfectly fine to move it once a day, week or even month; at other times we want the data immediately.

Batch is the movement and transformation of large quantities of data from one form and place to another. A typical batch program is executed on a schedule and goes through a sequence of processing steps that transform the data from one form into another. It can range from simple formatting changes and aggregations to complex machine learning models. I should add that what is sometimes called Managed File Transfer, where a file is simply moved but not transformed, can be seen as a primitive form of batch processing; in this context, however, it is considered a way of accessing data and is described below.
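A minimal sketch of a nightly batch step (the file names, fields and borough values are hypothetical; in practice a scheduler such as cron or an orchestration tool would run it):

```python
import csv
from collections import Counter
from datetime import date, timedelta
from pathlib import Path

# Hypothetical landing/curated paths for yesterday's extract.
run_date = date.today() - timedelta(days=1)
source = Path(f"landing/permits_{run_date:%Y%m%d}.csv")
target = Path(f"curated/permits_by_borough_{run_date:%Y%m%d}.csv")

# Stand-in extract so the sketch is self-contained.
source.parent.mkdir(exist_ok=True)
source.write_text("permit_id,borough\n1,queens\n2,Queens\n3,bronx\n")

counts = Counter()
with source.open(newline="") as f:
    for row in csv.DictReader(f):
        counts[row["borough"].strip().upper()] += 1  # normalize, then aggregate

target.parent.mkdir(exist_ok=True)
with target.open("w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["borough", "permit_count"])
    writer.writerows(sorted(counts.items()))
```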

The Enterprise Service Bus (ESB) is a processing paradigm that lets different programmatic solutions interact with each other through messaging. A message is a small, discrete unit of data that can be routed, transformed, distributed and otherwise processed as part of the information flow in the service bus. This is what we use when systems need to communicate across city agencies. It is centralized orchestration.

But some data is not as nicely and easily managed. Sometimes we see use cases where the processing can't wait for a batch run and the ESB paradigm does not scale to the quantities involved. Real-time processing works on data that arrives in continuous streams. It has limited routing and transformation capabilities, but it is especially geared towards handling large amounts of data that arrive continuously, whether to store, process or forward them.
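The pattern, stripped to its essentials, looks roughly like the sketch below; a plain Python generator stands in for whatever actually delivers the stream (a message broker, a socket, a sensor feed), and the values and sensor name are made up:

```python
import json
import random
import time
from collections import deque

def readings():
    """Stand-in for a continuous source, e.g. device telemetry arriving as JSON lines."""
    while True:
        yield json.dumps({"sensor": "water-meter-42", "value": random.uniform(0, 10), "ts": time.time()})
        time.sleep(0.1)

window = deque(maxlen=100)  # keep only the most recent values in memory

for line in readings():
    event = json.loads(line)
    window.append(event["value"])

    # Lightweight, continuous processing: flag outliers, then store or forward as needed.
    avg = sum(window) / len(window)
    if len(window) > 10 and event["value"] > 2 * avg:
        print(f"possible spike from {event['sensor']}: {event['value']:.1f} (running avg {avg:.1f})")
```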

Storage

Moving the data naturally requires places to move it to. Different ways of storing data have different properties, and we want to maximize utility by choosing the right way to store the data.

One of the most important and widespread ways to store data is the Data Warehouse. This is a structured store that contains data prepared for frequent ad hoc exploration by the business. It can contain pre-aggregated data and frequently needed calculations. Schemas are built in advance to address reporting needs. The Data Warehouse focuses on centralized storage and, consequently, on data that has utility across different city agencies.

Whereas Data Warehouses are central stores of high-quality, validated data, Data Marts are similar but local data stores. They resemble Data Warehouses in that the data is prepared to some degree, but the scope is local: an agency uses them to do analytics internally. Frequently the schemas found there are also more ad hoc in character and may not be designed for widespread consumption. The Data Mart also serves as a user-driven test bed for experiments. If an agency wants to create a data source and figure out whether it has any utility, the Data Mart is a great way to create value quickly, in a decentralized and agile manner.

Where Data Warehouses and Data Marts store structured data, a data lake is primarily a store for unstructured data, like CSV, XML and log files, as well as binary formats like video and audio. The data lake is a place to throw data first and think about how to use it later. There are several zones within the data lake with varying degrees of structure: the raw, analytical, discovery, operational and archive zones. Some parts, like the analytical zone, can be as structured as Data Marts and be queried with SQL or a similar syntax (HiveQL), while others, like the raw zone, require more programming to extract meaning. The data lake is a key component in bringing in more data and transforming it into something useful and valuable.

The Operational Data Store is in essence a read replica of an operational database. It is used in order not to unnecessarily tax an operational, transactional database with queries.

The City used to have real warehouses filled with paper archives that burned down every now and then. The reason for keeping them is that all data has a retention policy that specifies how long it should be stored, and that need does not go away when we digitize data. Consequently we need to be in complete control of every data asset's lifecycle. The archive is where data is moved when there is no longer a need to access it frequently, which means data access can have a long latency period. Archives are typically used in cases where regulatory requirements warrant that data be kept for a specific period of time.
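A bare-bones sketch of that lifecycle step, assuming hypothetical per-dataset policies and a simple two-tier file layout (any real implementation would sit on an object store with proper tiering):

```python
import shutil
import time
from pathlib import Path

# Hypothetical policy: how long each dataset stays in frequently accessed storage
# before being moved to the cheaper, higher-latency archive tier.
HOT_DAYS = {"inspections": 365, "web_logs": 90}

ACTIVE = Path("storage/active")
ARCHIVE = Path("storage/archive")

now = time.time()
for dataset, days in HOT_DAYS.items():
    for path in (ACTIVE / dataset).glob("*"):
        age_days = (now - path.stat().st_mtime) / 86400
        if age_days > days:
            destination = ARCHIVE / dataset / path.name
            destination.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(path), str(destination))  # still retained, just no longer in the hot tier
```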

Discovery

Now that we have ways to control the quality of the data, move it and store it, we also need to be able to discover it. Data that cannot be found is useless. Therefore we need to supply a number of capabilities for finding the data we need.

If the user is in need of a particular data asset, search is the way to locate it. Based on familiar query functions, the user can use single words or strings. We all know this from online search engines. The need is the same here: to be able to intelligently locate the right data asset based on an input string.

When the user does not know exactly what data asset he or she is looking for, we want to supply other ways of discovering data. In a data catalog the user can browse existing data sources and locate the needed data based on tags or groups. The catalog also allows previews as well as additional metadata about the data source, such as descriptions, data dictionaries and experts to contact.

In some cases a user group knows exactly what subset of data is needed, but the data may not all reside in the same place or format. By introducing a virtual layer between the user and the data sources, it is possible to create durable semantic layers that remain even when the underlying data sources are switched. It is also possible to create views of the same data source tailored to a particular audience. This way the view of the data caters to the needs of individual user groups rather than a catch-all, lowest-common-denominator version, which is particularly convenient since access to sensitive data is granted on a case-by-case basis. Data virtualization makes it possible for users to discover only the data they are legally permitted to view.

Access

Now that we are in control of the quality of data and who can use it, we also need to think about how we can let users consume the data. Across the city there are very different needs for consuming data.

Access by applications is granted through an API, which supplies a standardized way for external and internal IT solutions to access data programmatically. The API controls ad hoc data access and also supplies documentation that allows developers to interact with the data through a developer portal. Typically the data elements are small and involve a dialogue between the consuming solution and the API.
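A minimal sketch of that kind of programmatic access, here using Flask; the endpoint, dataset and fields are made up for illustration and any real service would sit behind an API gateway with authentication:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Stand-in for a governed data service backed by the refinery.
PERMITS = [
    {"id": 1, "borough": "Queens", "status": "issued"},
    {"id": 2, "borough": "Bronx",  "status": "pending"},
]

@app.route("/v1/permits")
def list_permits():
    """Small, dialogue-style exchange: the caller filters, the API returns a small payload."""
    borough = request.args.get("borough")
    rows = [p for p in PERMITS if borough is None or p["borough"] == borough]
    return jsonify(rows)

if __name__ == "__main__":
    app.run(port=8080)  # e.g. GET http://localhost:8080/v1/permits?borough=Queens
```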

When files need to be moved securely between different points without any transformation, a managed file transfer solution is used. This is also typically accessed by applications, but a portal allows humans to upload or download files as well. This is to be distinguished from document sharing sites like SharePoint, WorkDocs, Box and Google Docs, where the purpose is for human end users to share files with other humans and typically cooperate on authoring them.

An end user will sometimes need to query a data source in order to extract a subset of the data. Query allows this form of ad hoc access to underlying structured or semi-structured data sources, typically through SQL. An extension of this is natural language queries, through which the user can interrogate a data source through questions and answers. With the advent of conversational interfaces like Alexa, Siri and Cortana, this is something we expect to develop further.
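An ad hoc query might look like the sketch below; SQLite stands in for whatever engine actually backs the warehouse or mart, and the table and rows are made up so the example runs on its own:

```python
import sqlite3

# In-memory SQLite as a stand-in for a structured store; sample data for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE service_requests (borough TEXT, status TEXT)")
conn.executemany(
    "INSERT INTO service_requests VALUES (?, ?)",
    [("Queens", "open"), ("Queens", "closed"), ("Bronx", "open"), ("Brooklyn", "open")],
)

# The ad hoc part: an end user extracts a subset of the data with plain SQL.
rows = conn.execute(
    """
    SELECT borough, COUNT(*) AS open_requests
    FROM service_requests
    WHERE status = 'open'
    GROUP BY borough
    ORDER BY open_requests DESC
    """
).fetchall()

for borough, count in rows:
    print(borough, count)
```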

A stream is a continuous sequence of data that applications can use. The data in a stream is supplied as a subscription and delivered in real time. This is used when time and latency are of the essence. The receiving system needs to parse and process the stream by itself.

Contrary to this, events are already processed; they are essentially messages that function as triggers from systems, indicating that something has happened or should happen. Other systems can subscribe to events and implement adequate responses to them. Like streams they are real time, but contrary to streams they are not continuous. They also resemble APIs in that the messages are usually small, but differ in that they implement a push pattern.
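A toy, in-process illustration of that push pattern: publishers emit discrete events, and subscribers register handlers that react to them. Any real implementation would of course sit on a broker or event service, not a dictionary:

```python
from collections import defaultdict

subscribers = defaultdict(list)

def subscribe(event_type, handler):
    subscribers[event_type].append(handler)

def publish(event_type, payload):
    # Push: every registered handler is called as soon as the event occurs.
    for handler in subscribers[event_type]:
        handler(payload)

# Downstream systems implement their own adequate responses to the event.
subscribe("permit.issued", lambda e: print("notify inspections:", e))
subscribe("permit.issued", lambda e: print("update dashboard:", e))

publish("permit.issued", {"permit_id": 42, "borough": "Brooklyn"})
```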

Implementing the refinery

Naturally some of this has already been built, since processing data is not new. What we try to do with the Data as a Service program is to modernize existing implementations of the above-mentioned capabilities and plan for how to implement the missing ones. This involves a jigsaw puzzle of projects, stakeholders and possibilities. Like most other places, we are not working from a green field, and there is no multi-million-dollar budget for creating all these interesting new solutions. Rather, we have to continuously come up with ways to reach the target incrementally. This is what I have previously described as pragmatic idealism. What is important for us, as I suspect it will be for others, is to have a bold and comprehensive vision for where we want to go. That way we can hold every project or idea up against this target and evaluate how we can continuously progress closer to our goal. As our team's motto goes: “Enterprise Architecture – one solution at a time.”

Information System Modernization – The Ship of Theseus?

The other day I was listening to a podcast by Malcolm Gladwell. It was about golf clubs (which he hates). Living next to two golf courses and frequently running next to them, this was something I could relate to. The issue he had was that they did not pay proper tax. This is due to a California rule that property tax is frozen at pre-1978 levels unless more than 50% of the ownership has changed.

The country clubs own the golf courses, and the members own the country clubs. Naturally, more than 50% of the members have changed since then. According to the tax authorities, however, this does not mean that 50% of the ownership has changed. The reason is that the gradual change of the membership means the identity of the owning body has not changed. This is, to some people, a peculiar philosophical stance, but it is not without precedent. It is known through the ancient Greek writer Plutarch's philosophical paradox, the Ship of Theseus, here quoted from Wikipedia:

“The ship wherein Theseus and the youth of Athens returned from Crete had thirty oars, and was preserved by the Athenians down even to the time of Demetrius Phalereus, for they took away the old planks as they decayed, putting in new and stronger timber in their places, in so much that this ship became a standing example among the philosophers, for the logical question of things that grow; one side holding that the ship remained the same, and the other contending that it was not the same.”

For Gladwell it was not clear that the gradual replacement of members in a country club constituted no change in ownership. Be that as it may, the story made me think about information system modernization, which typically makes up a huge part of many enterprise and government IT project portfolios. Information systems are like the Ship of Theseus: you want to keep them floating, but you also want to maintain and improve them. The question is just: is information system modernization a Ship of Theseus?

The Modernization effort

Usually a board, CEO, CIO, commissioner or other body with responsibility for legacy systems realizes that it is time to do something about them. Maybe the last person who knows the system is already long overdue for retirement, operational efficiency has significantly declined, costs have expanded, or the market demands features that cannot easily be implemented in the existing legacy system. Whatever the reason, a decision to modernize the system is made: retire the old and replace it with the new.

Now this is where it gets tricky, because what exactly should the new be? Do we want a car or a faster horse? For many the task turns into building a faster horse by default. Because we know what the system should do, right? It just has to do it a bit faster or a little bit better. The problem is that we are sometimes building Theseus a new rowboat with carbon-fiber planks when we could instead have gotten a speedboat with an outdoor kitchen and a bar.

When embarking on a legacy modernization project, there are a few things I believe we should observe. I will use a recent project we have done to modernize the architecture of a central integration solution at the New York City Department of IT and Telecommunications. This legacy system is itself a modernization of an earlier mainframe-based system (yes, things turn legacy very fast these days).

Some of the things to be conscious of, in order not to fall into the Ship of Theseus trap when modernizing systems, are the following.

Same or Better and Cheaper

A modernized legacy system has to fulfill three general criteria: it should do the same as or more than it does today, with the same or better quality, at a cheaper price. It is that simple. When I say it should do the same as today, I would like to qualify that: if the system today sends sales reports to matrix printers and fax machines around the country, we probably don't need that, even if it is a function today. The point is that all important functions and processes that are supported today should also be supported.

When we talk about quality we mean the traditional suite of non-functional requirements: security, maintainability, resilience, supportability, etc. Quite often it is exactly the non-functional requirements that need to be improved, for example maintainability or supportability.

At a cheaper price is pretty straightforward. It is not always possible, such as when you are replacing a custom-coded system with a modern COTS or SaaS solution. Nevertheless, I think it is an important and realistic ambition, because most legacy technology that used to be state of the art has now become a commodity due to open source and general competition. An example is message queueing software. It used to be offered at a premium by legacy vendors, but due to open source products like ActiveMQ and RabbitMQ, as well as cost-efficient cloud offerings, it has become orders of magnitude cheaper.

Should the system even be doing this in the new version?

Often there is legacy functionality that has become naturally obsolete. One example I found illustrates this. The integration solution we had is based on an adapter platform that takes data from a source endpoint, formats it and puts it on a queue. At the center, a message broker routes it to queues that are read by other adapter platforms, which then format and write the messages to the target endpoint. This is a fine pattern, but if you want to move a file it is not necessarily the most efficient way, since the file has to be parsed into multiple chunks to be put on a queue and then reassembled on the other side. This is a process that can easily go wrong if one message fails or arrives out of order, so multiple checks and operational procedures need to be in place. Rather than having the future solution do this, one could look to see whether other existing solutions are more appropriate, such as a managed file transfer solution. Similarly, when the system merely wraps web calls, an API management solution may be more appropriate.

Why does the system do it in this way?

Was it built this way due to technological or other constraints at the time? When modernizing, it can pay off to look at each characteristic of the legacy system and understand why it is implemented that way rather than just copying it. For example, our integration solution puts everything on a queue. Why is that? It may be because we want guaranteed delivery.

This is a fair answer, but also a clue to how we can do better, because what better way is there to make sure you don't lose a message than to store it in an archive for good as soon as you get it? In a queue, the message is deleted once it has been consumed. That design presumably has to do with message queueing's origins on the mainframe, where memory was a scarce resource.

That is no longer the case, so rather than use a queue, let's just store the message, publish an event on a topic, and let subscribers to the topic process it at their convenience. This way the integration can also be rerun even if a downstream process fails, such as the target adapter writing to a database. If this were a queue-based integration, the message would be lost because it would have been deleted off the queue. With this architecture, any process can access the message again at any time. Now a message is truly never lost.
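A rough sketch of the archive-first idea in plain Python (a local directory and an in-process list stand in for whatever object store and pub/sub product is actually used): the message is persisted before anything else happens, and subscribers work from a durable reference rather than a consumable queue entry.

```python
import json
import uuid
from pathlib import Path

ARCHIVE = Path("message-archive")
ARCHIVE.mkdir(exist_ok=True)

subscribers = []  # stand-in for subscriptions on a topic

def publish(message: dict):
    # 1. Persist the message for good; nothing downstream can make it disappear.
    message_id = str(uuid.uuid4())
    (ARCHIVE / f"{message_id}.json").write_text(json.dumps(message))

    # 2. Publish a lightweight event pointing at the archived message.
    for handler in subscribers:
        handler(message_id)

def target_adapter(message_id: str):
    # Subscribers fetch the payload from the archive at their convenience.
    payload = json.loads((ARCHIVE / f"{message_id}.json").read_text())
    print("writing to target DB:", payload)
    # If this step fails, the message is still in the archive and can be replayed.

subscribers.append(target_adapter)
publish({"type": "permit.update", "permit_id": 42})
```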

What else can the system do going forward?

Keep an eye out for the opportunities that present themselves when rethinking the architecture and the possibilities of modern technologies. To continue our example with the message store, we can now use the message archive for analytical solutions by subsequently transforming the messages from the archive into a Data Warehouse or Data Mart. This is also known as an ELT (extract, load, transform) process.

Basically we have turned our legacy queue-based architecture into a modern ELT-style analytics architecture on the side. What's more, we can even query the data in the message store with SQL, for example by making it accessible as a Hive table. Imagine what that would take in the legacy world: for every single queue we would have had to build an ETL process and load it into a new schema that would have to be created in advance.
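A compressed sketch of that ELT flow, with SQLite standing in for the warehouse and Hive-style external tables (it assumes a SQLite build with the JSON functions, which ships with recent Python versions, and reuses the hypothetical message-archive directory from the earlier sketch): load the archived messages more or less as-is, then transform and explore them with SQL afterwards.

```python
import json
import sqlite3
from pathlib import Path

conn = sqlite3.connect("analytics.db")
conn.execute("CREATE TABLE IF NOT EXISTS raw_messages (id TEXT, body TEXT)")

# Load: copy archived messages into the analytical store with minimal processing.
for path in Path("message-archive").glob("*.json"):
    conn.execute(
        "INSERT INTO raw_messages VALUES (?, ?)",
        (path.stem, path.read_text()),
    )
conn.commit()

# Transform/query happens afterwards, in SQL, against the loaded data.
rows = conn.execute(
    """
    SELECT json_extract(body, '$.type') AS message_type, COUNT(*) AS n
    FROM raw_messages
    GROUP BY message_type
    """
).fetchall()
print(rows)
conn.close()
```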

Being open-minded and having a view to adjacent or related use cases is important for spotting these opportunities. This may take a bit of working around the institutional silos, if such exist. That is just another type of constraint, a non-technical one, which is often tacitly built into the system.

 

Remember that we wanted the modernized system to be “same or better and cheaper”. We still get all of the functional benefits of a queue, just better, since we can always find a message again. On top of that, we have added new, useful functionality in an analytics solution that is essentially a by-product of our new architecture. Deploying it in the cloud gives us better resilience, performance, monitoring and even security. Add to that the cost, which is guaranteed to be significantly less than what we were paying for our legacy vendor's proprietary message queueing system.