4 steps to the right data-sharing architecture for your agency

I recently performed an architectural analysis for a government customer whose requirements were generic and ambiguous -- as most requirements initially are. The client wanted to share data across an enterprise that consisted of multiple sub-units, each with its own stove-piped information systems. Sound familiar?

To make matters more confusing, the IT vendor community has muddied the information-sharing space by offering products with names that have similar but not equivalent meanings: federated search, federated query, enterprise search, data virtualization, data aggregation, data integration, entity resolution, master data management and business intelligence. You get the picture.

Nevertheless, I boldly (some may say foolishly) ventured into this zone of ambiguous customer requirements compounded by confusing vendor terminology to guide the client through to safety and success. To help with the architectural selection, I used a four-part process: survey the architecture landscape, refine the requirements, analyze the dependencies and, finally, recommend an architecture.

Here are four steps to choosing the right information-sharing architecture for your agency.

1. Survey the architecture landscape. It is important to educate all customers about their architectural alternatives and how their requirements affect each option. The two key elements here are understanding the various architectural perspectives in a project (loosely where, what and how) and the notion of “linchpin” requirements that tilt the architectural direction one way over the others.

The three main architectural perspectives (or views) we examined were the physical architecture, the functional architecture and the technical architecture. The physical architecture perspective concerns itself with “where” processing should occur or where data should be located -- for example, centralized, distributed or a hybrid. The functional architecture perspective concerns “what” the end user wishes to achieve, like federated query or decision support. The technical architecture perspective focuses on “how” to achieve a particular end, for example, via an enterprise service bus or a data warehouse.
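The where/what/how split above can be sketched as a simple lookup table. This is only an illustration -- the structure and helper function are my own, with example options drawn from the article:

```python
# A minimal sketch (my own structure; example options come from the article)
# modeling the three architectural perspectives as question/option pairs.
PERSPECTIVES = {
    "physical":   {"question": "where", "options": ["centralized", "distributed", "hybrid"]},
    "functional": {"question": "what",  "options": ["federated query", "decision support"]},
    "technical":  {"question": "how",   "options": ["enterprise service bus", "data warehouse"]},
}

def describe(perspective: str) -> str:
    """Return a one-line summary of the question a perspective answers."""
    p = PERSPECTIVES[perspective]
    return f"The {perspective} architecture answers '{p['question']}' -- e.g., {', '.join(p['options'])}."

print(describe("physical"))
```

Keeping the three views separate like this makes it harder for a "how" answer (a product) to sneak in before the "what" (the outcome) has been pinned down.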

2. Refine the requirements. After understanding how requirements affect the architecture, customers are more willing to narrow and clarify their requirements. The best way to encourage this conversation is to derive key business outcomes via use cases or user scenarios. From those use cases, customers can derive the linchpin requirements that drive the architectural direction. In that analysis, I subscribe to the “form follows function” philosophy.

3. Analyze the dependencies. Once we understand the customer’s linchpin requirements and key outcomes, we can trace them back to specific capabilities in the information management stack and then understand their dependencies. Nothing exists in a vacuum, and the client must understand the cost of each capability. The information management stack for the data/information sharing space has roughly the following layers: data asset awareness, data asset availability, data aggregation, data quality, data governance, data transformation, data integration, change data capture, decision support, business intelligence and predictive analytics. This information stack (with predictive analytics being the top layer) shows the layered dependencies, and those dependencies equate to increased cost and complexity.
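The stack above can be expressed directly in code: each layer rests on every layer beneath it, so the higher you reach, the more you must build and pay for. The layer names come from the article; the dependency model is my own simplification:

```python
# A rough sketch of the information management stack (layer names from the
# article, ordered bottom to top; the everything-below dependency model is
# my own simplification).
STACK = [
    "data asset awareness",
    "data asset availability",
    "data aggregation",
    "data quality",
    "data governance",
    "data transformation",
    "data integration",
    "change data capture",
    "decision support",
    "business intelligence",
    "predictive analytics",   # top layer
]

def dependencies(capability: str) -> list[str]:
    """All layers a capability rests on -- everything below it in the stack."""
    return STACK[:STACK.index(capability)]

# Predictive analytics, at the top, depends on all ten layers beneath it,
# while data asset awareness, at the bottom, depends on nothing.
print(len(dependencies("predictive analytics")))  # 10
print(dependencies("data asset awareness"))       # []
```

Tracing a requested capability through `dependencies()` makes the cost conversation concrete: asking for business intelligence implicitly asks for nine other layers first.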

4. Recommend an architecture. At this point the customer can see the lay of the land and the cost of the possible architectural paths. It is easy to offer at least three alternatives and let the customer decide. In my case, the customer chose wisely based on defining the minimal requirements, reducing risk and reducing cost.
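One way to frame that final decision is a simple weighted-scoring matrix over the criteria the article names -- minimal requirements, risk and cost. The alternatives, scores and weights below are invented purely for illustration, not taken from the actual engagement:

```python
# A hypothetical scoring sketch (alternatives, scores and weights are invented
# for illustration). Higher scores mean better requirements fit, lower risk
# and lower cost.
CRITERIA = {"meets minimal requirements": 0.5, "low risk": 0.3, "low cost": 0.2}

# Scores on a 1-5 scale for three illustrative alternatives.
alternatives = {
    "data warehouse":         {"meets minimal requirements": 5, "low risk": 3, "low cost": 2},
    "federated query":        {"meets minimal requirements": 4, "low risk": 4, "low cost": 4},
    "enterprise service bus": {"meets minimal requirements": 3, "low risk": 2, "low cost": 3},
}

def score(scores: dict) -> float:
    """Weighted sum of an alternative's criterion scores."""
    return sum(CRITERIA[c] * s for c, s in scores.items())

best = max(alternatives, key=lambda a: score(alternatives[a]))
print(best, round(score(alternatives[best]), 2))  # federated query 4.0
```

The value of the exercise is less the arithmetic than the conversation it forces: the customer must state the weights, which is exactly the "minimal requirements, risk, cost" trade-off described above.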

So, although vendors offer a confusing array of products and product suites in this space, the four-step process outlined here can guide you to the right information-sharing architecture.

About the Author

Michael C. Daconta (mdaconta@incadencecorp.com) is the Vice President of Advanced Technology at InCadence Strategic Solutions and the former Metadata Program Manager for the Homeland Security Department. His new book is entitled The Great Cloud Migration: Your Roadmap to Cloud Computing, Big Data and Linked Data.
