Internaut | Inside the life of data
- By Shawn McCarthy
- Jul 12, 2006
Information lifecycle management is slowly becoming more than just a buzzword. In the right context, ILM can evolve into a useful strategy that merits serious consideration as government IT managers decide how data will be obtained, tagged, entered into databases and, eventually, put into archiving systems.
And it's especially important as government organizations move toward service-oriented architectures.
The ILM concept emerged a few years ago from the data storage industry. It began as a way to manage multiple sources of information while dealing with data growth, storage and retrieval. In a typical ILM deployment, groups plan with an eye toward data's age, as well as its short- and long-term importance to the enterprise.
But as it's moved beyond buzzword status, ILM has evolved into a set of business practices that include rules governing the way data must first be entered into a system. For example, today's ILM can include metadata rules, plus guidance on other types of data collected within a system.
The key to success
And that's the key to ILM's success. Metadata is an important, but often overlooked, piece of the information management puzzle. Metadata supplies information about, or the documentation of, other data that exists in an application or system. It includes data elements or attributes such as name, size and data type; information about records or data structures such as length, fields and columns; and details about the data itself, such as where it's located, how it's associated with other data or systems, plus ownership information and expiration dates.
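The metadata attributes listed above can be pictured as a simple record. This is a minimal sketch, and the field names here are illustrative, not drawn from any ILM standard or government data model:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class MetadataRecord:
    """Illustrative metadata for one data asset (all field names are hypothetical)."""
    name: str                           # data element or file name
    size_bytes: int                     # physical size
    data_type: str                      # e.g. "text", "database column"
    location: str                       # where the data resides
    related_systems: List[str] = field(default_factory=list)  # associated data or systems
    owner: str = "unknown"              # ownership information
    expires: Optional[date] = None      # expiration date, if any

# A record describing one column in a hypothetical records database.
record = MetadataRecord(
    name="citizen_address",
    size_bytes=2048,
    data_type="database column",
    location="records_db.citizens.address",
    related_systems=["benefits_portal"],
    owner="Records Office",
    expires=date(2030, 1, 1),
)
print(record.owner)  # Records Office
```

A catalog of such records is what lets an ILM system answer questions like "where does this data live?" and "when does it expire?" without opening the data itself.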
When properly implemented, ILM begins with user practices, setting standards at the very beginning for things such as document or database creation, plus naming and metadata rules. As information moves through automated storage procedures, many attributes of the data can be tagged and tracked. This enables ILM systems to provide multifaceted standards for both content management and storage management, beyond the traditional things such as display rules, access frequency and data age.
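Tracking attributes such as data age and access frequency lets storage decisions be automated. The sketch below shows the idea with made-up tier names and thresholds; real policies would come from an agency's own retention rules:

```python
from datetime import date, timedelta

def storage_tier(created: date, accesses_last_90_days: int, today: date) -> str:
    """Pick a storage tier from data age and access frequency.

    The tiers and thresholds are illustrative, not drawn from any standard.
    """
    age = today - created
    if age < timedelta(days=365) or accesses_last_90_days > 10:
        return "primary"    # fresh or frequently used data stays on fast storage
    if age < timedelta(days=365 * 7):
        return "nearline"   # older, seldom-used data moves to cheaper storage
    return "archive"        # long-term retention

# A year-old document that nobody has opened lately drifts off primary storage.
print(storage_tier(date(2005, 6, 1), 0, date(2006, 7, 12)))  # nearline
```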
In this way, government organizations can manage data from its creation and storage through its eventual obsolescence and long-term archiving. Meanwhile, in an SOA situation, systems can be sure they're always calling up the freshest available data.
The best ILM designs are able to track data and information regardless of its native format. Today, content across government networks resides in many different formats.
Much of it is structured data residing in databases. But other content exists on Web pages, in spreadsheets or in text documents and Adobe Acrobat files. And the types of content continue to increase. As they do, ILM should track not only what a document contains, but also what kind of application is needed to open and view it. And search systems should be able to know what resides inside every available document.
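Recording which application a document needs is a small lookup once the format is known. This sketch uses Python's standard `mimetypes` module to guess the format from a file name; the viewer names are illustrative, and a real ILM catalog would carry richer format metadata than an extension alone:

```python
import mimetypes

# Illustrative mapping from format to the kind of application needed to view it.
VIEWERS = {
    "application/pdf": "Adobe Acrobat",
    "text/html": "Web browser",
    "text/plain": "Text editor",
}

def viewer_for(path: str) -> str:
    """Guess a document's format from its name and report the needed viewer."""
    mime, _encoding = mimetypes.guess_type(path)
    return VIEWERS.get(mime, "unknown application")

print(viewer_for("annual_report.pdf"))  # Adobe Acrobat
```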
Finally, in the best of all ILM worlds, one source of data should be deemed the authority for each data field. For example, the Social Security Administration should always be the final authority on a citizen's Social Security number, even though the number might be recorded in multiple databases. Most government systems do not yet manage their data to this level, but that will change as ILM rules improve.
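The one-authority-per-field rule can be sketched as a simple lookup: when copies of a value disagree across systems, the designated authority's copy wins. The agency names, field names, and values below are illustrative examples, not a real government data model:

```python
# Designated authority for each data field (illustrative).
AUTHORITY = {
    "ssn": "ssa",       # Social Security Administration owns the SSN field
    "address": "usps",
}

def resolve(field_name: str, values_by_source: dict) -> str:
    """Return the value held by the field's designated authority, if present."""
    authority = AUTHORITY.get(field_name)
    if authority in values_by_source:
        return values_by_source[authority]
    raise LookupError(f"no authoritative value for {field_name}")

# Two systems disagree on an SSN; the SSA copy is taken as correct.
print(resolve("ssn", {"irs": "123-45-0000", "ssa": "123-45-6789"}))  # 123-45-6789
```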
If your organization does not yet identify ILM as a key business practice, it's worth learning about the concept. The complexity of both data retention rules and the data itself will only increase in the years ahead.
Shawn McCarthy, a former writer for GCN, is senior analyst and program manager for government IT opportunities at IDC.