Greg Hanson


Y2K 10 years later

It seems like just yesterday that many of my friends and colleagues in the federal government and industry were spending New Year’s Eve 1999 in Y2K command centers, watching with great anticipation as the ball dropped in Times Square.

I had been chief technology officer at Telos for two and a half years and thought back to when I arrived in the spring of 1997. Like most companies, Telos had not taken any significant action regarding the potential impact of Y2K, or the “Millennium Bug,” as it was often called.

I was keenly aware of the issue, as I had just retired from the U.S. Air Force, where I had been in charge of the Air Force’s Year 2000 program. In that capacity, I had worked with software engineers Air Force-wide and briefed the chief of staff, the secretary of the Air Force and the deputy secretary of defense on the issue from the ground up. We began that effort in earnest when the federal government received a grade of “D” on the first Y2K report card issued by Rep. Stephen Horn (R-Calif.).

Having taken the technology helm of a global corporation that served the Defense Department, my colleagues and I quickly got to work looking at our office automation, production, financial, human resources, inventory and tracking systems, as well as our entire suite of product offerings. As a result, I spent New Year’s Eve 1999 not in a command center but at a New Year’s party, and on Jan. 1, 2000, our systems experienced no ill effects as the new millennium dawned.

But this does not mean that it was a non-event or that we didn’t learn a lot from the Y2K experience. Quite the opposite: Many people worked very hard to ensure that our nation’s information systems continued to operate correctly. Moreover, I believe there are some valuable lessons we can learn from the experience.

Thankfully, we didn’t observe the calamity many had predicted. Infrastructure control systems did not fail, military weapon systems were not automatically armed or launched, and bank accounts were not wiped out.

The Y2K problem affected systems that perform calculations based on Julian dates (commonly a two-digit year followed by the day of the year, as in YYDDD), which are primarily business systems. Control systems, such as those in GPS satellites, typically use different timing schemes.
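To see how that kind of date arithmetic went wrong, consider a minimal sketch, in Python, of the two-digit-year Julian-date subtraction many business programs performed. The function name and the YYDDD encoding here are illustrative assumptions, not code from any actual system.

```python
# Hypothetical illustration of the classic Y2K arithmetic bug.
# Many business systems stored Julian dates as YYDDD: a two-digit
# year followed by the day of the year (001-366).

def days_elapsed_yyddd(start: int, end: int) -> int:
    """Naive day difference between two YYDDD dates, ignoring
    leap years, much as a space-starved legacy program might."""
    start_yy, start_ddd = divmod(start, 1000)
    end_yy, end_ddd = divmod(end, 1000)
    return (end_yy - start_yy) * 365 + (end_ddd - start_ddd)

# Across an ordinary year boundary, the answer is sensible:
print(days_elapsed_yyddd(98365, 99001))  # 1 (Dec. 31, 1998, to Jan. 1, 1999)

# Across the century boundary, year "00" compares as less than "99",
# so Jan. 1, 2000, appears to come almost a century BEFORE Dec. 31, 1999:
print(days_elapsed_yyddd(99365, 1))      # -36499
```

Any interest calculation, expiration check or sort keyed on such a difference would silently produce nonsense once the two-digit year rolled over.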

Additionally, Y2K was not some sort of software conspiracy.

I was once asked, “How could you software guys let a stupid thing like this happen, with your SEI and CMMI?” The question referred to the Software Engineering Institute and its process improvement approach, Capability Maturity Model Integration. In an age when mass storage can be had at less than a dollar a gigabyte (a billion bytes), it is difficult to recall, or increasingly for younger IT professionals to imagine, a time when memory was so precious that programmers used two-digit dates to conserve two bytes of memory.
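A back-of-the-envelope calculation shows why those two bytes mattered at scale. The record and field counts below are hypothetical, chosen only to illustrate the order of magnitude.

```python
# Hypothetical sizing exercise: a master file with 10 million records,
# each holding three date fields. Dropping the century digits saves
# 2 bytes per date field.
records = 10_000_000
date_fields_per_record = 3
bytes_saved = records * date_fields_per_record * 2  # 2 bytes per field

print(bytes_saved)              # 60,000,000 bytes
print(bytes_saved / 1_000_000)  # 60.0 MB of storage avoided
```

When disk and core memory cost orders of magnitude more per byte than they do today, savings like that were a deliberate engineering trade-off, not carelessness.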

People still consider software engineering to be an inexact science, but I believe we learned a lot from the Y2K experience that translated into better-quality software and more disciplined development and testing practices. Yes, software still has bugs, but when you think of how pervasive it is and the capability it gives us (“I’ve got an app for that”), we’ve come a long way since the dawn of the millennium.

The Y2K experience also gave the public and private sectors a unique opportunity to conduct a significant housecleaning of their information systems.

That effort began in 1996 at the Air Force Y2K Program Office and led to the diligent inventorying and triaging of Air Force software systems. It also led to efforts to prioritize systems in terms of mission criticality and to assess which ones could be remediated and which needed to be replaced.

On the commercial side, we used the Y2K problem at Telos as an opportunity to transition to state-of-the-art office automation systems. We scrapped our proprietary e-mail and database systems in favor of modern, industry-standard, Y2K-compliant systems.

However, despite our best efforts to clean house and eliminate legacy software systems, we might not have done enough. Many legacy systems still remain. I was struck by the statistics at a major technology conference earlier this month, when a top software acquisition official briefed us on just how much time and cost it adds to a project to build interfaces to the thousands of legacy systems that still exist.

But consider where we might be had we not had to confront Y2K.


About the Author

J. Greg Hanson is executive vice president, Defense & Homeland Security at Criterion, a former CIO for the U.S. Senate and former chief software engineer for the U.S. Air Force.


Reader Comments

Sun, Jan 31, 2010 Oliver Graham Boston

I'm glad the world didn't end (which I knew it wouldn't, but made for scintillating headlines), but worry at what was NOT learned. Like what happened to the inventories of systems & their connections? We're only going to become MORE connected as we go forward. Heck, even the local Seven-Eleven keeps an inventory.

Sun, Jan 10, 2010 Howard Camarillo

People often ignore the fact that many systems were simply turned off before midnight on Y2K. They didn't crash then; they just didn't work the next day. Like the L.A. Assessor's office, which couldn't handle data gathered before midnight after Y2K. Or how about the three nuclear plants that automatically shut down at midnight? Why wasn't that counted as a failure? Because they claim that their first priority is safety, not power production. There were many instances like this, in many areas, across the country. What kept many things working was turning them off before the midnight transition to the year 2000. Oh my, why weren't these publicized? Politics and CYA.

Tue, Jan 5, 2010 Steve Oregon

International time clock technology is largely deployed using 32-bit counters. These counters are embedded in file systems, computer operating systems, control systems, atomic time clocks, and the various GIS and communications satellites in orbit around the earth. These counters are expected to cause global havoc in 22-24 years as they go from positive to negative or roll over to zero. Microsoft 32-bit operating systems still have the 32-bit counter vulnerabilities. Unix and Unix-like systems also have these 32-bit counter vulnerabilities. The file systems such as EXT2, FAT16, FAT32, NTFS, ISO9660, CD-ROM and DVD-ROM and many others used to house computer data files have these 32-bit counter vulnerabilities. Internet time protocols have these vulnerabilities. Microwave communications equipment has these vulnerabilities. Satellites orbiting the earth are not easy to repair or upgrade, their service life often extends for decades, and many satellites have these vulnerabilities. Many of the databases currently used in finance and business probably have these vulnerabilities, related to timestamp dating of their records. We have been encouraged to buy new and faster computers. But how much of our old technology will be used by societies that cannot afford the newest equipment? What will the digital time-management infrastructure landscape and digital debris look like in the coming years? I hope that the technicians and designers keep watch on these global time-scale armageddon issues so as to prevent mega-catastrophes of global time management.

Mon, Dec 28, 2009 mad monk Kaua'i, Hawai'i

I first ran into the date problem in 1970...program wouldn't work due to a 1 digit year code! When I offered to make it a 4 digit year code, my boss said no, 2 is plenty... I could see Y2K coming.
