The dirt on clean data

Dirty Data: How clean is your data?

Data is clean if it is:

  • Fit for use in business operations

  • Accurate at the time collected

  • Complete

  • Consistent across systems

  • Unambiguously defined

  • Reproducible across applications when calculated

  • From a reliable source

What makes data dirty:
  • Poor or inconsistent processes

  • Poor data practices

  • Blank fields

  • Content variation

  • Typographical errors

  • Unreliable sources

Example of dirty data:
  • Date of birth in U.S. and European formats: 1/11/2006 vs. 11/1/2006
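
To make the ambiguity concrete, here is a minimal sketch in Python, not drawn from any agency system, of how the same slash-delimited string yields two different birth dates depending on the convention assumed, and how converting to ISO 8601 removes the ambiguity once the source convention is known:

    from datetime import datetime

    def to_iso(date_str: str, convention: str) -> str:
        """Normalize a slash-delimited date to unambiguous ISO 8601 (YYYY-MM-DD).

        `convention` must be 'US' (month/day/year) or 'EU' (day/month/year);
        the same raw string parses to different dates under each convention.
        """
        fmt = "%m/%d/%Y" if convention == "US" else "%d/%m/%Y"
        return datetime.strptime(date_str, fmt).date().isoformat()

    # The same value is two different birth dates depending on the source system.
    print(to_iso("1/11/2006", "US"))  # 2006-01-11
    print(to_iso("1/11/2006", "EU"))  # 2006-11-01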

Technologies or applications to clean data or calculate matches:
  • Algorithm processes

  • Data administration tools, which house data about data

  • Searching and matching applications

  • Statistical imputation to determine missing data from other data elements

  • Standards and data translators to link nonstandardized elements

  • Mutual authentication

  • Data profiling tools, which can include (see the sketch after this list):

      • Descriptive statistics of what data files look like

      • Measurement and analysis of data for defects and to determine what needs to be fixed

  • Data visualization techniques to make data quality errors apparent graphically

  • Standardization and formatting

  • Correction

  • Enhancement

  • Combination and consolidation
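
As an illustration of the profiling bullets above, here is a small sketch, with made-up field and record values, of the kind of descriptive statistics a profiling pass might report: how many values in a field are blank and how much content variation it contains.

    from collections import Counter

    def profile(records: list[dict], field: str) -> dict:
        """Simple data-profiling pass: count blank values and measure the
        content variation (distinct spellings/formats) of one field."""
        values = [str(r.get(field, "")).strip() for r in records]
        blanks = sum(1 for v in values if not v)
        variants = Counter(v for v in values if v)
        return {
            "field": field,
            "records": len(values),
            "blank": blanks,
            "distinct_values": len(variants),
            "most_common": variants.most_common(3),
        }

    # Hypothetical rows pulled from two systems that spell the same state differently.
    rows = [{"state": "VA"}, {"state": "Virginia"}, {"state": ""}, {"state": "VA"}]
    print(profile(rows, "state"))
    # {'field': 'state', 'records': 4, 'blank': 1, 'distinct_values': 2,
    #  'most_common': [('VA', 2), ('Virginia', 1)]}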

"We aren't going to catch terrorists with just finger scans but also by improving the quality of data."

Glenn Norton, U.S. Visit


With a little elbow grease, agencies can make their data presentable

More than ever, an agency's ability to do its job depends on the quality of its data. From delivering Social Security payments on time to managing large projects to capturing terrorists, agencies are finding that inferior-quality, or dirty, data can really gum up the works.

Clean data, which essentially means data that is accurate and accessible to outside users, has the opposite effect.

The Office of Management and Budget is trying to get agencies to clean their data by requiring departments to adopt the Federal Enterprise Architecture's Data Reference Model, while a host of other agencies are scrubbing their existing information to make it more functional.

Version 2.0 of the DRM, which OMB released last month, enables architects to describe information so it is easy to find and use across multiple federal agencies and provides the resources to standardize the description, context and means of sharing data.

While the DRM is one step toward clean data, most agencies are struggling with the flip side: dirty data, which is inaccurate and inconsistent. Dirty data has the potential, over time, to impede the wheels of government, said Kimberlee Mitchel, senior technical adviser in the Social Security Administration's Office of Systems.

Domino effect

Dirty data increases the time it takes to process transactions, requires manual intervention and causes backlogs. It also can cause errors, for example in Social Security benefit payments, which can set off a chain of unwelcome consequences.

SSA shares its earnings reports with other agencies. If the IRS or state agencies receive bad data, they might send a notice saying a person hadn't paid the right taxes when in fact they had, leading to a bad credit report.

"The consequences of clean data are that you're able to facilitate automated processing. If we can move to the point where you have computers talking with computers, sharing and exchanging data, the productivity of this country would just soar," Mitchel said.

As agencies increasingly share data across their own business units and across government, their need for clean data has grown. But even basic data becomes complex because of variations in formats, cultures and definitions.

One agency dealing with the problem is the Homeland Security Department's U.S. Visitor and Immigrant Status Indicator Technology program, which disseminates information on foreign nationals in the country to DHS' Immigration and Customs Enforcement (ICE) directorate and the State Department.

U.S. Visit verifies the identity of foreign nationals at U.S. ports of entry and checks them against databases of terrorists and criminals.

The program builds its immigration database on each visitor's name, date of birth and passport number. Biometric two-fingerprint scans help ensure the accuracy of data collection, said Robert Mocny, U.S. Visit deputy director. The finger scan can compensate for some variation, such as the format of the date of birth.

"If we don't have good data on the bad guys, and we're not doing a good finger scan or not doing the due diligence from the biographic side, then we might miss someone at some point," Mocny said.
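
The following sketch is only a rough illustration of that idea, not U.S. Visit's actual logic, and every field name in it is hypothetical: a lookup tries the biographic fields first, allowing more than one plausible reading of an ambiguously formatted date of birth, and falls back to a biometric identifier when the biographic match is ambiguous or empty.

    def find_traveler(records, name, dob_candidates, passport, fingerprint_id=None):
        """Two-stage lookup sketch: biographic fields first, biometric fallback.

        dob_candidates can hold both readings of an ambiguous date, e.g.
        {"2006-01-11", "2006-11-01"} for the raw string "1/11/2006".
        """
        biographic = [
            r for r in records
            if r["name"] == name and r["passport"] == passport and r["dob"] in dob_candidates
        ]
        if len(biographic) == 1:
            return biographic[0]
        # Ambiguous or empty biographic match: let the finger scan decide.
        if fingerprint_id is not None:
            biometric = [r for r in records if r.get("fingerprint_id") == fingerprint_id]
            if len(biometric) == 1:
                return biometric[0]
        return None  # no confident match; refer for manual review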

Incomplete and inaccurate data also leads to missed opportunities.

"When we first started giving data to ICE, we were giving them thousands of records that they really couldn't follow up on," Mocny said. "Now, we're up to 70-plus arrests in the last few months based on our records that we've given them."

But biometrics don't always come into play, such as when DHS has to track and monitor visitors who receive a legal extension to stay. That puts the emphasis on the quality of other records.

"The higher you can raise the integrity of data, the better. We aren't going to catch terrorists with just finger scans but also by improving the quality of data," said Glenn Norton, U.S. Visit mission operation data management chief.

Beginning this month, U.S. Visit will evaluate data for its usability before transmitting it to ICE border agents, Norton said.

Sophistication

U.S. Visit sifts through 10 federal systems that hold records on aliens, using manual queries or a search algorithm. DHS anticipates incorporating more sophisticated algorithms into some applications this year to automate the search for matches, Norton said.
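
A crude sketch of what automated matching could look like, assuming nothing about the actual algorithms DHS uses: candidate records from several source systems are scored by name similarity plus date-of-birth agreement, and anything above a threshold is returned for an analyst to review.

    from difflib import SequenceMatcher

    def match_candidates(query, systems, threshold=0.85):
        """Score records from several source systems against a query record,
        using a rough name-similarity ratio plus a bonus for an exact
        date-of-birth match, and return the candidates worth reviewing."""
        hits = []
        for system_name, records in systems.items():
            for rec in records:
                name_score = SequenceMatcher(
                    None, query["name"].lower(), rec["name"].lower()
                ).ratio()
                dob_bonus = 0.1 if rec.get("dob") == query.get("dob") else 0.0
                score = min(1.0, name_score + dob_bonus)
                if score >= threshold:
                    hits.append((score, system_name, rec))
        return sorted(hits, key=lambda h: h[0], reverse=True)

    # `systems` maps a (hypothetical) source-system name to its list of records,
    # e.g. {"system_a": [...], "system_b": [...]}.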

The department put together a data integrity group of analysts, drawn from the partners of the Smart Border Alliance, the team led by U.S. Visit lead contractor Accenture LLP of Chicago, to spot trends in data errors that ICE agents report and to determine whether the errors are related to training or systems, Norton said.

If data corruption makes federal administrative systems ineffective, citizens begin to lose confidence in their government, Social Security's Mitchel said. Knowing that the source of data, called its pedigree, is reliable is a key element of clean data.

Automated tools (such as data profiling, search and matching, statistical analysis applications, algorithms and edits) can clean data, but the presence of dirty data might not be apparent until its consequences appear later.
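
In data-quality work, such edits are validation rules applied to individual records. Here is a minimal sketch, with hypothetical field names and rules, of how edits might flag defects before a record is passed to a downstream system:

    import re

    # Illustrative edit rules: each returns an error message for a bad record, or None.
    EDITS = [
        lambda r: (None if r.get("ssn") and re.fullmatch(r"\d{3}-\d{2}-\d{4}", r["ssn"])
                   else "bad or missing SSN format"),
        lambda r: None if r.get("dob") else "missing date of birth",
        lambda r: None if r.get("last_name", "").strip() else "blank last name",
    ]

    def run_edits(record: dict) -> list[str]:
        """Apply every edit rule and collect the failures, so defects surface
        before the record is shared with another system."""
        return [err for err in (edit(record) for edit in EDITS) if err]

    print(run_edits({"ssn": "123-45-6789", "dob": "", "last_name": "Doe"}))
    # ['missing date of birth']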

'People are starting to realize that data integrity is the fundamental thing that an administrative government organization deals with,' Mitchel said.

To foster data integrity, agencies need a governance and business model, said Scott Schumacher, chief scientist at Initiate Systems Inc. of Chicago, which provides data cleansing, matching and integration for the Veterans Affairs Department and other agencies.

"You need a governance model for what data you will share from an agency and what rules you have in place for the receiving agency to look at the information," he said.
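
One way to picture such rules, purely as an illustration rather than Initiate Systems' or any agency's actual approach: a field-level sharing policy applied before a record leaves the originating agency, so the receiving party sees only the elements the governance model allows. The recipient and field names below are hypothetical.

    # Hypothetical governance policy: which data elements each receiving party may see.
    SHARING_POLICY = {
        "benefits_partner": {"name", "dob", "claim_status"},
        "law_enforcement": {"name", "dob", "passport", "fingerprint_id"},
    }

    def release(record: dict, recipient: str) -> dict:
        """Strip a record down to the fields the policy allows the recipient
        to receive; unknown recipients get nothing."""
        allowed = SHARING_POLICY.get(recipient, set())
        return {k: v for k, v in record.items() if k in allowed}

    print(release({"name": "Doe, Jane", "ssn": "123-45-6789", "dob": "1970-05-02"},
                  "benefits_partner"))
    # {'name': 'Doe, Jane', 'dob': '1970-05-02'}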

DHS in February plans to implement a data governance framework to establish data stewardship, accountability and responsibility processes.
