NARA Web site harvest yields 75 million pages


Racheal Golden

When the caretaker of the government's history took recent snapshots of agencies' Web sites, it found hidden among 75 million pages evidence of one federal employee's obsession with a word search game and another's partiality for hangman.

These two games were among 10 the National Archives and Records Administration found during its second harvest of agency Web pages. NARA undertook the effort, required by the Interagency Committee on Government Information, to compile an archive of the government's first- and second-level-domain Web sites.

NARA was set to make the collection available on the Web late last week.

'We didn't capture high-end Web information, just simple link clicking and copying,' said Mark Giguere, NARA's chief of IT policy, planning and electronic records management.

NARA hired Information Systems Support of Gaithersburg, Md., to carry out the $337,000 project. ISS subcontracted the Web harvesting to Internet Archive, a San Francisco nonprofit. Internet Archive used a seed list of URLs provided by NARA for the site scans. For each scan, Internet Archive's software 'traverses an entire Web site tree by clicking on all hyperlinks and makes copies of those pages,' Giguere said.
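The article does not describe the harvesting software in any detail, but the traversal Giguere describes is, in essence, a breadth-first crawl: fetch a seed page, save a copy, queue every link it contains, and repeat. The following is a minimal sketch in Java of that idea, not the Internet Archive's actual code; the seed URL, the 100-page cap and the output file names are invented for illustration.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MiniHarvester {
    // Crude link extraction; a real harvester parses HTML properly.
    private static final Pattern LINK = Pattern.compile("href=\"(https?://[^\"]+)\"");

    public static void main(String[] args) throws Exception {
        // Hypothetical seed URL; in practice this would come from a seed list.
        String seed = args.length > 0 ? args[0] : "https://www.archives.gov/";
        HttpClient client = HttpClient.newHttpClient();
        Deque<String> frontier = new ArrayDeque<>();
        Set<String> seen = new HashSet<>();
        frontier.add(seed);
        seen.add(seed);
        int saved = 0;

        while (!frontier.isEmpty() && saved < 100) { // small cap for the sketch
            String url = frontier.poll();
            try {
                HttpResponse<String> resp = client.send(
                        HttpRequest.newBuilder(URI.create(url)).GET().build(),
                        HttpResponse.BodyHandlers.ofString());
                if (resp.statusCode() != 200) continue;

                // "Copy" the page: write its body to a local file.
                Files.writeString(Path.of("page-" + saved++ + ".html"), resp.body());

                // Traverse the site tree: queue every absolute link found on the page.
                Matcher m = LINK.matcher(resp.body());
                while (m.find()) {
                    if (seen.add(m.group(1))) {
                        frontier.add(m.group(1));
                    }
                }
            } catch (Exception e) {
                // Skip pages that fail to fetch or parse.
            }
        }
    }
}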

In all, the Web harvest collected 6.5TB of data from 1,300 civilian and 70 unrestricted Defense Department and intelligence agency domains.

The government began conducting Web harvests in 2000, at the end of the president's four-year term. Four years ago, following President Clinton's second term, agency webmasters transmitted information directly to NARA, leaving the agency to sort through hundreds of different data formats.

Giguere said NARA still is processing the results from the first collection. The Clinton collection won't be finished until later this year. So the harvest for Bush's first term will be the first complete snapshot of agency Web pages, Giguere said.

For the Clinton harvest, 'it was a rushed experience, so we didn't have time to pull together a coherent set of requirements,' he said. 'Since then, the e-government initiative promulgated guidance on transferring Web records to NARA, and we had the experience of the first harvest to build upon.'

Gordon Mohr, technical lead for Web projects at Internet Archive, said his organization developed an open-source application in Java that runs under Linux to pull data from agency sites.

The software used each agency's home page as a starting point and ran around the clock, 24 hours a day, seven days a week, for about five weeks, working through the first- and second-level domains of every federal Web site.
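The article does not say how NARA and Internet Archive actually scoped the crawl, but a harvest seeded from agency home pages typically needs some rule for staying within each agency's own domain. The small Java check below is only an illustrative assumption: it accepts a candidate URL when its host is the seed's registered domain or a subdomain of it.

import java.net.URI;

public class ScopeCheck {
    // Returns true if the candidate URL's host is the seed domain or one of its subdomains.
    static boolean inScope(String candidateUrl, String seedDomain) {
        String host = URI.create(candidateUrl).getHost();
        if (host == null) return false;
        return host.equals(seedDomain) || host.endsWith("." + seedDomain);
    }

    public static void main(String[] args) {
        System.out.println(inScope("https://www.nasa.gov/missions/", "nasa.gov")); // true
        System.out.println(inScope("https://example.com/", "nasa.gov"));           // false
    }
}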

Mohr said the Web harvest found that the typical government Web page, at about 150K, is more than seven times larger than the average nongovernment Web page, which runs about 20K.

Giguere said the depth and breadth of some sites, such as those of the military services and NASA, explain why the pages were larger on average and why the overall volume of data was so great.
