Exploring the deep web
Internal and external federated systems lead users to treasures that regular search engines can't find
GCN Illustration by Sam Votsis
For the past decade, the Energy Department's Office of Scientific and Technical Information in Oak Ridge, Tenn., has been using the Internet to speed research processes.
'When we first started posting information on the Web in 1997, we relied on search engines provided by the database vendors,' said OSTI Director Walt Warnick. 'It soon occurred to us that it would be helpful to provide our patrons with the ability to search across multiple databases at one time.'
That led the agency to install federated search software: a search engine that simultaneously executes a query against a number of databases in real time, then aggregates and ranks those results. In April 1999, OSTI launched the EnergyFiles site (www.osti.gov/EnergyFiles/), providing access to more than 500 DOE databases and sites. That was followed in 2002 by Science.gov, which allows a single query to pull data from 30 scientific research databases at 12 federal agencies. February 2007 saw the release of Science.gov 4.0, with greatly enhanced relevance ranking. OSTI is now working to expand the system to include government research sites worldwide.
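The core mechanics described here (fanning one query out to many databases at once, then aggregating and ranking the hits) can be sketched in a few lines of Python. The source objects and relevance scores below are hypothetical illustrations, not any vendor's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def federated_search(sources, query, max_results=20):
    """Send one query to every source in parallel, then merge and rank.

    Each source is assumed to expose a search(query) callable that
    returns (title, relevance_score) pairs; in a real system each
    callable would wrap a database's query API or search form.
    """
    hits = []
    with ThreadPoolExecutor(max_workers=len(sources)) as pool:
        futures = [pool.submit(s["search"], query) for s in sources]
        for f in as_completed(futures):
            hits.extend(f.result())
    # Rank the aggregated results, best score first.
    hits.sort(key=lambda h: h[1], reverse=True)
    return hits[:max_results]
```

The key property is that the sources are queried concurrently rather than one after another, so total latency is bounded by the slowest source instead of the sum of all of them.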
'Our mission is to accelerate the spread of knowledge to accelerate the advance of science,' Warnick said. 'Federated search is a very useful way for making that happen.'

The dark Web
Google may dominate the search market, but it has two major shortcomings. The first is that it barely accesses what is known as the deep Web, invisible Web or hidden Web: data that is available over the Internet but cannot be indexed by Web crawlers, at least not without Webmasters preparing a text file listing all the entries of a database. This material resides in databases and can be summoned only by dispatching a query or filling out a form.
'In 2000/2001 we did some analysis and realized that the quantity of documents from these deep-Web databases was far bigger than what everyone was calling the Internet,' said Jerry Tardif, vice president at search firm BrightPlanet.
Tardif estimated that the deep Web is several hundred times the size of the surface Web, the data that search engines normally capture. Others give a lower figure: Abe Lederman, president of Deep Web Technologies, the company that makes the Explorit search software used by Science.gov and the Defense Technical Information Center (DTIC), said the deep Web contains about 94 percent of what is on the Internet. Whatever the exact size, if you are using only Google or Yahoo, you are missing most of what is out there.
'Google makes search look simple, but in fact, search is not simple, particularly when completeness is important,' said David Fuess, a computer scientist at Lawrence Livermore National Laboratory's Nonproliferation, Homeland and International Security (NHI) directorate.
The other problem is information overload. Public search engines may be fine for locating a hotel in Singapore, but not for professional research.
Federated search engines address both of these problems. By searching multiple databases simultaneously ' an organization's own internal databases, in addition to other public or private databases ' they expose that massive quantity of data hiding on the invisible Web. They address information overload by searching only those databases required by a particular type of information customer. The Science.gov search engine, for example, doesn't even access all the data available on the DOE site.
'Science.gov is mostly [research and development] findings,' Warnick said. 'There are a lot of things that Science.gov does not have on it. For example, the Energy Information Administration is not a Science.gov site.'
Instead, it gives searchers in-depth access to research papers from CENDI (originally the Commerce, Energy, NASA, Defense Information Managers Group), an interagency working group of senior scientific and technical information (STI) managers from a dozen agencies, including DTIC, the National Agricultural Library, the National Library of Medicine and the National Science Foundation. Together, CENDI members control more than 95 percent of the federal R&D budget, so accessing their databases provides a near-comprehensive overview of federally funded research. OSTI also hosts several other federated search sites including E-Print Network (www.osti.gov/eprints) and Science Accelerator (www.scienceaccelerator.gov).
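The scoping idea at work here, where each federated portal searches only the databases relevant to its audience, can be illustrated with a small source catalog. The database names and topic tags below are invented for the example:

```python
# Hypothetical source catalog: each entry tags a database with the
# subject areas it covers.
SOURCES = {
    "pubmed":  {"topics": {"medicine"}},
    "ntis":    {"topics": {"engineering", "energy"}},
    "eprints": {"topics": {"physics", "energy"}},
}

def sources_for(profile_topics):
    """Select only the databases relevant to a user profile, so a
    query fans out to a curated subset rather than to everything."""
    return sorted(
        name for name, meta in SOURCES.items()
        if meta["topics"] & set(profile_topics)
    )
```

Restricting the fan-out this way is what keeps a specialized portal from drowning its researchers in hits from databases that were never going to be relevant.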
DTIC (www.dtic.mil) has its own federated search engine, STINET (Science and Technical Information Network) Federated Search (www.dtic.mil/dtic/search/federated_search.html), specializing in providing research information to the Defense Department community. Databases include DTIC's own research collection, periodicals from the Air University Library and Joint Forces Staff College, and certain databases maintained by other federal agencies. Users can click on which databases they want to search before submitting their query.
'Our customers wanted to come to a single site and search for scientific information from both the DTIC and our sister organizations in other federal agencies,' said Ricardo Thoroughgood, chief of the STINET Management Division. 'Initially, it was an internal DOD resource, but we shut down that site and made it available to the public with all unclassified and unlimited information, so that data is readily available to the public through the STINET databases.'
In addition to the publicly available federated search sites, both Energy and DOD use federated search internally. Lawrence Livermore National Laboratory in Livermore, Calif., for example, uses BrightPlanet's Deep Query Manager to provide custom searches for different types of users. In the case of NHI, Fuess said, federated search is used to find information on non-U.S. users and consignees who may receive dual-use, export-controlled goods from U.S. vendors.

Search setup
Setting up a federated search system is not simply a matter of installing software.
'Information technology staff need to understand that this is not a trivial undertaking,' Lederman said. 'It is very unlikely that this is something an IT person at an agency can just purchase a copy of, set up and run.'
The process starts with defining exactly what types of searches your users will perform and what databases contain the desired information.
'If an agency is federating search on their own databases, they generally know what they have, where it is and the type of information that is in there,' Tardif said. 'But if they are doing something on the outside, they need subject-matter expertise on what public sources are available.'
Then there is the matter of setting up the user interface to be intuitive and easy to use, but also with enough detail to let users narrow searches to the exact source of relevant information. The California Digital Library (www.cdlib.org), for example, uses MetaLib from Ex Libris but found that extensive customization was needed.
'Since the user interface of the commercial product was not as flexible as we required, we needed to build our own user interface layer and use the application program interface of the commercial application to handle the connections to multiple sources, the searching, merging of search results, deduplication and ranking,' said Roy Tennant, the library's user services architect.
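Of the back-end steps Tennant lists, deduplication is the least obvious. A minimal sketch, assuming results arrive as (title, score) pairs, is to key records on a normalized title and keep the best-scoring copy; production systems compare richer metadata such as identifiers and author lists:

```python
def deduplicate(hits):
    """Collapse duplicate records returned by multiple sources.

    Hits are (title, score) pairs; case-insensitive title matches
    are treated as the same document, and the best-scoring copy
    of each document is kept.
    """
    best = {}
    for title, score in hits:
        key = title.strip().lower()
        if key not in best or score > best[key][1]:
            best[key] = (title, score)
    # Return the surviving records ranked best score first.
    return sorted(best.values(), key=lambda h: h[1], reverse=True)
```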
Then comes the matter of establishing the links to the data sources and keeping those up-to-date. BrightPlanet has scripts for searching more than 70,000 public databases, and the appropriate ones can be used as part of an agency's federated search engine. You might need custom scripts for any internal databases. But establishing those search links is not a one-time activity; they must be updated whenever the database owners make changes to their sites.
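A common way to keep those links maintainable is to isolate each source behind its own connector, so that when a database owner changes a query form or result format, only that one connector is rewritten. The connector interface and the pipe-delimited result format below are purely illustrative:

```python
class Connector:
    """One connector per database. When a source site changes its
    query form or result markup, only its connector is updated;
    the rest of the federated search engine is untouched."""
    name = "base"

    def build_query(self, terms):
        raise NotImplementedError

    def parse_results(self, raw):
        raise NotImplementedError

class ExampleConnector(Connector):
    # Hypothetical source that returns "title|score" lines.
    name = "example-db"

    def build_query(self, terms):
        return {"q": " ".join(terms)}

    def parse_results(self, raw):
        hits = []
        for line in raw.splitlines():
            title, score = line.rsplit("|", 1)
            hits.append((title, float(score)))
        return hits
```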
Finally, there is the matter of ensuring that the data returned is comprehensive and relevant.
'The real question you must answer is the consequence of missing a critical piece of available information vs. overwhelming your researchers with huge volumes of information,' Fuess said. 'To be effective, you must strike a proper balance that maximizes the probability that the information you seek is in the results and that the results can be reviewed within the response time allowed.'