Another View | After net-centricity: Assurance

This commentary is an unabridged version of a guest column that originally appeared in GCN's Sept. 29 print issue.

Susan Alexander

Contributed

In January 1998, Vice Adm. Arthur Cebrowski and John Garstka first wrote about and named net-centric warfare, making the case that new IT paradigms would and should radically change military operations just as they were changing American business. A few years later, John Stenbit, then-Defense Department chief information officer, elaborated further: 'Power to the edge means making information available on a network that people can depend on and trust, and populating the network with new, dynamic sources of information to defeat the enemy while denying the enemy advantages and exploiting its weaknesses.'

The early years of net-centricity focused on connectivity. The Global Information Grid (GIG) was chiefly an assortment of big communications programs: the Global Information Grid Bandwidth Expansion (GIG-BE), the Transformational Satellite Communications System (TSAT) and the Joint Tactical Radio System (JTRS). Though connectivity is surely a prerequisite for net-centricity, Stenbit's words remind us that it is just the first step of a larger strategy.

Over the past 10 years, real world developments in cyberspace have taught us much about the other two pieces of the strategy and their implications for information assurance.

A first observation is that today's network participants and network data interact in ways we could never have predicted; a second is that the network has turned out not to be a safe haven, where we can expect to operate unharassed. Exploring these observations takes us from connection-focused net-centricity through content-centricity to mission-centricity and suggests some interesting lessons for information assurance.

Content-centricity

Net-centricity did not ignore content, but it treated data as essentially static and impersonal. TPPU (Tasking, Posting, Processing and Using, or post before process) claimed advantages over the previous paradigm, TPED (Tasking, Processing, Exploitation and Dissemination), by cutting down the time between the acquisition of data and its availability to potential consumers. It gained this speed by posting data immediately to repositories while forking off further transformation to occur in parallel.
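The post-before-process flow can be sketched in a few lines: raw data becomes discoverable the moment it is posted, while the slower transformation runs in parallel. This is a minimal illustration only; the repository and function names are invented, not any real DOD interface.

```python
import concurrent.futures
import threading

# Hypothetical in-memory "repository"; names are illustrative only.
repository = {}
_lock = threading.Lock()
_pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def post_before_process(key, raw_data, transform):
    """TPPU-style flow: post raw data immediately, process in parallel."""
    with _lock:
        repository[key] = {"raw": raw_data, "processed": None}

    # Fork off the slower transformation; consumers need not wait for it.
    def _process():
        result = transform(raw_data)
        with _lock:
            repository[key]["processed"] = result

    return _pool.submit(_process)

# Raw imagery is discoverable the moment it is posted...
future = post_before_process("img-001", b"\x00\x01", lambda d: d.hex())
future.result()  # ...while exploitation completes in the background.
```

Under TPED, by contrast, the transform would have to finish before anything appeared in the repository at all.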

TPPU also attempts to take away originator control of dissemination, presumably so data can be found by all who need it. To obtain this more universal availability of data, however, TPPU breaks traditional relationships: between collectors and the specialists who transform the data into information or intelligence, and between consumers and originators.

Despite frequent exhortations to 'populate the 'Net,' in the Defense Department we find it very hard to get data owners to post. One reason for this reluctance may be the anonymity of a publish-and-subscribe system. We don't know who is pulling our data. We don't see why they should have our information. Though TPPU has captured the data, it hasn't replaced the interactions of the participants in the old system with something as compelling.

A second idea to have been embraced by net-centrists is service-oriented architecture (SOA). A SOA extends the server-client relationship often found in local networks across the enterprise. Commonly needed services may be performed in a central location, rather than on a user's machine or the local server.

Though originally embraced for their IT economies, SOAs are also a form of posting, providing a means to proliferate a rich set of services across the enterprise, even to users whose own enclaves might be incapable of supporting or inventing such capabilities on their own. Like TPPU's, SOA's success is limited by the willingness of service developers to offer those services. For the user or host of a service, SOA also shares TPPU's integrity concern: In a nutshell, is this newly posted element -- whether data or service -- safe to use?
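As a toy illustration of the posting model behind an SOA, consider a registry where providers publish services by name and thin clients invoke them without hosting the capability themselves. Everything here -- the registry, the publish and invoke functions, and the sample service -- is hypothetical.

```python
# A toy service registry: services are published centrally and invoked by
# name, so thin client enclaves need not implement them locally.
services = {}

def publish(name, fn):
    """A provider posts a service once for the whole enterprise."""
    services[name] = fn

def invoke(name, *args):
    """A consumer calls a service by name -- trusting it is safe to use."""
    if name not in services:
        raise LookupError(f"no such service: {name}")
    return services[name](*args)

# A provider posts a rough latitude-distance service (illustrative math)...
publish("distance_km", lambda lat_a, lat_b: round(abs(lat_a - lat_b) * 111.0, 1))

# ...and any consumer on the enterprise can use it without hosting it.
invoke("distance_km", 38.9, 37.2)
```

The consumer's integrity question from the paragraph above lives entirely inside `invoke`: the caller has no visibility into what the named service actually does.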

Given sufficient investment, today's information assurance approaches seem capable of successfully tackling the issues involved in securing TPPU and SOA. These frameworks remain essentially transactional, much like the point-to-point communications of the pre-networked world.

Cryptographically bound metadata will address issues of provenance and integrity, and we'll eventually work out secure middleware and a way to certify and accredit services. TPPU and SOA were a good start, but they are not what the Internet looks like today. Although it starts with people posting, posting doesn't begin to describe the complex associations that emerge among contributors, consumers, data and services. The Internet content revolution presents a fundamental challenge to our IA paradigms.
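One way to sketch cryptographically bound metadata is a keyed MAC computed over the payload and its provenance fields together, so tampering with either is detectable. A real system would use public-key signatures and managed keys; the shared key and field names below are illustrative stand-ins.

```python
import hashlib
import hmac
import json

# Stand-in key for illustration; a real system would use managed keys
# or public-key signatures rather than a shared secret.
SECRET_KEY = b"demo-key-not-for-real-use"

def bind(payload: bytes, metadata: dict) -> dict:
    """Bind provenance metadata to a payload with a keyed MAC."""
    canonical = json.dumps(metadata, sort_keys=True).encode() + payload
    tag = hmac.new(SECRET_KEY, canonical, hashlib.sha256).hexdigest()
    return {"payload": payload, "metadata": metadata, "tag": tag}

def verify(record: dict) -> bool:
    """Recompute the MAC; any change to payload or metadata breaks it."""
    canonical = json.dumps(record["metadata"], sort_keys=True).encode() + record["payload"]
    expected = hmac.new(SECRET_KEY, canonical, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["tag"])

record = bind(b"sensor reading", {"origin": "collector-7", "time": "2008-09-29"})
assert verify(record)
record["metadata"]["origin"] = "spoofed"  # tampering with provenance...
assert not verify(record)                 # ...is detected
```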

The dynamism of the Web depends on two scary properties from an assurance perspective: the ultimate wisdom of the collective and the constructability of ephemeral services. On the 'Net, earned reputation replaces pedigree and analysis. That the Internet works at all is testament to the Law of Averages. While there is no guarantee that any individual fact is correct or any service is bug-free at any given time, on average, things work pretty well, and functionality is increasing.

Studies have shown Wikipedia to be about as accurate as bound encyclopedias. Mash-ups and mark-ups get me to my destination as well as or better than my road atlas. However, if our assurance strategy requires that we issue gold disks of carefully vetted data or software, Web 2.0 is an assurance nightmare. The challenge of the dynamic, content-centric environment for IA is tolerance: error tolerance, fault tolerance and the ability to trust unfinished and imperfect systems just well enough to use them.

No safe haven

It was probably not long after the first roads were built that thieves began to lie in wait for those who traveled them. The past 10 years have brought a great variety of 'highwaymen' to the information highway, much to the dismay of e-merchants.

Ironically, for our community this surge in cybercrime has had a benefit in that it has enabled us to speak openly of exploits which are now no longer theoretical or secret. Masquerade and privilege-elevation attacks coupled with the lack of strong authentication on the Internet allow all that rich content, conveniently posted for maximum discoverability, to be exfiltrated with relative ease.

The Storm Worm botnet has shown us that the network itself is a resource that can be stolen. It seems its creators decided they could make a lot of money by getting millions of computers around the world to send spam, and they have built both an elaborate command-and-control structure and defenses that punish those who interfere with them.

Right now it is in the botmaster's interest for those computers to stay up, but other cybercriminals make money by threatening to take computers down. Companies now must confront the dilemma of whether to pay cyber-extortionists to avoid Web site defacement, public disclosure of vulnerabilities and fatal denial-of-service attacks.

Today we know we can expect to be vulnerable to nation-state versions of the same categories of attacks dogging the commercial world, but early net-centricity dogma was fairly silent on this subject. Stenbit spoke of 'populating the network with new, dynamic sources of information to defeat the enemy while denying the enemy advantages and exploiting its weaknesses.' Whether intended or not, this language makes the network sound like a library when it is, in fact, a battlefield!

If we use the network to disseminate content, the adversary will try to see that information. If we use it for command and control, the adversary will try to subvert command integrity. If the enemy can't co-opt the network, he'll take it down so we can't use it either. Safe-haven thinking was extremely unhelpful to information assurance because it not only posed the wrong problem but also created specious expectations. 'Secure the library' sounds easy; 'secure the battlefield' is impossible. The battlefield is where we engage the enemy, and the goal there is to survive in order to prevail.

Our second observation teaches us another lesson in tolerance, this one very close to home: tolerance of IA's inability to provide a perfect defensive shield. It is unreasonable to expect to keep the enemy off the battlefield. But though we may not be able to keep up with rapidly-evolving exploits well enough to guarantee the adversary will not find his way into the network, that doesn't mean we give up either.

Our responsibility is to the mission, not the network. We can create mechanisms that enable the mission to operate through attack. We can develop architectures that buy us time to evolve our defenses and make it easy to deploy new countermeasures. We can and should think of IA more like a weapon. No weapon is omnipotent, and no weapon dominates forever, but new and better weapons raise the cost and risk to the adversary of attacking us.

Mission-centric IA

So what will mission-centric IA look like? Adopting a philosophy of advantage over perfection lies at the center of the shift. We have a commercial-off-the-shelf strategy in the DOD because commercial technology brings useful functionality to market so quickly, but we have an IA evaluation paradigm which is severely stressed by change.
The philosophy of advantage gives us permission to factor time into our calculations of assurance and appropriateness. An advancing force can build a pontoon bridge across a river because it doesn't expect it to be there forever.

Until we make time our friend, we will always hold back the mission. The idea of advantage can also counterbalance our tendency to let the perfect be the enemy of the good. As Web 2.0 demonstrates, the ability to deploy and redeploy quickly is much more important in the long run than the ability to deploy flawlessly the first time.

Advantage helps us in content space by speeding our access to capability. How does it help with our no-safe-haven problem? If we concentrate on the mission advantage and not on the network, we find, in contrast to today's approach, that we gravitate first to solutions that don't require that we know much about specific attacks.

Our first priority is to ensure we can do the mission while under attack, which we accomplish by reducing the damage successful attacks can do and creating and exercising mission plan B's.
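That first priority can be illustrated with a simple 'plan B' pattern: attempt the primary capability, and when it fails under attack, fall back to a rehearsed, degraded alternative rather than halting the mission. The function names here are hypothetical, invented for illustration.

```python
# Illustrative "plan B" pattern: keep the mission running when the primary
# capability is under attack by degrading gracefully instead of failing.
def run_mission_step(primary, fallbacks):
    """Try the primary capability, then each rehearsed fallback in order."""
    for capability in [primary, *fallbacks]:
        try:
            return capability()
        except Exception:
            continue  # damage limited: move on to the next plan
    raise RuntimeError("all capabilities exhausted; mission step failed")

def networked_feed():
    # Simulated attack: the live link is being denied.
    raise ConnectionError("link under denial-of-service")

def cached_feed():
    return "last-known-good picture"  # degraded but mission-usable

run_mission_step(networked_feed, [cached_feed])  # mission continues
```

The point of exercising plan B's in advance, as the paragraph above suggests, is that the fallback path is already proven when the attack arrives.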

Next, we re-architect to eliminate easy entry points and attack vectors.

Then, we deploy control structures capable of responding to the attacks we detect.

Finally, we address detection, but architecturally, recognizing that the adversary will constantly change his game and that our detection strategies must be just as agile.

Injecting this new thinking into our current institutions will take time and creativity, but if we're successful, we'll greatly increase our freedom to maneuver. This work should keep us pretty busy over network-centric warfare's next decade.

Alexander (Susan.Alexander@osd.mil) is Chief Technology Officer, Office of Secretary of Defense, Information and Identity Assurance.
