The bottom LINE
- By Richard W. Walker
- Nov 16, 2004
Successful vetting of technology is tied tightly to business aims
'We're seeing a lot of capability demonstrations as part of the acquisition process. That's a good move.'
For government agencies today, technology evaluation isn't the sole province of the IT shop. It is, or should be, part and parcel of an agency's mission-based management processes. Mary Mitchell, deputy associate administrator for electronic government and technology in the General Services Administration's Office of Governmentwide Policy, says technology evaluation doesn't begin on the workbench. Far from it. She says it starts with the question: What problem are you trying to solve? GCN associate editor Richard W. Walker recently interviewed Mitchell by phone.

GCN: What are the first steps for agencies in evaluating technologies?
Mitchell: Before we get to assessing technology and the pros and cons [of a particular technology], we have to start a lot earlier in the process than that. It really starts with a very solid understanding of the problem you're trying to solve from a business and user perspective. It's later in the process that you worry what technologies might contribute to solving that problem.
That also means getting solid requirements on what functionality you need, what population you're trying to serve and how you expect utilization to grow over time.
The reality is, we don't buy a lot of [hardware and software] independently anymore; we're really looking for a total solution. But we still have to make sure we have a dialogue [between the government and a potential service provider] in some of the policy areas, because a government solution may differ from a purely commercial solution.
For example, government has a very different perspective on IT security, and one might say that we look at risk slightly differently than the commercial side.
Then there's accessibility [for the disabled] and privacy. Those tend to be areas that [vendors] responding to our proposals may not get. So there needs to be a dialogue with all the proposers.

GCN: What's the role of market research in technology evaluation?
Mitchell: It's our responsibility to do an adequate job of market research before we ever get a solicitation on the street. There are lots of ways to do that.
I'm a fond user of industry days because it is a two-way dialogue. Industry [representatives] can hear what we're thinking. Also, time and time again, an audience member, not necessarily a supplier, will say, 'Did you think about this? Did you know about that?' So it's a very rapid and constructive way for us to elevate the whole dialogue of what [technology] is needed.

GCN: What are some of the special challenges for government agencies in evaluating technology?
Mitchell: I've mentioned privacy. That's an area where we've gotten a little more formal than we used to be. Over the long haul, privacy really impacts the impression that the public and businesses [have of government] and even their willingness to work with us. There's the assumption on the user side that we're not doing enough and in some cases that may be true.
Another area is information security. The risks are growing and the incidents are growing and it's pretty hard to keep ahead of the herd.
Then there's accessibility. People might think we've got this very tiny disabled community, so why is that such a big deal? But achieving accessibility isn't such a clear-cut thing. It's not like an XML standard, where I can get objective criteria and objective tests to nail down whether it's working. [In the case of evaluating technology for accessibility], you've got the underlying technology but you've also got how it's being applied by the end-using community, which can take a technology that's fundamentally accessible and make it inaccessible in a heartbeat.

GCN: How do agencies make sure that they're getting the 360-degree view of the technology that's out there and weighing all the pros and cons of the technology under evaluation?
Mitchell: There are multiple ways of doing that. Requests for information are great, but they can take a huge amount of analysis. You get a huge amount of junk back.
Also, it's not uncommon for us to bring in an expert who's had experience in the technology to augment our team as a consultant.
The composition of the assessment team also is really critical. We try to get a number of different perspectives on that team and, depending on the size of the acquisition, a lot of times we'll bring our customers into that team.
We're also seeing a lot of capability demonstrations as part of the acquisition process. That's a good move. Bringing in [competing vendors] gives you the ability to assess not only what they're really talking about from a functionality perspective but also the quality and strength of their management team.
Definitely invite the [bidding teams] in to have a dialogue. Obviously you're not asking for them to build the system for you up front, but many times they can demonstrate the concepts or the foundations of the proposal.
But, again, technology is just a piece of this whole puzzle. You want to come up with the criteria up front that assess the technology, but you also want to assess other very key features, [such as] objective criteria for the minimum functionality you're going to demand from any [bidder] that makes the competitive range.
Sharing experiences from other agencies [is another way to learn the pros and cons of various technologies]. I go out and search FedBizOpps [www.fedbizopps.gov] to see who's done something similar.

GCN: Where do you look for best practices in assessing technology?
Mitchell: Talk to other agencies. Look at the studies from Gartner Inc. and Forrester Research Inc., for example. Sometimes we look at the experience of state and local governments. There's also the CIO Council's Solutions Exchange [www.cio.gov/tse], which is going after best practices. This is a new approach to get to that.

GCN: Is it useful for agencies to seek outside expertise in evaluating technology, and how strong a part should the consultant play in the evaluation process?
Mitchell: It's a fairly common thing to augment the [agency's evaluation] team with expertise in an area that is still a bit of an unknown.
But a federal requirer really needs to drive the process and understand the role of that external expertise. You're going to bring in that [consultant] for very targeted purposes.

GCN: Once you get to the technology, what's involved in assessing new or emerging technologies and deciding whether or not to stick with a current technology?
Mitchell: The government typically says, 'We don't want to be on the cutting, bleeding edge. We want to be in the area where the bugs have been worked out.' But you also don't want to be pursuing [a technology] that's going to be overtaken by something that's way better.
So part of that is going back to knowing what problem you want to solve. It's a business decision. You really need to assess the potential for this [technology] to do a better job. What kind of process standardization and change do I need to put in place to make it happen? What are the risks of adopting this new technology?
A lot of times, you will see agencies use a limited pilot before making a big commitment to move to a new technology because you can't really determine how a technology is going to work to solve your problem just from market data and past experience.
GCN: Are the fundamental principles the same for evaluating different types of technologies?
Mitchell: Again, I'm going to go back to the business aspect. It really is looking at our current business processes: the kind of improvements [that] can be made and what technologies fit into making those improvements.
I'm a big standards bigot. I like to go out and see what's happening in the industry with respect to standards. Interoperability has gotten a lot easier, particularly with the ability to black-box it and do data exchange between things.
You've got other kinds of issues, some of which aren't really technology issues. We've heard the debate about open source. Where is open source? From a federal perspective, we don't really care; we don't want to be down there trying to figure out how to support this stuff ourselves. We want the services, so for us, open source versus a commercial alternative is really more of an issue of what's the maturity, what's the capability, what's the level of support? So it's really more, do I have a whole package? [Open source] is just one of the alternatives and I would treat it just as I would any other technology alternative to solve a problem.

GCN: Are there any major differences between assessing hardware and software?
Mitchell: Hardware is really more about price points until you get into special hardware and devices. [For government,] it really is about price because we don't want to use those proprietary platforms. With software, it is more about coming up with the requirements and the evaluation framework you're going to use. It really is a systems approach for a lot of this stuff and you can't look at it in isolation. You have to look at it as a big system.

GCN: What are the most important factors in assessing hardware?
Mitchell: Scalability is definitely still an issue. We're also looking for stuff that plugs together and can be interfaced to other things. Rarely do you have the freedom to start over. We want solutions that fit with other pieces of pre-existing functionality. We're looking to modernize and provide additional functionality, but not necessarily rip out and start over.

GCN: What are the most important factors in assessing software?
Mitchell: Interoperability is still a very big issue. The government really shouldn't need to be in the interoperability-testing world, so we look to industry to do that. On the other hand, in areas that we really care about, like security, we've had to [do interoperability testing]. I'll give you an example. For the identity management E-Authentication initiative, we've stood up an interoperability test lab. And the Defense Department has stood up a test lab for biometrics and smart cards, and we're leveraging that.

GCN: Do frameworks like the Federal Enterprise Architecture help in the evaluation process? Do they help parse the technology under evaluation and make sure it matches with business processes?
Mitchell: My office has done a huge amount of work on the FEA, so I'm biased there. But the reality is, for this stuff, you're really looking more at the agency blueprint for the future than the FEA. The FEA gives you this nice framework to fit it all in, and it's helpful when you're trying to do these interagency things. But typically, with the exception of these big projects we're driving in e-gov, the thing to look at is the agency blueprint for the future. It really ties in with what businesses you want to be in and what processes are currently successful.

GCN: Do you agree that it shouldn't be just the IT shop or CIO driving the evaluation process, that you need all of the agency's 'business owners' at the table?
Mitchell: No offense, but the CIO ought to be more of the orchestrator. You've got to have user involvement. Everybody knows these things aren't successful without a champion. But the champion doesn't need to be the CIO. It could be somebody from a business unit. When it comes to that, we might have more parochial views or we might not have as good of a way of sorting all of the interdepartmental politics. But it's not that different than what you find in a big company.