Last Thursday, the World Wide Web Consortium kicked off a workshop on establishing e-government standards for information sharing. There, the attendee buzz centered on a fresh catchphrase: open-government data.
The phrase distilled an idea that the new federal chief information officer, Vivek Kundra, has been pushing: government agencies exposing more of their data for public use. And everyone was discussing it. One attendee asked how agencies could prepare the data in such a way that it would be useful to others. Maybe all government employees should have their own blogs so they could describe how they are fulfilling the agency mission, another suggested.
Kundra's vision was broad and would require a lot of work. But many federal managers — at least those charged with keeping and extending government data — were clearly passionate about making the whole idea of government data feeds work.
Now that vision may be evaporating as quickly as it arrived.
It was only two weeks ago that Kundra, formerly the chief technology officer for Washington, D.C., was named the first federal CIO for the U.S. government. Judging from his work in Washington, he had more than a few innovative ideas about how government could better use IT — from cloud computing and open data feeds to better reporting tools for displaying such data.
A week later and just a few hours after his keynote at the FOSE IT expo, the FBI arrested a member of Kundra's D.C. office on charges of bribery. Although Kundra was not charged, the Obama administration placed him on a leave of absence until the scope of the criminal case could be better understood.
Even if Kundra is cleared of malfeasance, the question remains whether he will be returned to the federal CIO spot. "The fact that Kundra himself is not accused of any wrongdoing is beside the point," Federal Computer Week news editor Michael Hardy pointed out. "As the District of Columbia's chief technology officer since 2007, he was responsible for the actions of his 300 employees."
For an administration harping on government transparency and oversight, the fact that these shenanigans took place in Kundra's office may be reason enough to look elsewhere to fill the post.
Whatever the outcome for Kundra, it would be a pity to lose his ideas, even if they require additional work on the part of IT managers. As Kundra pointed out, the federal government has fallen behind the commercial sector in deploying new technologies, and that disparity brings both greater costs and poorer service to citizens. Opening government data and making greater use of commercial cloud technologies are both valid ideas that could close that gap.
Update (3/17/09): The White House has reappointed Kundra as CIO, according to sources.
Posted on Mar 16, 2009 at 7:05 PM | 3 comments
President Barack Obama today named Vivek Kundra, who had been the chief technology officer for Washington, D.C., to the job of federal chief information officer. The role will also encompass the former title of Administrator for E-Government and Information Technology at the Office of Management and Budget, a job previously regarded as the de facto federal CIO.
Kundra's appointment seems a logical step forward for the federal government, even if it could cause a bit of duress for federal agencies, many of which have valued deliberation over speed, a trait Kundra has railed against.
As Dan Munz at Government Executive blogged, the e-gov czar role, created to help carry out the E-Government Act of 2002, provides leadership for federal agencies in efforts to promote "electronic Government services and processes."
Historically, this role of e-gov czar has been about pushing agencies outside their comfort zones.
The former e-gov czar, Karen Evans, prodded agencies to go through the sometimes painful though necessary work of securing their networks and computers, through OMB mandates such as Trusted Internet Connections, the Federal Desktop Core Configuration and the migration to Internet Protocol version 6 (IPv6). Agencies are still toiling to meet some of those mandates.
Her predecessor and the first e-gov czar, Mark Forman, introduced agencies to the idea of looking at their IT systems in a systemic and holistic fashion through the use of enterprise architecture. Forman was deep; many are still trying to decipher some of his more profound observations. He also showed how agencies could work together to get e-gov projects off the ground, despite slim funding from Congress.
With Kundra, agencies can expect to see more calls for change, in particular around greater transparency and more use of Web 2.0 and cloud computing technologies, at least if Kundra's role at D.C. is any indication. There, he championed such causes, often as a cheaper, faster alternative to the typical government ways of procuring technology.
"One the biggest problems in government is that process has trumped outcome," he said in one video interview. "As everyone is focused on compliance, no one is thinking about innovation."
Last fall, GCN covered how the city of D.C. had contracted for Google Apps licenses for 38,000 users, a deal worth about $500,000 a year.
Kundra explained that going with Google Apps could cut the cost of procuring enterprise software while making it easier for D.C. employees to interact with coworkers. "Why should I spend millions on enterprise apps when I can do it at one-tenth the cost and 10 times the speed? It's a win-win for me," he has been quoted as saying.
During a Webcast GCN held in January, Rob Mancini, the D.C. Office of the CTO's program manager for citywide messaging, talked about the city's use of the Google online apps. He said Kundra had instructed each D.C. employee to set up a Web site within Google Sites. "By requiring people to create their own sites, they learn what they can do with the technology," Mancini said. And when they needed to share information with other employees, they knew how to do so in an easy fashion. Mancini himself found the process much easier than burning a CD, e-mailing search results or using other techniques to convey some piece of needed information.
Mancini also mentioned that employee use of the online Google spreadsheet has been particularly successful. During a project, instead of mailing spreadsheets around, employees just log into Google and do their work on a common spreadsheet. This is also effective insofar as employees can log in from home without using dedicated virtual private networking (VPN) software. Google Apps uses D.C.'s user authentication system, so a separate log-on service is not needed.
E-mail is another app that OCTO is moving to the cloud. The e-mail of about 20,000 employees is now being forwarded from Microsoft Exchange servers to their Gmail accounts, Mancini said. One Webcast listener questioned the need for the Exchange servers at all, but Mancini noted that the city is still testing Gmail and wouldn't yet say whether or when the in-house servers would be eliminated.
Behind this shift to Google apps seems to be a conviction that commercial IT services could be more efficient than in-house systems. In the GCN article, Kundra marveled at how the average person with a laptop and a broadband connection has just as much computer power as the average police officer or school teacher.
"He said one reason [for this technical reticence] is agencies' preference for in-house, proprietary or custom solutions, which officials believe offer more security," the article stated. Such security worries are overblown, at least in most cases, Kundra felt. "If you think about it, there is very little the government does that is private," he told GCN.
Another set of projects Kundra spearheaded shows a conviction to openness: that agencies could benefit by publishing their data so that others can use it for their own purposes. Last year, D.C. published more than 240 data feeds, all drawn from internal systems, ranging from metro timetables to crime reports.
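For developers, consuming one of those feeds is a small job. Here is a minimal sketch in Python of pulling entry titles from an Atom-style feed; the URL is a made-up placeholder, not one of the city's actual endpoints, and the feed is assumed to follow the standard Atom schema:

    # A minimal sketch of reading an Atom-style open-data feed (Python 2 era).
    # FEED_URL is a hypothetical placeholder, not D.C.'s real endpoint.
    import urllib2
    from xml.etree import ElementTree

    FEED_URL = 'http://data.example.dc.gov/feeds/crime.atom'  # hypothetical
    ATOM = '{http://www.w3.org/2005/Atom}'  # Atom XML namespace

    feed = ElementTree.fromstring(urllib2.urlopen(FEED_URL).read())
    for entry in feed.findall(ATOM + 'entry'):
        print(entry.findtext(ATOM + 'title'))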
To spur applications that use this data, Kundra then kicked off a contest, called AppsForDemocracy. Instead of building applications for the public itself, D.C. hoped to motivate volunteers to build apps for the Web, mobile phones or other platforms. The effort led to more than 47 new apps, built at a fraction of the cost of building them in-house.
Again, by relying on this new approach to procurement, namely Web 2.0 tools and crowdsourcing, agencies could save money. The city estimated that commissioning all the apps built for AppsForDemocracy individually would have cost more than $2.6 million. Running the contest came to only about $50,000. (OCTO also created its own site, the Digital Public Square, to make data feed information available as well.)
These are just a few of the new initiatives that have come from Kundra. His office has also opened the city's procurement process, publishing requests for proposals on the Web and offering introductory information on YouTube. In-house, his employees use wikis and a Twitter-like messaging service, and he has floated the idea of "letting drivers pay parking tickets and renew driver's licenses on Facebook," the Post reported.
Behind these initiatives, obviously, is a belief that technology can be used for positive change. Not surprisingly, Kundra was a supporter of the Barack Obama presidential campaign, which also spoke of government change. Kundra has stated that his career goal is "to affect change as a public servant."
Like Obama, Kundra had an international upbringing. If Wikipedia is to be believed, he was born in India and grew up in Tanzania, speaking Swahili as his first language. When he was 11, his family moved to Gaithersburg, Md. He majored in psychology as an undergrad and earned a master's degree in information technology from the University of Maryland. He is also a graduate of the University of Virginia's Sorensen Institute for Political Leadership. Before coming to D.C., he served as Virginia's Assistant Secretary of Commerce and Technology. He has also spent time in the private sector, at SAIC and as chief executive officer of the startup Creostar.
Not surprisingly, he has brought some of the swiftness of the business world to his government work. For instance, he set up OCTO to run like an open trading floor, and he even uses financial portfolio management software to track the success of IT projects. "The main work area is set up to resemble the trading floor of a stock exchange; the open seating format encourages collaboration, while monitors attached to the walls provide up-to-date information on all projects currently under way," NextGov reported.
Sometimes, however, his enthusiasm for speed has run ahead of his execution. At D.C., he led a $4 million effort to supply more than 6,300 computers to D.C. schools. The Washington Post reported that there have been "several glitches" in rolling out computers to the school system, glitches that at least one education administrator has pinned on OCTO. Another hiccup reported by the Post: He took his people out on a retreat before getting full authorization, costing the city $23,000.
So, it would not be a stretch to assume that, in his new role, Kundra would look to make better use of the rapidly evolving Web 2.0 world. "Vivek is someone who can bridge those sectors [of government and commercial IT] to really unleash innovation," Arun Gupta, a partner at venture capital firm Columbia Capital, told the Post.
Posted on Mar 05, 2009 at 7:05 PM | 0 comments
Last week, Google announced the fees for its Google App Engine (GAE) service.
Although the Web application hosting service was free before (and for moderate usage remains gratis), the new payment structure will allow organizations to use the service in a systematic fashion. In other words, GAE has gone from beta to production.
It was a bit of good timing last week when consulting company Information Concepts held a morning-long introductory seminar on GAE in the brightly colored Google offices in Reston, Va. Although primarily a Microsoft .NET shop, Information Concepts started a cloud-computing practice a few years back. Wayne Beekman, co-founder and co-owner of the firm, gave the presentation.
For those of you who are trying to figure out how this newfangled concept called cloud computing will work on the operational level, GAE is as good a place as any to start. And Beekman offered plenty to think about.
Most of the time was spent not on GAE, the details of which, Beekman admitted, are a bit anticlimactic. But, oh, was the conversation lively concerning the architectural issues that GAE raised, which is to say the issues that cloud computing in general brings up.
Basically, GAE is a Google-hosted platform that can run applications written in Python. (Other languages — such as PHP, Java and Ruby — are being considered.) With the downloadable software development kit (SDK) and a copy of the Python runtime, you develop your application on a local machine and then upload it to Google. Google will run the app and worry about bandwidth, CPU and storage issues. Google provides a dashboard that allows you to keep track of how often the application runs.
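To give a sense of what that looks like in practice, here is a minimal sketch of an app built on the webapp framework that ships with the SDK (the file and handler names are our own illustration):

    # helloworld.py -- a minimal App Engine app using the SDK's webapp framework
    from google.appengine.ext import webapp
    from google.appengine.ext.webapp import util

    class MainHandler(webapp.RequestHandler):
        def get(self):
            # Answer HTTP GET requests on the URL mapped below
            self.response.out.write('Hello from App Engine')

    application = webapp.WSGIApplication([('/', MainHandler)], debug=True)

    def main():
        util.run_wsgi_app(application)

    if __name__ == '__main__':
        main()

Pair that with an app.yaml file that maps URLs to the script, test it locally with the SDK's dev_appserver.py, and push it live with appcfg.py update. From there, the dashboard takes over.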
Beekman noted that the app engine could not be used for every enterprise application. "It would be presumptuous to say that everything you need to do with every workload could be done by Google App Engine," Beekman said. "If you need a legacy stack to run some batch processing, this isn't the place to do that."
But GAE is well-suited to transaction-based Web applications. And a surprisingly large number of Web applications fit that profile.
What is the advantage of using GAE instead of running those Web apps in-house? Scalability and costs, Beekman said. "It's all about building an application and deploying it quickly without investment in hardware or licensing," he said.
Usually, when an organization needs a new application, the project team building that app must requisition the necessary equipment from the IT department — Web servers, application servers and database servers. That approach can take a considerable investment of time and money before the first user is even served. Worse yet, the development team never knows how many servers it will actually need.
Beekman offered a few numbers for consideration. Say you're setting up a Web application for 5,000 to 10,000 users. Conservatively speaking, you'd need three Web servers, two app servers and two database servers. Between buying the equipment and running it for three years, the total cost for the organization would be about $500,000.
The initial outlay for setting up that app through GAE? Zero, at least as far as hardware and software licensing costs are concerned.
Beekman didn't offer a long-term comparison of the costs of hosting an application in-house versus running it on GAE. But if you know the use characteristics of the app you plan to run, you can do a quick cost comparison. With GAE running in full-bore production mode, the bill accrues at 10 cents per CPU core hour, 10 cents per gigabyte of bandwidth inbound and 12 cents per gigabyte outbound, and about 15 cents per gigabyte of data stored.
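Plugging those rates into a few lines of Python makes the back-of-the-envelope math concrete; the usage figures below are invented for illustration, not numbers from Beekman:

    # Rough monthly GAE bill at the rates quoted above.
    # All usage figures are assumptions for illustration only.
    CPU_RATE, IN_RATE, OUT_RATE, STORE_RATE = 0.10, 0.10, 0.12, 0.15

    cpu_hours = 2000              # CPU core hours per month (assumed)
    gb_in, gb_out = 50.0, 200.0   # bandwidth in/out, in GB (assumed)
    gb_stored = 100.0             # data stored, in GB (assumed)

    monthly = (cpu_hours * CPU_RATE + gb_in * IN_RATE +
               gb_out * OUT_RATE + gb_stored * STORE_RATE)
    print('Monthly bill: $%.2f' % monthly)             # $244.00
    print('Three-year total: $%.2f' % (monthly * 36))  # about $8,800

Under those admittedly arbitrary assumptions, three years on GAE comes to well under $10,000, against the $500,000 in-house estimate above.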
However, beyond comparing those basic numbers, GAE also challenges how efficiently any IT department can respond to changes in utilization, Beekman said.
In the home-grown approach, if you buy too many servers for your app, you're wasting money because those servers are sitting idle. But if you order too few servers, then your users will experience slow — or nonexistent — service, which depresses the whole value of the service. Explained that way, Beekman made it seem as though running apps in-house is a losing proposition, though, in fairness, a shop that uses virtualization could shave a lot of that excess cost and underutilization.
In contrast, GAE can scale to however many users it serves. "That's the beauty of this," Beekman said. "You can put something out there, and whether it is a dirt path or a 12-lane highway, the platform will expand to your needs."
Computing, bandwidth and storage capacity are added on the fly in an automated fashion. The program manager doesn't have to designate those resources. Google adds them automatically, keeps track of how much you use and bills you accordingly. You can set limits on how much you want to pay each month.
Thus far, about 45,000 apps have been built on GAE, and about 10 million developers have registered for the service. Because the SDK is open source, any new apps can be moved to other platforms, even internal ones, if Google's pricing grows too demanding. And data storage could be redirected to MySQL or some other database, Beekman said.
Although GAE's value proposition does sound fine and dandy, the audience had more than a few questions, many of which highlighted the limitations of using GAE, at least in its current incarnation. However, such concerns could easily arise over any cloud-based offering — not only Google's but those from Amazon or Terremark.
For instance, when you keep your data in the cloud, where does it actually reside? Many organizations have regulations that prohibit storing their data out of the country. Also, knowing where the data is stashed is important in continuity-of-operations planning. Someone in the audience said his organization created a COOP plan that requires that data and apps reside in two separate geographic locations. Should a flood or earthquake or some other natural disaster wipe out the first data center, the second location would keep all the material safe and dry and the services running.
Here's the problem: Google does not divulge where it keeps your data, and the company makes no promises that your data will reside in the U.S. or be geographically dispersed.
In fact, the whole idea of keeping data in two locations would probably seem quaint to Google engineers.
The complicating issue is how Google saves data. When most people think of data being saved, they think of data being committed to a single database. But Google uses a more distributed approach, accomplished through a combination of the Bigtable distributed storage system and the Google File System. A Google engineer in the audience said the company considers a piece of data to be written only when it is written to three separate disks and the index is updated.
In other words, Google data is probably parsed out across multiple locations worldwide. And given how reliable the Google search is, that approach seems to work, even if pinpointing the geographic location of a piece of data is pretty difficult. But although that is probably the most secure approach engineering-wise, Google isn't talking in the language of the enterprise customer that needs to comply with a requirement that its data reside in two geographically distinct locations.
Beekman said Google would probably need to offer terms that are a bit more concrete for the enterprise market. Trusting in the technical goodness of Google in lieu of contract-enforceable specifics probably wouldn't cut the mustard with most government procurement officers, even as Google gives their managers the warm fuzzies.
Google's distributed approach to storing data might also force some data managers to rethink how they store their information. If the data does not reside in a relational database, the relationships and other logic that used to be inherent in the database must be defined in the application layer instead.
"The app engine is not a relational database. You have to think differently," Beekman said.
The security issue came up as well. How do you know that your data and application can't be intercepted by other parties or even by other applications running in the GAE environment? Data separation was one of Gartner's seven security considerations for organizations thinking about moving to the cloud.
All the GAE apps are isolated, the Google engineer in the audience insisted. We later caught up with Google for more clarification: GAE starts each app as a single-threaded process. As traffic increases, the app is cloned into multiple processes. The various services that Google offers through GAE — such as the data store, data caching or access to e-mail — are accessible by remote procedure calls in a format called Protocol Buffers.
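From the developer's side, those RPCs hide behind ordinary Python calls. Here is a short sketch of two of the services mentioned, data caching and e-mail; the function names and addresses are invented for illustration:

    from google.appengine.api import mail, memcache

    def cached_report(key, build_report):
        # Data-caching service; each call is a Protocol Buffers RPC underneath
        report = memcache.get(key)
        if report is None:
            report = build_report()
            memcache.set(key, report, time=3600)  # keep for one hour
        return report

    def mail_report(recipient, report):
        # E-mail service; the sender must be an address the app may use
        mail.send_mail(sender='reports@example.com', to=recipient,
                       subject='Nightly report', body=report)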
To keep apps from interfering with one another, each app is sandboxed through various techniques. Google won't divulge most of those techniques, but the company has said that Python libraries that could be used for snooping — the ones that rely on native code or allow a program to open a socket — have been removed from the library set available through GAE.
Uptime was another concern audience members voiced. Google offers no specific guarantee of how reliable you can expect the service to be, what the industry calls a service-level agreement (SLA). When your users come calling, you want to make sure the app is ready. The discussion of downtime is pertinent given that one Google service has had a few unscheduled outages of late.
Beekman argued that, to a certain extent, uptime concerns for cloud computing are a bit overblown. Whatever downtime, say, Gmail users encounter in a month might be small compared with that of the average in-house Microsoft Exchange environment. Your in-house IT staff probably isn't as well trained in keeping the servers running as the average Google administrator is.
It is worth noting that, with the paid version of Google Apps, the company guarantees that your apps will be available at least 99 percent of the time in any given calendar month. Maybe the company will eventually apply the same SLA to GAE.
Ultimately, using Google to house data and apps takes a leap of trust, Beekman said, and he seemed to believe it is one worth taking.
"It's the same as moving your money out of the mattress and into a bank," Beekman said. "The give is trust, the get is the value proposition."
Posted on Mar 03, 2009 at 7:05 PM | 1 comment