The importance of keeping data centers cool has been getting some attention lately, particularly in light of some innovative new ways of keeping servers running under optimal temperatures.
Microsoft this week experienced what can happen when things get too hot. Users of its Hotmail and new Outlook.com e-mail services, along with its SkyDrive file-hosting service, were out of luck for 16 hours March 12-13 during a service disruption the company blamed on an overheated data center.
The outage hit at 4:35 p.m. EDT March 12 after the company performed a routine firmware update in its data center facility, according to a blog post by Microsoft Vice President Arthur de Haan. Although the update had been done before without a hitch, de Haan wrote, it “failed in this specific instance in an unexpected way.”
The result: “a rapid and substantial temperature spike in the data center,” he wrote.
It got hot enough to trigger the company’s safeguards, which prevent access to mailboxes and block automatic failover, for a large number of servers in the part of the company’s data center that houses the infrastructure for Hotmail, Outlook.com and SkyDrive, he said. A data center team got to work on the problem, gradually restoring service, but full restoration took until 8:43 a.m. EDT March 13. De Haan said restoration atypically required human intervention in addition to the infrastructure software, which is why it took so long.
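De Haan’s post doesn’t describe the safeguard logic itself, but the behavior he outlines amounts to a temperature threshold check that fences off affected servers. A minimal sketch of that idea in Python, in which the threshold and the cluster methods are invented for illustration and are not Microsoft’s actual design:

```python
# Hypothetical sketch of a thermal safeguard. The threshold and the
# cluster API are invented for illustration; Microsoft has not
# published its actual design.
SAFE_MAX_INLET_TEMP_C = 35.0  # illustrative ceiling, in Celsius

def apply_thermal_safeguards(rack_temps_c, cluster):
    """Fence off a mailbox cluster when any rack runs too hot.

    Blocking client access and holding automatic failover keeps
    overheating hardware from being hammered while it recovers.
    Returns True if safeguards were engaged.
    """
    if max(rack_temps_c) > SAFE_MAX_INLET_TEMP_C:
        cluster.block_mailbox_access()     # hypothetical call
        cluster.hold_automatic_failover()  # hypothetical call
        return True
    return False
```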
The outage prompted a flurry of activity on Twitter, naturally, with some tweeters laying the blame at the company’s feet.
Microsoft actually has been one of the companies developing new ways to run cooler, more energy-efficient data centers; it is building a new outdoor data center in Virginia. It’s hard to say whether even new cooling techniques would have made a difference in this case, however, since it appears the firmware update gave the servers something of a fever.
The company is in the process of switching its Hotmail users to the new Outlook.com, which has a cleaner interface and more social media integration. The full switch will come this summer, though users can move to Outlook.com now.
Hotmail, which Microsoft bought in 1997, was the pioneering Web e-mail system and dominated the market for years, but it steadily lost ground and was eventually overtaken in the United States by Gmail and Yahoo Mail. (Hotmail is second to Gmail worldwide, and third behind Yahoo Mail and Gmail among U.S. users.)
The move to Outlook.com, which of course shares the name of the dedicated e-mail client so many people use, could boost the company’s Web mail prospects, but Hotmail’s travails were not lost on Twitter users during the outage.
In his post, de Haan said the restoration process gave the company an understanding of why the crash happened, and that the team is working to ensure it doesn’t happen again.
Posted on Mar 14, 2013 at 8:29 AM
Amazon Web Services is making access to its Virtual Private Cloud (VPC) automatic for new users of the Elastic Compute Cloud (EC2) service, giving them access to features such as multiple IP addresses and expanded security controls.
VPC, which lets customers create virtual networks of EC2 instances and virtual private network connections to their own data centers, has until now been a separate AWS offering. Now, new users will get a VPC by default, Amazon said in a blog post.
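For readers unfamiliar with the mechanics, standing up a VPC and launching an instance inside it takes only a few API calls. Below is a minimal sketch using Python and the boto3 SDK (a client library that postdates this announcement); the CIDR blocks and AMI ID are placeholders:

```python
import boto3

# Sydney region, the first to get default VPCs in this rollout.
ec2 = boto3.client("ec2", region_name="ap-southeast-2")

# Create the virtual network and a subnet inside it.
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]
subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")

# Launch an EC2 instance into the subnet. The AMI ID is a placeholder.
ec2.run_instances(
    ImageId="ami-xxxxxxxx",
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    SubnetId=subnet["Subnet"]["SubnetId"],
)
```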
The service is being rolled out by region, starting with Amazon’s Asia Pacific Region, based in Sydney, Australia, and its South America Region, based in São Paulo, Brazil, with others to be added one at a time, the company said.
The automatic access to VPCs applies only to new users. Current customers, including hundreds of U.S. government and other public-sector agencies, will have to either sign up for a new account or launch a service in a region they haven’t used before. (There are four regions in the United States: Virginia, Oregon, California and the GovCloud region, which is designed for sensitive government workloads.)
Regardless of how the service is launched, the VPC comes at no extra charge, AWS said. Once launched, customers will get features such as “assigning multiple IP addresses to an instance, changing security group membership on the fly and adding egress filters to your security groups,” according to AWS’ blog.
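Each of those three capabilities maps to a specific EC2 API operation. A sketch in Python with boto3, using placeholder resource IDs:

```python
import boto3

ec2 = boto3.client("ec2")

# Assign an additional private IP address to an instance's
# network interface (placeholder interface ID).
ec2.assign_private_ip_addresses(
    NetworkInterfaceId="eni-xxxxxxxx",
    SecondaryPrivateIpAddressCount=1,
)

# Change a running instance's security group membership on the fly.
ec2.modify_instance_attribute(
    InstanceId="i-xxxxxxxx",
    Groups=["sg-aaaaaaaa", "sg-bbbbbbbb"],
)

# Add an egress rule allowing outbound HTTPS. (To make the group
# restrictive, the default allow-all egress rule must also be revoked.)
ec2.authorize_security_group_egress(
    GroupId="sg-aaaaaaaa",
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```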
AWS, which in 2011 was accredited under the Federal Information Security Management Act, has proved popular with public-sector agencies moving services to the cloud, reportedly counting more than 300 government and 1,500 educational customers.
The availability of VPCs to customers is, according to TechCrunch’s Alex Williams, another indication of Amazon’s further push into the enterprise.
Posted on Mar 12, 2013 at 12:54 PM
The Office of Management and Budget might want to take note: Online mega auctioneer eBay has developed a new system of metrics that can reveal how even subtle changes in its vast data center and IT operations can affect the cost of a single online auction.
According to eBay, the dashboard-type system can show how many kilowatt-hours of energy eBay data centers use to process an auction; how many auctions it runs per server; or revenues per kilowatt hour of energy consumed.
Should the company want to know how many metric tons of carbon dioxide it used per transaction, the system could show it.
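eBay hasn’t published its formulas here, but each metric reduces to simple division over a reporting period. A sketch with entirely invented figures:

```python
# All figures are invented for illustration; eBay's actual numbers
# are not published in this post.
transactions = 1_000_000  # auctions processed in the period
energy_kwh = 50_000       # data center energy consumed, in kWh
servers = 2_000           # active servers
revenue_usd = 750_000     # revenue attributed to the period
co2_tonnes = 20.0         # metric tons of CO2 emitted

print(f"kWh per transaction:      {energy_kwh / transactions:.4f}")
print(f"Transactions per server:  {transactions / servers:,.0f}")
print(f"Revenue per kWh:          ${revenue_usd / energy_kwh:.2f}")
print(f"Tons CO2 per transaction: {co2_tonnes / transactions:.8f}")
```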
The micro-metrics are the product of a methodology eBay has developed over the last 18 months called Digital Service Efficiency (DSE) that allows the company to map the interconnection of performance factors that support its business services.
eBay says DSE provides a miles-per-gallon-type measurement of its data center operations, IT infrastructure and carbon footprint. DSE “dynamically tunes [eBay’s] infrastructure engine by systematically exposing the multi-dimensional knobs that developers, engineers and operators can turn to optimize all layers of the infrastructure stack,” according to an eBay white paper on the project.
“Tuning these variables in tandem is like solving a Rubik’s Cube,” the paper’s authors write. “Imagine each color as representing an independent variable (for example cost, performance, environmental impact and revenue), yet each is dependent on the others. It’s easy to solve the same color on one side of the cube independently, but solving all sides at the same time is difficult.”
“We can see more clearly now than ever before that our designs, purchases and operating decisions have real, tangible effects on the key indicators important to running a business: cost, performance, environmental impact and, ultimately, revenue,” the authors said.
While DSE is based on eBay’s operations, the methodology can be adapted to other organizations’ data center operations.
Agencies have been pursuing energy efficiency through data center consolidation and other measures. Even the world’s largest supercomputers are using low-power architectures.
Technology such as DSE, which provides fine-grained metrics on power use, could help further those efforts.
Posted on Mar 08, 2013 at 9:56 AM
The federal government is seeking help from the public for ideas to boost cybersecurity measures for the nation’s critical infrastructure.
The National Institute of Standards and Technology has issued a request for information for what it calls the first step in the process to develop a Cybersecurity Framework.
The Cybersecurity Framework will be a set of voluntary standards and best practices to guide industry in reducing cyber risks to the networks and computers that support critical infrastructure vital to the nation's economy, security and daily life, according to the NIST announcement published in the Federal Register.
The RFI comes amid reports of widespread hacking attacks by China on U.S. and foreign institutions, as revealed by security firm Mandiant.
NIST is calling for ideas, recommendations and other input from critical infrastructure owners and operators, federal agencies, state and local governments, standards-setting organizations and other interested parties. It’s looking for information about current risk management practices; use of frameworks, standards, guidelines and best practices; specific industry practices; and more.
In announcing the initiative prior to releasing the RFI, NIST said it will use the input gathered to identify existing consensus standards, practices and procedures that have been effective and that can be adopted by industry to protect its digital information and infrastructure from the full range of cybersecurity threats.
The framework will not dictate “one-size-fits-all” solutions, but will instead enable innovation by providing guidance that is technology-neutral and recognizes the different needs and challenges within and among critical infrastructure sectors, NIST said.
President Barack Obama called for the framework to reduce cyber risks in a Feb. 12 Executive Order on "Improving Critical Infrastructure Cybersecurity" for essential institutions such as power plants and financial, transportation and communications systems.
Stakeholder meetings are also part of the framework process. The first meeting will be held April 3 at NIST headquarters in Gaithersburg, Md. Registration information is available from NIST.
Comments are due by 5 p.m. Eastern Time on April 8, and should be e-mailed to email@example.com with the subject line: "Developing a Framework to Improve Critical Infrastructure Cybersecurity."
Posted on Feb 28, 2013 at 9:10 AM
Although mobile devices continue to gain rapid acceptance among the general public and government agencies, BYOD security remains a work in progress, as enterprises look for mobile device management (MDM) systems that secure the network, not just the devices.
F5 Networks has introduced a software solution, Mobile App Manager, that addresses that challenge by pushing security to the network, Networkworld reported. “Mobile computing is a network-centric compute model, meaning the network needs to play a bigger part in giving IT the control and security it had when IT owned the devices,” writes Networkworld’s Zeus Kerravala.
Mobile App Manager is a software-as-a-service offering that works with F5’s Big IP Access Policy Manager to securely connect applications to the user’s device. It separates personal data and usage from work-related content and functionality, allowing an enterprise to control its own functions without disabling personal apps or inspecting personal content, the company said.
Mobile App Manager “creates a secure footprint on the device for enterprise data and access only,” F5 said. “Each enterprise application is securely wrapped, so there is no way to use it incorrectly.”
Soon after MDM first appeared, risk management associated with mobile computing became a chief security concern at government agencies, GCN contributing writer Shawn McCarthy wrote, because each new device added to a government network introduced substantial risks.
Smart phones essentially bring their own network into a facility, and data stored on a device can easily leave that facility, he noted.
“IT managers must develop a framework to evaluate the mobile security needs of their organization, and launch their own enterprisewide security framework focusing on risk management — all while deciding how mobile application management will be handled within their organization,” McCarthy wrote.
But as Networkworld’s Kerravala points out, the industry is still in the early days of BYOD, and many entities have looked to mobile device management solutions to help them handle the influx of consumer devices.
He predicts not only the emergence of new security solutions but also MDM-focused mergers and acquisitions designed “to simplify the implementation of mobile unified communications and managed mobility solutions.”
Posted on Feb 26, 2013 at 8:46 AM