Equinix has added an eighth data center to its sprawling campus in Ashburn, Va., that serves as a key hub for IP traffic on the East Coast.
The center, called DC11, provides colocation, International Business Exchange (IBX) and IT infrastructure services to government agencies and technology companies in the national capital region.
The Equinix campus was built north of a former UUNet facility that was a key hub in MAE-East, the Internet’s first major interconnection point, according to an article on Data Center Knowledge. The company’s first data center opened in Ashburn in 1998, “providing a ‘carrier-neutral’ facility where companies could gain access to Internet backbones operated by UUNet and AT&T,” the article stated.
Since then, the Ashburn campus has become known as the IP hub of the mid-Atlantic seaboard, said Jim Farmer, Equinix's director of Americas marketing, in a company blog post. It is the largest IP traffic hub in North America and the gateway to Europe, he added.
The new DC11 has space for 1,200 cabinets, with capacity for an additional 1,800 cabinets. Phase 1 offers 42,800 square feet of customer floor space, but the facility will provide 232,000 gross square feet in total, with additional capacity being built out in phases, the company said.
Equinix treats its eight buildings on the Ashburn campus as one virtual building, with fiber connectivity between each, according to Equinix regional operating chief for the Americas Raouf Abdel, as reported by Data Center Dynamics.
In support of its government customers, Equinix offers the following features:
- Connection to more than 900 network carriers, including all of the GSA Networx and WITS providers.
- Interconnection to more than 300 cloud providers and more than 500 managed IT service providers.
- Certification and accreditation support for federal compliance standards and security controls such as FISMA, DIACAP and NIST SP 800-53.
- Secure data center facilities featuring secure cage spaces, 24x7x365 on-site security guards, multiple levels of biometric readers, CCTV, access control lists, motion detectors and comprehensive procedures for screening inbound deliveries.
- Back-up services through a comprehensive global service-level agreement that includes 99.999 percent power availability, 99.99 percent temperature and humidity availability, and 99.99 percent cross-connect availability guarantees.
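For context, those "nines" translate directly into allowed downtime per year. A quick sketch of the arithmetic (illustrative only, not Equinix's own SLA accounting):

```python
# Convert an availability percentage into allowed downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes(availability_pct):
    """Minutes per year a service may be down at a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

print(round(allowed_downtime_minutes(99.999), 2))  # five nines: ~5.26 minutes/year
print(round(allowed_downtime_minutes(99.99), 1))   # four nines: ~52.6 minutes/year
```

In other words, the 99.999 percent power guarantee allows only about five minutes of outage per year.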
DC11 also meets Federal Data Center Consolidation Initiative mandates for efficiency, as well as industry compliance frameworks such as HIPAA and PCI, according to Data Center Knowledge.
Posted on May 14, 2013 at 11:18 AM | 0 comments
The jury is in: Linux is the benchmark for open-source software quality, according to a study of defects occurring in the software development process. The study was started in partnership with the Homeland Security Department but is now managed by Coverity.
The finding is based on an analysis by the Coverity Scan Service, which for more than seven years analyzed 850 million lines of code from more than 300 open-source projects, including Linux, PHP and Apache.
Using a measure of defects per 1,000 lines of code, the study found Linux consistently recorded defect densities of less than 1.0, with versions scanned between 2011 and 2012 having defect rates below 0.7.
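Defect density here is just defects divided by thousands of lines of code. A minimal illustration (the project numbers below are made up, not drawn from the Coverity data set):

```python
def defect_density(defects, lines_of_code):
    """Defects per 1,000 lines of code (KLOC)."""
    return defects / (lines_of_code / 1000)

# A hypothetical project: 550 defects in 1,000,000 lines gives a density
# of 0.55 -- comfortably under the 1.0 threshold the study cites for Linux.
print(defect_density(550, 1_000_000))  # 0.55
```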
Researchers also found high-risk defects were prevalent in the software development process, with 36 percent of defects identified by the firm’s 2012 report classified as a “threat to overall software quality and security if undetected.”
The most common high-risk defects included memory corruption, illegal memory access and resource leaks, “all difficult to detect without automated code analysis,” Coverity reported.
In general, Coverity found the average quality of open-source software was virtually equal to that of proprietary software. Open-source projects showed an average defect density of 0.69, the study found, a dead heat with the 0.68 for proprietary code developed by enterprise customers of the service.
Although the average rates of defects in the two types of code are nearly identical, researchers did find a difference in quality trends based on the size of the development project.
For instance, as proprietary software coding projects passed 1 million lines of code, defect density dropped from 0.98 to 0.66, a sign that software quality rises in proprietary projects of that size.
That trend reversed itself in the case of open-source code, researchers found. Open-source projects between 500,000 and 1 million lines of code had a defect density of 0.44, which grew to 0.75 when those projects passed the 1 million-line mark.
Coverity said the discrepancy was caused by “different dynamics between the open source and proprietary development teams” and differences in when the teams “implemented formalized development processes.”
Posted on May 10, 2013 at 5:15 AM | 4 comments
Cray is launching a new line of inexpensive supercomputers providing speeds of 22 to 176 teraflops. Targeting the “technical enterprise” market, the XC30-AC systems start at $500,000 and give users the reliability and resiliency of a high-end supercomputer, but in a smaller package and at a lower total cost of ownership, the company said.
The Cray XC30-AC is not only targeting customers in markets new to supercomputing, but also a broader class of users in more traditional high-performance computing (HPC) markets, such as academia, defense and Earth sciences.
Although the XC30-AC runs at a fraction of the speed of Titan, the Energy Department’s 17.59-petaflop supercomputer, its processors are a step up from those used in Titan, according to an article on Ars Technica. Whereas Titan uses a mix of AMD Opteron and Nvidia processors linked by Cray's proprietary Gemini interconnect, the XC30-AC uses Intel Xeon processors and the Aries interconnect, which is even faster than Gemini, the article stated.
According to the company’s spec sheet, the Cray XC30-AC supercomputer leverages the same compute node, compute blade and daughter card architecture as the Cray XC30 liquid-cooled supercomputer. But the AC models are air-cooled and have physically smaller compute cabinets, with 16 vertical blades per cabinet.
“Cray has a history of leveraging the supercomputing technologies featured in their high-end systems and economically repackaging those same technologies to offer solutions to fit the needs of HPC users with smaller budgets," said Earl Joseph, IDC program vice president for HPC.
Oak Ridge National Laboratory’s Titan supercomputer was originally named Jaguar, a Cray XT5 system. It was upgraded in 2012 to leverage the computing power of graphics processing units.
Posted on May 09, 2013 at 7:07 AM | 0 comments
Government agencies are making progress on becoming more economical by gradually consolidating data centers and moving their more prized – and expensive – data sets to the cloud.
Even so, many agencies find themselves losing the battle to contain data now growing at exponential rates. That’s the situation the state of Nebraska was in when it began to look for more efficient storage systems, including new virtual tape technologies to help automate more of its data storage tasks.
Nebraska’s Office of the Chief Information Officer found its legacy storage systems and practices had become a bottleneck to keeping up with the demands of providing data services to state program agencies.
Fred Lupher, a systems programming manager, said the OCIO’s “DASD farm” (direct access storage devices) was backed up to tape every weekend, accounting for 1,800 separate tape mounts, while another 900 tapes had to be copied. All of the cartridges -- 2,700 in all -- “had to be packed in containers and manually transported off-site for safe-keeping,” he told IBM Systems Magazine.
The routine started on Sunday and ran through mid-Monday. “It was just crazy,” he said. During the week, the OCIO often had to deal with 10,000 tape mounts, which led to processing backlogs when not enough drives were available.
That was especially the case after hours, Lupher said, during batch and backup processing. What’s more, the number of tape mounts affected the agencies and offices served by the OCIO, which were being charged $1.40 per mount, not including storage.
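At those volumes, the per-mount charge adds up quickly. A rough back-of-the-envelope calculation (the mount count and rate are from the article; the weekly total is simple arithmetic, not a figure the OCIO reported):

```python
COST_PER_MOUNT = 1.40   # dollars, the OCIO's chargeback rate per mount
weekly_mounts = 10_000  # typical weekly load cited by Lupher

# Chargeback cost of mounts alone, before storage fees.
weekly_cost = weekly_mounts * COST_PER_MOUNT
print(f"${weekly_cost:,.2f} per week")  # $14,000.00 per week
```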
In looking for a remedy, the OCIO considered technologies that would improve tape backup as well as disaster recovery. After putting out a request for proposals, Nebraska picked a solution that included an IBM System Storage TS7740 Virtual Tape Server (VTS) loaded with 256 virtual drives, an IBM automated tape server and 12 IBM TS1120 tape drives.
The solution also included a Virtual Data Recovery tool from Open Tech Systems for disaster recovery. The tool identifies all data sets that have been written to the virtual tape server during the previous day and copies them to tape. Using it, the OCIO says it can now transport a single cartridge — containing 2,000 to 4,000 data sets — on a daily basis.
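The tool's daily pass can be pictured as a filter over write timestamps. A simplified sketch (the function and catalog layout here are hypothetical, invented for illustration; they are not the Open Tech Systems product's actual interface):

```python
from datetime import datetime, timedelta

def datasets_to_copy(catalog, now):
    """Return names of data sets written to the virtual tape server
    within the past day, i.e. candidates for the daily off-site copy."""
    cutoff = now - timedelta(days=1)
    return [name for name, written_at in catalog if written_at >= cutoff]

# Hypothetical catalog of (data set name, last write time) entries.
now = datetime(2013, 5, 9, 6, 0)
catalog = [
    ("PAYROLL.G0001V00", datetime(2013, 5, 9, 2, 30)),  # written overnight -> copy
    ("ARCHIVE.G0047V00", datetime(2013, 5, 1, 11, 0)),  # written a week ago -> skip
]
print(datasets_to_copy(catalog, now))  # ['PAYROLL.G0001V00']
```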
Altogether, according to IBM, the OCIO has been able to consolidate 40,000 cartridges down to 300. Improved security was another dividend. Earlier, tape media was copied and put in a container called a turtle case and carried off-site. Now, encryption is carried out as data is written to tape at the tape drive level.
Program analysts also are using the new technology to access data that was previously consigned to secondary storage media such as DASD. The virtual elimination of physical tape handling has allowed tape library employees to be trained and moved into other roles within the organization.
Altogether, storage requirements have been cut, processing time has decreased, staffing for tape mounts and handling has been nearly eliminated, and a disaster-recovery plan has been put into place, according to the OCIO. “We couldn’t have asked for more,” Lupher said.
Posted on May 09, 2013 at 5:45 AM | 0 comments
On April 21, NASA sent three PhoneSats -- Alexander, Graham and Bell -- into orbit to test the feasibility of small, inexpensive satellites assembled from off-the-shelf components. For the week the miniature satellites were in orbit, they transmitted health data (battery levels, temperatures, magnetometer readings, accelerometer readings) and used their cameras to take pictures of Earth. The PhoneSats then used a UHF radio beacon to transmit data and images via bit-encoded packets to multiple ground stations.
Each of the picture packets carried a piece of the larger image. As the data became available, NASA invited ham radio operators to help piece together larger photos from the data packets using PhoneSat’s decoder. As packets were decoded, radio operators uploaded them to the PhoneSat website.
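Reassembling an image from crowd-sourced packets amounts to ordering fragments by sequence number. A toy sketch of the idea (the packet format is invented for illustration and is not the actual PhoneSat protocol):

```python
def reassemble(packets):
    """Rebuild a payload from (sequence_number, chunk) pairs, which may
    arrive out of order from many different ground stations."""
    ordered = sorted(packets, key=lambda p: p[0])
    return b"".join(chunk for _, chunk in ordered)

# Fragments of one payload, received out of order by different operators.
received = [(1, b"ll"), (0, b"he"), (2, b"o")]
print(reassemble(received))  # b'hello'
```

In practice, packets received by multiple operators would also need de-duplication before joining, and missing sequence numbers would leave gaps in the image.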
On the second day of the mission, Bell and Graham took 100 pictures and transmitted .webp images that were then converted into .png files using Google’s WebP converter. WebP images, according to Google, are smaller in file size and richer than .jpg or .png files.
"Three days into the mission we already had received more than 300 data packets," said Alberto Guillen Salas, an engineer at Ames and a member of the PhoneSat team. "About 200 of the data packets were contributed by the global community and the remaining packets were received from members of our team with the help of the Ames Amateur Radio Club station, NA6MF.”
NASA researchers working with ham radio operators demonstrated "citizen science" -- crowdsourced research conducted in whole or in part by amateur or nonprofessional scientists, NASA officials said.
According to NASA, the PhoneSats “deorbited” on April 27 and burned up in Earth's atmosphere as predicted.
Posted on May 06, 2013 at 2:23 PM | 0 comments