The jury is in: Linux is the benchmark for open-source software quality, according to a study of defects occurring in the software development process. The study was started in partnership with the Homeland Security Department but is now managed by Coverity.
The finding is based on an analysis by the Coverity Scan Service, which for more than seven years analyzed 850 million lines of code from more than 300 open-source projects, including Linux, PHP and Apache.
Using a measure of defects per 1,000 lines of code, the study found Linux consistently recorded defect densities of less than 1.0, with versions scanned between 2011 and 2012 having defect rates below 0.7.
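For readers unfamiliar with the metric, defect density is simply the defect count divided by thousands of lines of code. A minimal sketch in Python, using hypothetical counts chosen to land on the 0.7 figure reported for recent Linux versions:

```python
# Illustrative only: how a defects-per-KLOC figure is derived.
def defect_density(defects: int, lines_of_code: int) -> float:
    """Return defects per 1,000 lines of code (KLOC)."""
    return defects / (lines_of_code / 1000)

# Hypothetical example: 7,000 flagged defects across 10 million
# lines of code works out to the 0.7 defects-per-KLOC rate the
# study reports for Linux versions scanned in 2011-2012.
print(defect_density(7_000, 10_000_000))  # 0.7
```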
Researchers also found high-risk defects were prevalent in the software development process, with 36 percent of defects identified by the firm’s 2012 report classified as a “threat to overall software quality and security if undetected.”
The most common high-risk defects included memory corruption, illegal memory access and resource leaks, “all difficult to detect without automated code analysis,” Coverity reported.
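Memory corruption and illegal memory access are hard to show outside C or C++, but a resource leak, the third class named above, is easy to sketch. A hypothetical Python example of the kind of pattern a static analyzer flags (the functions are illustrative, not from the study):

```python
# Leaky pattern: the file handle is never explicitly closed, so it
# lingers until garbage collection and is lost outright if read()
# raises partway through in a long-running process.
def read_config_leaky(path):
    f = open(path)
    return f.read().splitlines()

# Fixed pattern: the context manager closes the handle on every
# code path, including when read() raises.
def read_config(path):
    with open(path) as f:
        return f.read().splitlines()
```

Leaks like this rarely surface in short test runs; they show up under sustained load, which is why automated analysis tends to catch them before humans do.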
In general, Coverity found the average quality of open-source software was virtually equal to that of proprietary software. Open-source projects showed an average defect density of 0.69, the study found, a dead heat with the 0.68 for proprietary code developed by enterprise customers of the service.
Although the average rates of defects in the two types of code are nearly identical, researchers did find a difference in quality trends based on the size of the development project.
For instance, as proprietary software coding projects passed 1 million lines of code, defect density dropped from 0.98 to 0.66, a sign that software quality rises in proprietary projects of that size.
That trend reversed itself in the case of open-source code, researchers found. Open-source projects between 500,000 and 1 million lines of code had a defect density of 0.44, which grew to 0.75 when those projects went over the 1 million line mark.
Coverity said the discrepancy was caused by “different dynamics between the open source and proprietary development teams” and differences in when the teams “implemented formalized development processes.”
Posted on May 10, 2013 at 9:39 AM
Cray is launching a new line of inexpensive supercomputers providing speeds of 22 to 176 teraflops. Targeting the “technical enterprise” market, the XC30-AC systems start at $500,000 and give users the reliability and resiliency of a high-end supercomputer, but in a smaller package and at a lower total cost of ownership, the company said.
The Cray XC30-AC is not only targeting customers in markets new to supercomputing, but also a broader class of users in more traditional high-performance computing (HPC) markets, such as academia, defense and Earth sciences.
Although the XC30-AC runs at a fraction of the speed of Titan, the Energy Department’s 17.59-petaflop supercomputer, its processors are a step up from those used in Titan, according to an article on Ars Technica. Whereas Titan uses a mix of AMD Opteron and Nvidia processors along with Cray's proprietary Gemini interconnect, the XC30-AC uses Intel Xeon processors and the Aries interconnect, which is even faster than Gemini, the article stated.
According to the company’s spec sheet, the Cray XC30-AC supercomputer leverages the same compute node, compute blade and daughter card architecture as the Cray XC30 liquid-cooled supercomputer. But the AC models are air-cooled and have physically smaller compute cabinets, with 16 vertical blades per cabinet.
“Cray has a history of leveraging the supercomputing technologies featured in their high-end systems and economically repackaging those same technologies to offer solutions to fit the needs of HPC users with smaller budgets," said Earl Joseph, IDC program vice president for HPC.
Oak Ridge National Laboratory’s Titan supercomputer was originally Jaguar, a Cray XT5 system; it was upgraded in 2012 to leverage the computing power of graphics processing units.
Posted on May 09, 2013 at 9:39 AM
Government agencies are making progress on becoming more economical by gradually consolidating data centers and moving their more prized -- and expensive -- data sets to the cloud.
Even so, many agencies find themselves losing the battle to contain data now growing at exponential rates. That’s the situation the state of Nebraska was in when it began to look for more efficient storage systems, including new virtual tape technologies to help automate more of its data storage tasks.
Nebraska’s Office of the Chief Information Officer found its legacy storage systems and practices had become a bottleneck to keeping up with the demands of providing data services to state program agencies.
Fred Lupher, a systems programming manager, said the OCIO’s “DASD farm” (direct access storage devices) was backed up to tape every weekend, accounting for 1,800 separate tape mounts, while another 900 tapes had to be copied. All of the cartridges -- 2,700 in all -- “had to be packed in containers and manually transported off-site for safe-keeping,” he told IBM Systems Magazine.
The routine started on Sunday and ran through mid-Monday. “It was just crazy,” he said. During the week, the OCIO often had to deal with 10,000 tape mounts, which created processing backlogs when not enough drives were available.
That was especially the case after hours, Lupher said, during batch and backup processing. What’s more, the number of tape mounts affected the agencies and offices served by the OCIO, which were being charged $1.40 per mount, not including storage.
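A back-of-the-envelope chargeback figure makes the pain concrete (the weekly total below is an assumption extrapolated from the mount counts Lupher cited, not an OCIO number):

```python
# Rough weekly chargeback from mount fees alone, using the
# figures quoted above; storage charges would come on top.
per_mount_fee = 1.40    # dollars per mount, per the OCIO
weekly_mounts = 10_000  # approximate weekday total Lupher cited
print(f"${per_mount_fee * weekly_mounts:,.2f}")  # $14,000.00
```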
In looking for a remedy, the OCIO considered technologies that would improve tape backup as well as disaster recovery. After putting out a request for proposals, Nebraska picked a solution that included an IBM System Storage TS7740 Virtual Tape Server (VTS) loaded with 256 virtual drives, an IBM automated tape server and 12 IBM TS1120 tape drives.
The solution also included a Virtual Data Recovery tool from Open Tech Systems for disaster recovery. The tool identifies all data sets that have been written to the virtual tape server during the previous day and copies them to tape. Using it, the OCIO says it can now transport a single cartridge — containing 2,000 to 4,000 data sets — on a daily basis.
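Conceptually, the tool's daily cycle is a date filter followed by a copy. A hypothetical sketch of that logic -- the real Open Tech Systems product is mainframe software, so every name below is illustrative:

```python
from datetime import date, timedelta

def select_yesterdays_datasets(catalog):
    """Pick the data sets written to the virtual tape server yesterday.

    `catalog` is assumed to be an iterable of (name, write_date)
    pairs; the real tool reads the tape management catalog instead.
    """
    yesterday = date.today() - timedelta(days=1)
    return [name for name, written in catalog if written == yesterday]

def copy_to_export_cartridge(datasets):
    """Stand-in for the copy step that fills the single daily cartridge."""
    for name in datasets:
        print(f"copying {name} to export cartridge")
```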
Altogether, according to IBM, the OCIO has been able to consolidate 40,000 cartridges down to 300. Improved security was another dividend. Earlier, tape media was copied and put in a container called a turtle case and carried off-site. Now, encryption is carried out as data is written to tape at the tape drive level.
Program analysts also are using the new technology to access data that was previously consigned to secondary storage media such as DASD. The virtual elimination of physical tape handling has allowed tape library employees to be trained and moved into other roles within the organization.
Altogether, storage requirements have been cut, processing time has decreased, staffing for tape mounts and handling has been nearly eliminated, and a disaster-recovery plan has been put into place, according to the OCIO. “We couldn’t have asked for more,” Lupher said.
Posted on May 09, 2013 at 9:39 AM
On April 21, NASA sent three PhoneSats -- Alexander, Graham and Bell -- into orbit to test the feasibility of small, inexpensive satellites assembled from off-the-shelf components. For the week the miniature satellites were in orbit, they transmitted health data (battery levels, temperatures, and magnetometer and accelerometer readings) and used their cameras to take pictures of Earth. The PhoneSats then used a UHF radio beacon to transmit data and images via bit-encoded packets to multiple ground stations.
Each of the picture packets carried a piece of the larger image. As the data became available, NASA invited ham radio operators to help piece together larger photos from the data packets using PhoneSat’s decoder. As packets were decoded, radio operators uploaded them to the PhoneSat website.
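Conceptually, reassembly is a sort-and-concatenate job: each packet carries a sequence number locating its piece within the image. A hypothetical sketch -- NASA's actual packet format is not documented here:

```python
def reassemble_image(packets):
    """Rebuild an image from decoded packets.

    `packets` is assumed to be a list of (sequence_number, payload)
    tuples gathered from many ground stations. Duplicates reported
    by multiple receivers collapse, and payloads are joined in order.
    """
    unique = dict(packets)            # later duplicates overwrite earlier
    ordered = sorted(unique.items())  # sort by sequence number
    return b"".join(payload for _, payload in ordered)
```

A real decoder would also flag missing sequence numbers, which is one reason NASA wanted packets from as many operators as possible.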
On the second day of the mission, Bell and Graham took 100 pictures and transmitted them as .webp images, which were then converted into .png files using Google’s WebP converter. The WebP format, according to Google, produces images that are smaller in file size and richer than .jpg or .png files.
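That conversion step maps onto dwebp, the command-line decoder that ships with Google's libwebp package and writes .png by default. A minimal sketch with placeholder file names:

```python
import subprocess

# Decode a received WebP frame to PNG; dwebp outputs PNG by default.
subprocess.run(
    ["dwebp", "phonesat_frame.webp", "-o", "phonesat_frame.png"],
    check=True,
)
```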
"Three days into the mission we already had received more than 300 data packets," said Alberto Guillen Salas, an engineer at Ames and a member of the PhoneSat team. "About 200 of the data packets were contributed by the global community and the remaining packets were received from members of our team with the help of the Ames Amateur Radio Club station, NA6MF.”
Working with ham radio operators, NASA researchers demonstrated “citizen science” -- crowd-sourced research conducted in whole or in part by amateur or nonprofessional scientists, NASA officials said.
According to NASA, the PhoneSats “deorbited” on April 27 and burned up in Earth's atmosphere as predicted.
Posted on May 06, 2013 at 9:39 AM
The first full-scale smart grid is up and running in Florida, networking 4.5 million smart meters and more than 10,000 other devices.
The $800 million project by Florida Power & Light was completed last week, with the promise of fewer and shorter power outages and lower electric bills for customers, MIT Technology Review reports.
Many utilities have been installing smart meters and other components of a smart grid — parts of FPL’s grid have been operating for more than a year — but this is the first time it’s all been tied together, the article said.
The smart meters, which have replaced traditional meters in homes and businesses, use radio frequencies to communicate with automated feeder switches and other devices on poles and power lines, FPL’s Bryan Olnick wrote in a post on the utility’s website.
FPL, which serves 4.6 million customers in south Florida, said some of the benefits of the grid include:
- Real-time information on the health and performance of the electric grid.
- Ability to identify outages and diagnose their causes so FPL can restore power faster.
- Verification when power is restored.
- Early warning of power issues to enable rerouting electricity around trouble spots, thus confining outages to smaller areas.
- Remote communications with FPL through advanced technology.
- Greater information for FPL customers about their energy use so they can make smart decisions about conserving electricity.
"This technology truly is transforming how we create, transport and deliver electricity," FPL president Eric Silagy said at an event marking the project’s completion. "While we're marking important milestones today, this is just the beginning.”
The development of a smart electric grid, providing a two-way flow of power and data, is a national effort prompted by the Energy Independence and Security Act of 2007. FPL was one of six utilities in the country that received a $200 million grant from the Energy Department for a smart grid and began work on it in 2009. FPL provided the rest of the funding.
The National Institute of Standards and Technology has developed a guide, Framework and Roadmap for Smart Grid Interoperability, to help with the effort.
Posted on May 02, 2013 at 9:39 AM