The Defense Information Systems Agency will reopen to competition a sole-source big data storage contract it recently awarded, after potential competitors emerged to pursue the advanced cloud services work.
DISA said in a May 20 notice that it would cancel the $45 million non-competitive contract it awarded in April to Alliance Technology Group, a small disadvantaged business based in Hanover, Md.
The contract called for developing a large data object storage (LDOS) cloud service capable of storing the torrent of imagery files generated by Defense intelligence sensors and systems.
In justifying the original award, DISA had said it needed the LDOS technology because it lacked capacity in its own data centers to build it and did not have the funding to purchase the hardware to do the job.
The contract for a secure intelligence, surveillance and reconnaissance (ISR) storage cloud called for systems capable of handling an exabyte, or 1 million terabytes, of data, some of it generated by drones and other advanced data-gathering technologies.
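The storage scale involved is worth making concrete: an exabyte is a million terabytes, or 10^18 bytes in decimal units. A minimal sketch of the arithmetic (the video bit rate below is an illustrative assumption, not a figure from the contract):

```python
# Scale of the proposed ISR storage cloud, in decimal (SI) units.
TERABYTE = 10**12   # bytes
EXABYTE = 10**18    # bytes; 1 exabyte = 1 million terabytes

terabytes_per_exabyte = EXABYTE // TERABYTE
print(f"{terabytes_per_exabyte:,} terabytes per exabyte")

# Hypothetical: hours of full-motion video one exabyte could hold,
# assuming a 5 Mbit/s sensor stream (an illustrative rate only).
stream_bps = 5 * 10**6
hours = (EXABYTE * 8 / stream_bps) / 3600
print(f"about {hours:.2e} hours of video")
```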
DISA wanted the cloud to be capable of storing full-motion video, including data from the Air Force’s Gorgon Stare surveillance sensor system.
In canceling the contract, DISA said the response period following the award to ATG "provided an opportunity for industry to respond to the requirement," and that "based upon capability statements and responses received, DISA plans to pursue competitive means through the National Security Agency Acquisition Resource Center to satisfy the requirement."
Posted on May 23, 2013 at 6:51 AM
Using telemedicine to reduce costs and improve treatment outcomes for remote patients has been in the works since the early 1990s, and it has lately gained traction with public-sector agencies. The Veterans Affairs Department tested a system in 2011, and the Army tested a 4G battlefield system last year.
Employing telemedicine in correctional facilities also has obvious benefits, giving states and local jurisdictions the ability to get health consultation and treatment to inmates without the cost of securely transporting them to medical facilities. Adoption so far has been slow, but that is changing as state budgets tighten, prisons become ever more crowded and the inmate population ages. Combined with increasingly mature mobile, health IT, videoconferencing and communications technologies, those pressures are prompting more states to launch or outsource telemedicine programs.
In June, the Colorado Department of Corrections and the Denver Health Medical Center will launch a pilot program using high-definition video conferencing for inmates who need consultations in rheumatology, infectious disease, orthopedics and general surgery, according to an article in Government Technology. Because both Denver Health and the Colorado Department of Corrections already have modern video conferencing systems, the article said, neither party faces up-front costs for the program, an expense that has been cited as a barrier to entry for smaller hospitals and prisons.
In Wyoming, where almost everywhere is remote, telehealth services for prison inmates help address the challenges of distance and the distribution of doctors. According to HealthcareITnews, Wyoming's Prison Health Services department has been able to dramatically increase its range of clinical services, including mental health and specialist care. In 2011, approximately 2,000 physician visits were conducted via remote connection.
Likewise, the Department of Corrections in Louisiana is on the verge of signing a contract to provide 17,000 annual checkups to thousands of inmates, increasing telemedicine by nearly 600 percent, a report on WBRZ stated.
Mental health services to prisoners also can be delivered via face-to-face consultations over mobile devices. Wind Currents, one of the providers of these systems, estimates a state can save $30,000 to $40,000 a month with its system, which includes a hosted Voice over IP platform, video software and special videophones, according to the Mobiledia website.
Posted on May 21, 2013 at 7:24 AM
Equinix has added an eighth data center to its sprawling campus in Ashburn, Va., that serves as a key hub for IP traffic on the East Coast.
The center, called DC11, provides colocation, International Business Exchange (IBX) and IT infrastructure services to government agencies and technology companies in the national capital region.
The Equinix campus was built north of a former UUNet facility that was a key hub in MAE-East, the Internet’s first major interconnection point, according to an article on Data Center Knowledge. The company’s first data center opened in Ashburn in 1998, “providing a ‘carrier-neutral’ facility where companies could gain access to Internet backbones operated by UUNet and AT&T,” the article stated.
Since then, the Ashburn campus has become known as the IP hub of the mid-Atlantic seaboard, Jim Farmer, Equinix's director of Americas marketing, said in the company's blog. It is the largest IP traffic hub in North America and the gateway to Europe, he added.
The new DC11 has space for 1,200 cabinets, with capacity for an additional 1,800 cabinets. Phase 1 offers 42,800 square feet of customer floor space, but the facility will provide 232,000 gross square feet in total, with additional capacity being built out in phases, the company said.
Equinix treats its eight buildings on the Ashburn campus as one virtual building, with fiber connectivity between each, according to Raouf Abdel, the company's regional operating chief for the Americas, as reported by Data Center Dynamics.
In support of its government customers, Equinix offers the following features:
- Connection to more than 900 network carriers, including all of the GSA Networx and WITS providers.
- Interconnection to more than 300 cloud providers and more than 500 managed IT service providers.
- Certification and accreditation support for federal compliance standards and security controls such as FISMA, DIACAP and NIST 800-53.
- Secure data center facilities featuring secure cage spaces, 24x7x365 on-site security guards, multiple levels of biometric readers, CCTV, access control lists, motion detectors and comprehensive procedures for screening inbound deliveries.
- Back-up services through a comprehensive global service-level agreement that includes 99.999 percent power availability, 99.99 percent temperature and humidity availability, and 99.99 percent cross-connect availability guarantees.
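Those availability guarantees map directly to allowed downtime per year; "five nines" for power permits only minutes annually. A quick sketch of that conversion:

```python
# Convert an availability percentage into maximum downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def max_downtime_minutes(availability_pct: float) -> float:
    """Maximum minutes of downtime per year at a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.999, 99.99):
    print(f"{pct}% availability allows "
          f"{max_downtime_minutes(pct):.1f} min/year of downtime")
```

So the 99.999 percent power guarantee allows roughly five minutes of outage a year, while the 99.99 percent guarantees allow about 53.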
DC11 also meets Federal Data Center Consolidation Initiative mandates for efficiency, as well as industry compliance frameworks such as HIPAA and PCI, according to Data Center Knowledge.
Posted on May 14, 2013 at 11:18 AM
The jury is in: Linux is the benchmark for open-source software quality, according to a study of defects occurring in the software development process. The study was started in partnership with the Homeland Security Department but is now managed by Coverity.
The finding is based on an analysis by the Coverity Scan Service, which over more than seven years has analyzed 850 million lines of code from more than 300 open-source projects, including Linux, PHP and Apache.
Using a measure of defects per 1,000 lines of code, the study found Linux consistently recorded defect densities of less than 1.0, with versions scanned between 2011 and 2012 having defect rates below 0.7.
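The defect-density metric itself is straightforward: defects found divided by thousands of lines of code. A minimal sketch, with illustrative numbers that are not figures from the report:

```python
def defect_density(defects: int, lines_of_code: int) -> float:
    """Defects per 1,000 lines of code, the metric used in the scan."""
    return defects / (lines_of_code / 1000)

# Illustrative only: 350 defects found in 500,000 lines of code
# yields a density of 0.7, at the high end of the Linux range cited.
print(defect_density(350, 500_000))
```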
Researchers also found high-risk defects were prevalent in the software development process, with 36 percent of defects identified by the firm’s 2012 report classified as a “threat to overall software quality and security if undetected.”
The most common high-risk defects included memory corruption, illegal memory access and resource leaks, “all difficult to detect without automated code analysis,” Coverity reported.
In general, Coverity found the average quality of open-source software was virtually equal to that of proprietary software. Open-source projects showed an average defect density of 0.69, the study found, a dead heat with the 0.68 for proprietary code developed by enterprise customers of the service.
Although the average rates of defects in the two types of code are nearly identical, researchers did find a difference in quality trends based on the size of the development project.
For instance, as proprietary software coding projects passed 1 million lines of code, defect density dropped from 0.98 to 0.66, a sign that software quality rises in proprietary projects of that size.
That trend reversed itself in the case of open-source code, researchers found. Open-source projects between 500,000 and 1 million lines of code had a defect density of 0.44, which grew to 0.75 when those projects went over the 1 million-line mark.
Coverity said the discrepancy was caused by “different dynamics between the open source and proprietary development teams” and differences in when the teams “implemented formalized development processes.”
Posted on May 10, 2013 at 5:15 AM
Cray is launching a new line of inexpensive supercomputers providing speeds of 22 to 176 teraflops. Targeting the "technical enterprise" market, the XC30-AC systems start at $500,000 and give users the reliability and resiliency of a high-end supercomputer, but in a smaller package and at a lower total cost of ownership, the company said.
The Cray XC30-AC is not only targeting customers in markets new to supercomputing, but also a broader class of users in more traditional high-performance computing (HPC) markets, such as academia, defense and Earth sciences.
Although the XC30-AC runs at a fraction of the speed of Titan, the Energy Department's 17.59-petaflop supercomputer, its processors are a step up from those used in Titan, according to an article on Ars Technica. Whereas Titan uses a mix of AMD Opteron and Nvidia processors with Cray's proprietary Gemini interconnect, the XC30-AC uses Intel Xeon processors and the Aries interconnect, which is even faster than Gemini, the article stated.
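To put the two machines on the same scale (one petaflop is 1,000 teraflops), the top-end XC30-AC delivers roughly 1 percent of Titan's peak. A quick sketch using the figures quoted above:

```python
# Compare peak speeds quoted in the article.
TITAN_PETAFLOPS = 17.59
XC30_AC_TOP_TERAFLOPS = 176

# 1 petaflop = 1,000 teraflops
fraction = XC30_AC_TOP_TERAFLOPS / (TITAN_PETAFLOPS * 1000)
print(f"XC30-AC top model is {fraction:.1%} of Titan's peak")
```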
According to the company’s spec sheet, the Cray XC30-AC supercomputer leverages the same compute node, compute blade and daughter card architecture as the Cray XC30 liquid-cooled supercomputer. But the AC models are air-cooled and have physically smaller compute cabinets, with 16 vertical blades per cabinet.
“Cray has a history of leveraging the supercomputing technologies featured in their high-end systems and economically repackaging those same technologies to offer solutions to fit the needs of HPC users with smaller budgets," said Earl Joseph, IDC program vice president for HPC.
Oak Ridge National Laboratory's Titan supercomputer was originally a Cray XT5 system named Jaguar. It was upgraded in 2012 to leverage the computing power of graphical processing units.
Posted on May 09, 2013 at 7:07 AM