Pulse

By GCN Staff


Sensors, wireless tech protect police dogs from heat stroke

Police and military dogs face many of the same dangers as their human partners. Many of these dogs, also known as K9s, fall victim to heat-related conditions such as heat stroke, which can result in death.

To combat K9 casualties, Massachusetts, Arizona and Texas law enforcement units have invested in a wireless monitoring system to convey the dog’s internal body temperature to its human partner. Data Sciences International and Blueforce Development Corp. have partnered to develop the new system.

The system continuously measures the K9's body temperature using a small, surgically implanted sensor. The sensor relays the temperature to a receiver attached to the dog's protective gear, where it can be monitored by the dog's human partners. The receiver also forwards the readings to the K9 officer's smartphone and instantly alerts the officer if the K9's body temperature exceeds safe limits.

"Our active involvement in public safety revealed that officers have serious K9 safety needs," said Blueforce CEO Mike Helfrich. "We expect this solution to help save K9 lives by communicating real-time temperature."

The telemetry is communicated to anyone subscribed to the animal through the Blueforce Tactical mobile application for Android or iOS, according to a Blueforce blog post. Those who are subscribed receive a notification when the dog's body temperature exceeds or falls below prescribed values.
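In outline, the alerting behavior described above is a threshold check on a stream of telemetry readings. The Python sketch below is purely illustrative: the class, the temperature band and the notification callback are assumptions, not part of the Blueforce or Data Sciences International software.

```python
# Hypothetical sketch of threshold alerting on K9 temperature telemetry.
# The temperature band and callback mechanism are assumptions for illustration.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class K9TemperatureMonitor:
    low_f: float = 99.5        # assumed lower bound, degrees Fahrenheit
    high_f: float = 104.0      # assumed upper bound, degrees Fahrenheit
    subscribers: List[Callable[[str], None]] = field(default_factory=list)

    def subscribe(self, notify: Callable[[str], None]) -> None:
        """Register a handler (a stand-in for a push to a handler's phone)."""
        self.subscribers.append(notify)

    def on_reading(self, dog: str, temp_f: float) -> None:
        """Called for each reading relayed from the implanted sensor."""
        if temp_f > self.high_f or temp_f < self.low_f:
            for notify in self.subscribers:
                notify(f"ALERT: {dog} body temperature {temp_f:.1f} F is out of range")

if __name__ == "__main__":
    monitor = K9TemperatureMonitor()
    monitor.subscribe(print)          # stand-in for a mobile push notification
    monitor.on_reading("Rex", 101.2)  # in range: no alert
    monitor.on_reading("Rex", 105.3)  # out of range: alert every subscriber
```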

Posted on Mar 26, 2014 at 11:17 AM


Defense teams rapidly deploy mobile, cloud biosurveillance tools

Scientists at the Naval Research Laboratory (NRL) have spent the past two years helping the Defense Threat Reduction Agency (DTRA) better predict pending epidemics and regional disease outbreaks.

That objective is at the heart of two linked DTRA programs. The first, the 24 Month Challenge, is a multi-agency project to identify and develop the diagnostic devices needed to make biosurveillance analytics a reality. A parallel DTRA program is developing a cloud database that analyzes the incoming data, according to the NRL announcement.

In the first phase of the 24 Month Challenge, the NRL team solicited proposals for diagnostic technologies that met several core requirements, including the ability to differentiate the cause of febrile illness and to send the diagnostic data to the cloud database. Evaluations over the past year whittled down the original list to four technologies, enabling NRL to work with three companies to develop prototypes that directly address the program's requirements.

"NRL has developed a relationship with two companies, InBIOS International Inc. and ChemBio Diagnostics Systems Inc., that make lateral flow immunoassay strips or LFIs. For reference, the best known example of a LFI is the home pregnancy test," said the NRL principal investigator Shawn Mulvaney. "We then challenged these companies to make their new LFIs capable of detecting the causative agents for malaria, dengue fever, melioidosis, and the plague using only a blood sample obtained from a finger prick. These are some of the most concerning diseases found in theater, particularly for our troops stationed in tropical climates."

NRL has also partnered with Fio Corp. to use the Deki Reader for test analysis and communications. The Deki Reader is a portable unit built around an Android smartphone. It can use its camera feature to take pictures of every test result, and the software can guide the user, analyze the outcomes, and upload the data over the cellular network.

"This is a clever solution to multiple challenges," said NRL’s Mulvaney.  

Based on strong analytical data obtained during NRL's testing, the three technologies are set for field trials in South America, Africa and Southeast Asia.

Posted on Mar 25, 2014 at 9:56 AM


Virginia Tech launches network exchange in Atlanta

Virginia Tech has created a new high-performance data network exchange in Atlanta, Ga., which it says is dramatically improving access to national and international research networks. The facility, named the Mid-Atlantic Research and Education Exchange Atlanta, will also streamline access to major research centers throughout the southeastern United States.

The new facility complements another data network exchange already operated by Virginia Tech in the Washington, D.C.-metropolitan area, which was formerly known as the NatCap Aggregation Facility.

Modern science and engineering research depends on specialized high-performance networks, such as Internet2, to link distributed computing, storage, visualization, sensor and laboratory resources with research scientists and students collaborating over a global cyber-infrastructure.

Since 1998, Virginia Tech has operated statewide network and aggregation systems that link Virginia’s major research institutions to national research networks and provide a regional hub for data transfer.

In 2012, the primary network aggregation facility in the Washington, D.C.-area was rebuilt using the latest technology, raising transfer speeds to 100 gigabit/sec.

The establishment of a second data network exchange in Atlanta provides geographic diversity, backup connectivity, and direct peering connections with major research institutions such as Oak Ridge National Laboratory and the Centers for Disease Control, according to Virginia Tech.

The university partnered with the Georgia Institute of Technology to house the new facility and to establish the regional peering connections.

Both data exchange facilities are linked to Virginia Tech via extremely fast fiber optic connections established through the university’s participation in the Broadband Technology Opportunities Program, which expanded the regional fiber network.

Having two connections, one going north and one going south, greatly enhances reliability and uptime, according to Virginia Tech; the university will remain connected even in the face of a fiber cut on one of the paths or a problem at one of the facilities.

The university’s Network Infrastructure and Services unit designed both facilities and operates them under contract to the Mid-Atlantic Research Infrastructure Alliance. Executive Director William Dougherty says he considers the data exchange to be a critical program for the university.

“Providing the best possible access to national research networks is vital to our mission to enable the competitiveness of Virginia Tech research,” said Dougherty. “By allowing other regional universities to use these facilities, we create economies of scale and support Virginia Tech’s commitment to engagement and leadership.” Participating institutions all contribute to the cost of operating the data network exchange facilities.

Posted on Mar 25, 2014 at 10:04 AM


DARPA to mine 'big code' to improve software reliability

During the past decade information technologies have driven productivity gains that are essential to U.S. economic competitiveness, and computing systems now control a significant portion of the critical infrastructure.

As a result, tremendous public and commercial resources are devoted to ensuring that programs are correct, especially at scale. Yet, in spite of sizeable efforts by developers, software defects remain at the root of most system errors and security vulnerabilities.

To address the predicament, the Defense Advanced Research Projects Agency wants to advance the way software is built, debugged, verified, maintained and understood by combining principles of big data analytics with software analysis.

DARPA said in its announcement that the Mining and Understanding Software Enclaves (MUSE) program would facilitate new ways to dramatically improve software correctness and help develop radically different approaches for automatically constructing and repairing complex software.

“Our goal is to apply the principles of big data analytics to identify and understand deep commonalities among the constantly evolving corpus of software drawn from the hundreds of billions of lines of open source code available today,” said Suresh Jagannathan, DARPA program manager, in the announcement.

“We’re aiming to treat programs—more precisely, facts about programs—as data, discovering new relationships (enclaves) among this ‘big code’ to build better, more robust software.”

Central to MUSE’s approach is the creation of a community infrastructure that would incorporate a continuously operating specification-mining engine, the agency said. This engine would use “deep program analyses and big data analytics to create a public database containing … inferences about salient properties, behaviors and vulnerabilities of software drawn from the hundreds of billions of lines of open source code available today.”

“The collective knowledge gleaned from this effort would facilitate new mechanisms for dramatically improving software reliability and help develop radically different approaches for automatically constructing and repairing complex software,” DARPA said in describing the project.
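To make the "facts about programs as data" idea concrete, the toy sketch below mines a small corpus of Python source files for pairs of function calls that habitually appear together, a crude stand-in for the kind of inferred properties MUSE aims to collect at a vastly larger scale. It uses only the standard library and is not DARPA's specification-mining engine.

```python
# Toy illustration (not DARPA's MUSE engine): walk a corpus of Python source,
# record which function calls co-occur inside the same function body, and
# surface the most common pairs as crude candidate "specifications".
import ast
import itertools
import sys
from collections import Counter
from pathlib import Path

def calls_in_function(fn: ast.FunctionDef) -> set:
    """Collect the names of functions/methods called inside one function."""
    names = set()
    for node in ast.walk(fn):
        if isinstance(node, ast.Call):
            target = node.func
            if isinstance(target, ast.Name):
                names.add(target.id)
            elif isinstance(target, ast.Attribute):
                names.add(target.attr)
    return names

def mine_pairs(source_dir: str) -> Counter:
    """Count co-occurring call pairs across every .py file in a directory."""
    pairs = Counter()
    for path in Path(source_dir).rglob("*.py"):
        try:
            tree = ast.parse(path.read_text(encoding="utf-8", errors="ignore"))
        except SyntaxError:
            continue  # skip files that do not parse
        for node in ast.walk(tree):
            if isinstance(node, ast.FunctionDef):
                for pair in itertools.combinations(sorted(calls_in_function(node)), 2):
                    pairs[pair] += 1
    return pairs

if __name__ == "__main__":
    for pair, count in mine_pairs(sys.argv[1]).most_common(10):
        print(count, pair)  # calls that habitually appear together in this corpus
```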

Posted on Mar 24, 2014 at 9:20 AM


What's next for predictive analytics?

Predictive analytics, a statistical or data mining approach that determines outcomes by applying a series of algorithms and techniques to both structured and unstructured data, has become a technology in high demand. Although not a new method, its ease of use, inexpensive computing power and the growing volumes of data organizations collect have driven its adoption.

The technology is being used for retention analysis, fraud detection, medical diagnosis and risk assessment, to name a few applications. Fern Halper, director of research for advanced analytics at TDWI, highlighted four trends in predictive analytics in a blog post on the TDWI website.

Techniques: Although the top three predictive analytics techniques are linear regression, decision trees and clustering, others have become more widely used. These include time series data analysis, which can be applied to weather observations, stock market prices and machine-generated data; machine learning, which can uncover previously unknown patterns in data; and ensemble modeling, in which predictions from a group of models are combined to generate more accurate results (a short sketch follows this list of trends).

Open source: Open source solutions enable a wide community to engage in innovation. The R language, a free software environment for data manipulation, statistics and graphics, has become one of the most popular open source solutions for predictive analytics in the enterprise.

In-memory analytics: In-memory analytics processes data in random-access memory rather than on disk, which lets users analyze large data sets more effectively.

Predictive analytics in the cloud: The use of the public cloud for analytics appears to be increasing. Organizations are starting to investigate the cloud for business intelligence, and users are putting their big data in the cloud, where they can process real-time data as well as run predictive models on extremely large, multisource data sets.
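Ensemble modeling, one of the techniques mentioned above, can be illustrated with a minimal sketch: several simple models each produce a prediction, and the ensemble averages them. The models, weights and data below are invented for illustration; real predictive analytics work would rely on a statistics package such as R or a machine-learning library.

```python
# Minimal, hypothetical sketch of ensemble modeling: average the predictions
# of several toy linear models. Weights and inputs are invented for illustration.
from statistics import mean
from typing import Callable, List, Sequence

Model = Callable[[Sequence[float]], float]

def make_linear_model(weights: Sequence[float], bias: float) -> Model:
    """Return a toy linear model: prediction = bias + sum(w_i * x_i)."""
    def predict(features: Sequence[float]) -> float:
        return bias + sum(w * x for w, x in zip(weights, features))
    return predict

def ensemble_predict(models: List[Model], features: Sequence[float]) -> float:
    """Average the predictions of every model in the ensemble."""
    return mean(model(features) for model in models)

if __name__ == "__main__":
    # Three toy models, assumed to have been fitted on different samples of the data.
    models = [
        make_linear_model([0.5, 1.2], bias=0.1),
        make_linear_model([0.6, 1.0], bias=0.0),
        make_linear_model([0.4, 1.3], bias=0.2),
    ]
    print(ensemble_predict(models, [2.0, 3.0]))  # averaged prediction
```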

Posted on Mar 17, 2014 at 10:41 AM