FDA's prescription for big data analysis
- By Sara Friedman
- Jul 13, 2017
With new medical treatments increasingly built on big data analysis, the Food and Drug Administration is working to use more high-performance computing to make drug development and testing more effective.
The FDA’s Center for Drug Evaluation and Research already uses HPC-based computer modeling and simulation -- known as in silico tools -- in clinical trials to evaluate drugs and medical devices. In a recent blog post, FDA Commissioner Scott Gottlieb announced the agency's plans to update how in silico tools can be used in different phases of drug development.
For example, the tools will help advance drug individualization, which takes aspects of an individual patient's physiology and genetics into account when determining how well a drug works. Researchers are using modeling and simulation to identify subgroups of patients who might need dosage adjustments to increase a drug's effectiveness.
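The article does not describe the FDA's actual models, but the idea of flagging subgroups for dosage adjustment can be sketched in miniature. The example below, in Python, uses a simple one-compartment pharmacokinetic relationship (exposure = dose / clearance) and assumed, purely illustrative clearance values by metabolizer genotype; the genotype labels, target window and dose numbers are invented for illustration, not FDA data.

```python
# Hypothetical sketch: flag patient subgroups whose drug exposure at a
# standard dose falls outside a target window, using a one-compartment
# pharmacokinetic model. All numbers and labels are illustrative.

TARGET_AUC = (10.0, 20.0)  # desired daily exposure window (mg*h/L), illustrative
STANDARD_DOSE = 100.0      # mg per day, illustrative

# Assumed drug clearance (L/h) by metabolizer genotype -- illustrative values.
CLEARANCE_BY_GENOTYPE = {
    "extensive_metabolizer": 8.0,
    "intermediate_metabolizer": 5.0,
    "poor_metabolizer": 2.5,
}

def daily_auc(dose_mg: float, clearance_l_per_h: float) -> float:
    """Steady-state daily exposure: AUC = dose / clearance."""
    return dose_mg / clearance_l_per_h

def adjusted_dose(clearance_l_per_h: float) -> float:
    """Dose that centers exposure in the middle of the target window."""
    midpoint = sum(TARGET_AUC) / 2
    return midpoint * clearance_l_per_h

def flag_subgroups() -> dict:
    """Simulate each genotype subgroup at the standard dose and flag
    those whose exposure falls outside the target window."""
    flags = {}
    for genotype, cl in CLEARANCE_BY_GENOTYPE.items():
        auc = daily_auc(STANDARD_DOSE, cl)
        in_window = TARGET_AUC[0] <= auc <= TARGET_AUC[1]
        flags[genotype] = {
            "auc_at_standard_dose": auc,
            "needs_adjustment": not in_window,
            "suggested_dose_mg": None if in_window else adjusted_dose(cl),
        }
    return flags

if __name__ == "__main__":
    for genotype, info in flag_subgroups().items():
        print(genotype, info)
```

In this toy run, the hypothetical poor-metabolizer subgroup overshoots the exposure window at the standard dose and is flagged for a lower one; real drug-individualization models are far richer, but the subgroup-flagging logic follows the same pattern.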
Another research effort involves building natural history databases that collect information about individual patients. Data from these databases can spur development of “model-based drugs” for diseases like Parkinson’s, Huntington’s, Alzheimer’s and muscular dystrophy.
But both of these projects need high-performance computing to meet their objectives.
“These computing capabilities are becoming a key requirement to the ability of our review staff to manipulate the large data sets that are now a common feature of drug applications,” Gottlieb said in the blog post. “FDA is actively working to expand the agency’s capabilities in high performance computing, and to explore modeling approaches and enhance their regulatory impact, through an effort enabled by the work of the agency’s Scientific Computing Board.”
In October 2016, the FDA’s Scientific Computing Board and Office of Management and Technology awarded a $112 million, five-year contract to Engility for high-performance computing architectures to support computational science and bioinformatics related to gene-based drug therapy and development.
In September 2015, the Office of Health Informatics also explored cloud-based high-performance computing for big data analysis. Its Chillax prototype was developed to test HPC in the cloud for foodborne illness analyses. It moved the complicated data retrieval, storage and analysis for whole-genome sequencing of publicly available Salmonella genomes to a cloud environment, where processing time was cut from 23 days to 20 hours.
Elaine Johanson, director of the Office of Health Informatics, told GCN that although the Chillax prototype test is complete, it allowed the FDA to “recognize the benefit of going directly to a cloud provider vs. through a third party.” But Chillax will not be operating in FDA’s larger testing environments.
“The FDA routinely evaluates new technologies, but not all of these technologies make it into a production environment,” Johanson said. “However, these small prototypes, such as Chillax, inform the broader production solution.”
Sara Friedman is a reporter/producer for GCN, covering cloud, cybersecurity and a wide range of other public-sector IT topics.
Before joining GCN, Friedman was a reporter for Gambling Compliance, where she covered state issues related to casinos, lotteries and fantasy sports. She has also written for Communications Daily and Washington Internet Daily on state telecom and cloud computing. Friedman is a graduate of Ithaca College, where she studied journalism, politics and international communications.
Friedman can be contacted at firstname.lastname@example.org or followed on Twitter at @SaraEFriedman.