How big data can drive cost containment

INDUSTRY INSIGHT

Many federal hospitals and health systems anticipate savings from implementing big data models for health care analytics. However, containing costs without reducing quality of care is often both a central goal and a lead concern. By approaching the task properly, federal health care providers can leverage data analytics to control costs across care delivery while also accounting for the cost of implementation itself. The best place to start is identifying the type of problem where data analytics will have the greatest return on investment.

Starting small: procurement and staffing

One example of cost containment through data analytics concerns the procurement of medical supplies. Every federal health care provider tracks what it purchases. That data can be put to great use in predictive analysis.

Consider the significant dollars a health care organization spends annually on commodity items used in surgery, from bandages and suture material to surgical gloves to pharmaceuticals and vaccines. By analyzing trends in purchasing and consumption, an organization can predict how much of each item it will need going forward and when it will need it. With that insight, the organization can ensure that it has the optimal inventory level of each item and that it rotates them all in stock for maximum efficiency. Having an accurate projection of future needs can also enable more favorable purchasing that can assist in cost containment.
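As a minimal sketch of this idea, demand for a commodity item can be forecast from its own usage history and turned into a reorder quantity. The item, the three-month averaging window and the safety-stock figure below are illustrative assumptions, not anything specified here:

```python
def forecast_demand(monthly_usage, window=3):
    """Forecast next month's demand as a moving average of recent usage."""
    recent = monthly_usage[-window:]
    return sum(recent) / len(recent)

def reorder_quantity(monthly_usage, on_hand, safety_stock, window=3):
    """Order enough units to cover the forecast plus a safety buffer."""
    needed = forecast_demand(monthly_usage, window) + safety_stock
    return max(0, round(needed - on_hand))

# Hypothetical usage history for surgical gloves (units per month)
gloves_usage = [1200, 1150, 1300, 1250, 1400]
print(forecast_demand(gloves_usage))   # moving average of the last 3 months
print(reorder_quantity(gloves_usage, on_hand=600, safety_stock=200))
```

A production system would use a richer model (seasonality, lead times), but even this simple rolling average captures the core of predicting need from past consumption.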

Once the data is captured in an analytics platform, additional indirect analysis is possible. For example, analyzing past consumption to accurately predict future requirements applies equally well to understanding staffing and skill needs. By looking at usage trends of inventory items and correlating them with the types of procedures associated with those items, staffing levels and skill needs can be refined. When the provider organization knows with greater certainty what clinical human resources it will need going forward and how those needs are typically distributed throughout a given time period, it can better avoid labor cost overruns and ensure that the organization's spending on training yields the greatest value.
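The translation from projected procedure volumes to staffing needs can be sketched very simply. The procedure names and staff-hour figures below are hypothetical placeholders for an organization's own data:

```python
# Hypothetical mapping from procedure type to clinical staff-hours required
STAFF_HOURS = {"appendectomy": 6, "hernia_repair": 4, "cataract": 2}

def projected_staff_hours(projected_volumes):
    """Convert projected monthly procedure volumes into total staff-hours."""
    return sum(STAFF_HOURS[proc] * count for proc, count in projected_volumes.items())

# Volumes inferred from supply-consumption trends (illustrative numbers)
volumes = {"appendectomy": 10, "hernia_repair": 25, "cataract": 40}
print(projected_staff_hours(volumes))  # 10*6 + 25*4 + 40*2 = 240
```

Comparing that projection against scheduled staff-hours is what surfaces the potential overruns or training gaps the article describes.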

This type of data analytics is a relatively simple way to get started with big data. It relies on data that is already continuously compiled and readily available within the confines of the hospital or health system. Mastering this level of analytics and predictive modeling can be an excellent way of preparing for more complex data models.

Going big: predictive modeling for preventive care

At the other end of the predictive analytics spectrum are data models built around preventive care, and they hold promise for tremendous health care cost containment. Proactively spending one dollar now rather than doing nothing and spending 10 dollars later is the new normal in controlling health care costs.

Take a patient who has some risk for hypertension, for instance. By evaluating a person's full clinical data, including personal and family medical history, data analysis can assist medical professionals in developing a care plan to keep the patient from reaching a fully hypertensive state. This proactive approach can reduce the cost of additional medications or, in a worst-case scenario, limit spending on the extensive testing, hospitalization or surgery that accompany complications. The standard of care for putting a patient on blood pressure medicine is BP readings consistently at or above 130 systolic. For the patient who is below that but still consistently above 120 systolic, has parents with hypertension, self-identifies as sedentary and is non-compliant with behavior modification, analytics can look at the patient's records as a whole in the context of the available population of similar cases and make recommendations to medical professionals. This information can help determine whether the best course of action is to begin taking blood pressure medicine now, before the patient becomes fully hypertensive, or to recommend a different, better-accepted behavior modification.
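The thresholds above lend themselves to a simple illustrative triage rule. This is only a sketch of the logic, not clinical guidance: the requirement of two or more risk factors for a flag is an assumption of this example, while the 130 and 120 systolic cutoffs come from the scenario described:

```python
def flag_for_review(systolic_readings, family_history, sedentary, noncompliant):
    """Triage a patient's BP history against the thresholds in the scenario.

    >=130 average systolic meets the stated standard of care for medication;
    120-129 plus at least two risk factors (an assumed weighting) is flagged
    for clinician review; anything else stays on routine monitoring.
    """
    avg = sum(systolic_readings) / len(systolic_readings)
    if avg >= 130:
        return "meets standard of care for medication"
    risk_factors = sum([family_history, sedentary, noncompliant])
    if avg >= 120 and risk_factors >= 2:
        return "flag: consider early intervention"
    return "routine monitoring"

print(flag_for_review([124, 126, 123], family_history=True,
                      sedentary=True, noncompliant=True))
```

A real model would score risk against a population of similar cases rather than a hand-written rule, but the decision structure is the same: surface the borderline patient for a clinician's judgment, not an automated prescription.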

Predictive modeling of this type is beginning to deliver cost containment in many more ways, including:

  • Continuous monitoring of patient vitals, labs and other available data feeds for early indications of sepsis or infection.
  • Assisting in identifying patients where additional interventions can reduce the likelihood of readmission.
  • Informing more effective interventions for chronic and preventable diseases by taking extra-clinical data into account.

Advanced analytics can weigh these varied data sources to determine an individual's risk and the optimal corrective course.

Cost containment of data modeling implementations

Cost containment within a data modeling implementation is largely a matter of doing more with less. It’s also a matter of seeing data as an asset rather than a liability. Instead of thinking of having to store large amounts of data as a burden, recognize the progress that can be made by making use of it. By giving people in different departments access to the same data warehouse with the same tools, organizations can realize cost efficiencies in training and operations while supporting scalability. Then, improved use of data and demonstrating return on investment can begin to break down silos.

Taking a data-centric approach, as opposed to focusing only on the tools to be used, is a path that health organizations should pursue. Adding data-centric analytics as needed -- and expanding it as demand or funding allows -- enables cost containment as capabilities progress. Along the way, experience informs estimations of project cost and complexity to avoid overruns. Starting with smaller projects, such as focusing on data-centric tasks like procurement and staffing before progressing to more complex analytics, is a good risk-mitigation strategy -- and one that can promote cost containment from the beginning of execution.

About the Author

Ryan Weil is principal scientist at Leidos.
