Machine learning improves contamination monitoring
- By Matt Leonard
- Aug 14, 2018
Because groundwater is susceptible to pollution from automotive fuel, fertilizer or naturally occurring substances like iron, the Environmental Protection Agency and its state-level counterparts conduct annual or quarterly sampling and analysis.
However, new research from Lawrence Berkeley National Laboratory shows that sensors and machine learning techniques can provide regular, even continual, monitoring.
At the Savannah River Site, a former nuclear weapons production facility, researcher Haruko Wainwright used in situ sensors to test for radioactivity by measuring groundwater acidity and conductance, which proved to be reliable indicators of tritium and uranium-238 concentrations. The sensor data was fed into an algorithm that used the acidity and conductance readings to estimate radioactivity. When the results were compared to the historical record, they proved to be a “reliable” source of contamination data.
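The article does not describe the algorithm itself, but the idea of estimating a contaminant level from two sensor-measured proxies can be sketched as a simple regression. Everything below is illustrative: the data values, the linear model form, and the tritium units are invented, not taken from the Savannah River study.

```python
# Hypothetical sketch: estimate a contaminant level (tritium proxy) from
# in situ sensor readings of pH and specific conductance. The model form
# (ordinary least squares) and all numbers are invented for illustration.

def fit_ols(X, y):
    """Ordinary least squares with an intercept, via the normal equations."""
    A = [[1.0] + row for row in X]          # design matrix with intercept column
    n, p = len(A), len(A[0])
    # Normal equations: (A^T A) beta = A^T y
    AtA = [[sum(A[k][i] * A[k][j] for k in range(n)) for j in range(p)]
           for i in range(p)]
    Aty = [sum(A[k][i] * y[k] for k in range(n)) for i in range(p)]
    # Gaussian elimination with partial pivoting
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(AtA[r][col]))
        AtA[col], AtA[piv] = AtA[piv], AtA[col]
        Aty[col], Aty[piv] = Aty[piv], Aty[col]
        for r in range(col + 1, p):
            f = AtA[r][col] / AtA[col][col]
            for c in range(col, p):
                AtA[r][c] -= f * AtA[col][c]
            Aty[r] -= f * Aty[col]
    beta = [0.0] * p
    for i in reversed(range(p)):
        beta[i] = (Aty[i] - sum(AtA[i][j] * beta[j]
                                for j in range(i + 1, p))) / AtA[i][i]
    return beta

# Invented training data: [pH, conductance (uS/cm)] -> tritium proxy (pCi/L)
X = [[4.1, 420.0], [4.5, 390.0], [5.2, 310.0], [6.0, 250.0], [6.8, 180.0]]
y = [9800.0, 8700.0, 6100.0, 4200.0, 2300.0]

beta = fit_ols(X, y)
# Estimate for a new sensor reading (pH 4.8, conductance 350 uS/cm)
estimate = beta[0] + beta[1] * 4.8 + beta[2] * 350.0
```

In practice a study like this would likely use a richer statistical or machine learning model and calibrate it against lab-analyzed samples, but the core idea is the same: map cheap, continuous sensor proxies onto an expensive-to-measure contaminant level.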
Using machine learning to quickly analyze sensor data can provide an early warning system for changes in contaminant levels, Wainwright said in a statement.
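The early-warning idea can be sketched as a simple streaming check: compare each new model estimate against a rolling baseline and flag sharp increases. The window size, threshold factor, and readings below are invented for illustration, not from the study.

```python
# Hypothetical sketch of an early-warning check on streaming estimates:
# flag any reading well above the rolling mean of recent readings.
from collections import deque

def make_monitor(window=24, factor=1.5):
    """Return a checker that flags a reading > factor x the rolling mean."""
    history = deque(maxlen=window)          # recent contaminant estimates
    def check(estimate):
        alarm = bool(history) and estimate > factor * (sum(history) / len(history))
        history.append(estimate)
        return alarm
    return check

check = make_monitor(window=3, factor=1.5)
readings = [100.0, 105.0, 98.0, 250.0]      # invented contaminant estimates
alarms = [check(r) for r in readings]       # only the jump to 250 trips the alarm
```

A real monitoring system would add seasonal baselines and sensor-fault handling, but this is the shape of the "continual monitoring" advantage over annual or quarterly sampling: the anomaly is caught within one sensor interval instead of months later.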
Matt Leonard is a reporter/producer at GCN.
Before joining GCN, Leonard worked as a local reporter for The Smithfield Times in southeastern Virginia. In his time there he wrote about town council meetings, local crime and what to do if a beaver dam floods your back yard. Over the last few years, he has spent time at The Commonwealth Times, The Denver Post and WTVR-CBS 6. He is a graduate of Virginia Commonwealth University, where he received the faculty award for print and online journalism.
Leonard can be contacted at firstname.lastname@example.org or followed on Twitter @Matt_Lnrd.