USPTO building data reservoir
- By Matt Leonard
- Jun 29, 2017
As federal agencies move to implement more DevOps practices, it's increasingly important to have a good handle on their data practices, according to Pamela K. Isom, the director of the Office of Application Engineering and Development at the U.S. Patent and Trademark Office.
“Data science and DevOps go hand in hand,” she told an audience at the June 28 ATARC Federal DevOps Summit.
The goal of better data, she said, is increased quality and speed. To that end, USPTO is building a data reservoir that will consist of patent and trademark data, the number of patents generated, the quality of the patents and other data from within the organization.
By analyzing this centralized data, Isom said, USPTO will be able to find incongruities in the patent process. “Are there any anomalies in deciding who should be granted a patent? Are we consistent in making those decisions?” she asked.
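The article doesn't describe how USPTO implements such consistency checks, but the idea can be sketched in a few lines. The example below is purely illustrative: the sample records, the examiner labels and the one-standard-deviation threshold are all assumptions, not anything USPTO has published.

```python
from statistics import mean, pstdev

# Hypothetical sample of decisions: (examiner, was the patent granted?)
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
    ("C", True), ("C", True), ("C", True), ("C", True),
]

def grant_rates(records):
    """Compute each examiner's grant rate from (examiner, granted) pairs."""
    totals, grants = {}, {}
    for examiner, granted in records:
        totals[examiner] = totals.get(examiner, 0) + 1
        grants[examiner] = grants.get(examiner, 0) + (1 if granted else 0)
    return {e: grants[e] / totals[e] for e in totals}

def flag_outliers(rates, z_threshold=1.0):
    """Flag examiners whose grant rate sits more than z_threshold
    population standard deviations from the mean rate."""
    values = list(rates.values())
    mu, sigma = mean(values), pstdev(values)
    if sigma == 0:
        return []  # all rates identical: nothing stands out
    return [e for e, r in rates.items() if abs(r - mu) / sigma > z_threshold]
```

With the sample data above, examiner B (25 percent grant rate) and examiner C (100 percent) both land more than one standard deviation from the group mean, which is the kind of incongruity a centralized reservoir would make visible across an agency rather than within one silo.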
USPTO's data reservoir has been in the works for about a year, with Chief Data Strategist Thomas Beach leading the effort. A couple of application programming interfaces are already populating the reservoir, with a few more feeding the data lakes, Isom said. Most of the APIs that will work off the reservoir won't go live until 2018, though.
The hope is that centralizing the data -- instead of leaving it siloed throughout the agency, as it currently is -- will lead to faster decisions.
“Everyone is collecting data,” she said. “It’s about making better business decisions.”
Matt Leonard is a former reporter for GCN.