Analytics platform surfaces insights from regulatory documents
- By Stephanie Kanowitz
- Sep 08, 2021
An open-source project that collects, quantifies and visualizes federal and state regulatory text allows policymakers to analyze the data when making decisions on regulations.
The project is called RegData and it falls under the open-source QuantGov platform, a collection of relational data and tools for researchers and policymakers. It was developed by the Policy Analytics Team at the Mercatus Center at George Mason University and aims to use computer science and data analytics to surface data from legal and policy documents and then help researchers draw insights from that data.
“QuantGov as a whole looks to quantify the government,” said Stephen Strosko, head of the policy and analytics team at the Mercatus Center. The system collects and analyzes legal text -- regulatory codes, statutes, occupational licensing and guidance documents, he said. “We not only provide data, but we provide those documents in a public manner, and we also provide different tools that researchers and policymakers can use to create their own custom analyses.”
RegData works specifically with regulatory code. It started at the federal level and has expanded to include state-level and international code for Canada and its provinces as well as to Australia and its states.
It quantifies regulations based on the actual text. RegData counts the number of restriction words, like “shall” or “must” that indicate an obligation to comply. It can also quantify regulations by industry by estimating the likelihood that a regulation restriction targets a specific industry. Together, these two methods allow researchers to see how regulations relevant to a particular industry change over time or to compare restrictiveness across industries.
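The restriction-word counting described above can be sketched in a few lines of Python. The term list and matching rules here are illustrative of the approach, not QuantGov's published methodology:

```python
import re

# Illustrative restriction terms signaling an obligation to comply.
# "may not" is listed as a phrase so the restrictive form is counted.
RESTRICTION_TERMS = ["shall", "must", "may not", "required", "prohibited"]

def count_restrictions(text: str) -> dict:
    """Count occurrences of each restriction term in a regulation's text."""
    lowered = text.lower()
    counts = {}
    for term in RESTRICTION_TERMS:
        # \b word boundaries keep "shall" from matching inside longer words
        counts[term] = len(re.findall(r"\b" + re.escape(term) + r"\b", lowered))
    return counts

sample = ("The licensee shall file annually. Operators must maintain records "
          "and may not discharge waste without a permit.")
print(count_restrictions(sample))
# {'shall': 1, 'must': 1, 'may not': 1, 'required': 0, 'prohibited': 0}
```

Summing these counts per section and per year is what makes trends in restrictiveness comparable over time.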
To do this quantification, the team set up a pipeline using Amazon Web Services and Python code and scraped web pages to put the text from the Electronic Code of Federal Regulations (CFR) into a machine-readable format. RegData uses algorithms and natural language processing to analyze the text to help users find legally binding terms that are unique to the legal framework for the jurisdiction.
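The "scrape web pages into machine-readable text" step can be illustrated with the standard library's HTML parser. This is a minimal sketch of the idea only; the actual pipeline runs Python on AWS against the full eCFR and is not reproduced here:

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text from an HTML page, skipping script/style."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = 0  # nesting depth inside <script>/<style> elements

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def html_to_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    return " ".join(parser.chunks)

page = "<html><body><h1>§ 101.1</h1><p>The operator shall submit reports.</p></body></html>"
print(html_to_text(page))  # § 101.1 The operator shall submit reports.
```

Once the text is flat and machine-readable like this, the counting and classification steps can run over it uniformly across jurisdictions.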
The team uploads the data into a relational database and the resulting visualization is the RegData U.S. Regulation Tracker. It lets the public compare data across industries and agencies and create their own visualizations.
“In addition, we do dive a little bit deeper and use some advanced machine learning algorithms to associate the text surrounding each of those regulations to specific economic industries, which therefore allows researchers [and] policymakers to look at how many regulatory restrictions are associated to even at the minute level -- something like mushroom farmers -- or at the higher level, something like agriculture,” Strosko said. “That’s our most popular metric that we produce.”
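The industry-association idea can be shown with a toy scorer: estimate which industry's vocabulary best explains a regulation's text. The keyword lists and NAICS-style labels below are invented for illustration; QuantGov's production metric comes from trained machine learning classifiers, not keyword lookup:

```python
from collections import Counter

# Hypothetical NAICS-style industries and keyword vocabularies,
# standing in for a trained text classifier.
INDUSTRY_VOCAB = {
    "111 Crop Production": ["farm", "crop", "harvest", "mushroom", "soil"],
    "211 Oil and Gas":     ["well", "drilling", "petroleum", "pipeline"],
}

def industry_scores(text: str) -> dict:
    """Crude per-industry relevance score for a regulation's text."""
    words = Counter(text.lower().split())
    scores = {}
    for industry, vocab in INDUSTRY_VOCAB.items():
        # Share of the text made up of this industry's vocabulary --
        # a rough stand-in for a classifier's predicted probability.
        hits = sum(words[w] for w in vocab)
        scores[industry] = hits / (sum(words.values()) or 1)
    return scores

reg = "mushroom farm operators shall test soil before each harvest"
scores = industry_scores(reg)
print(max(scores, key=scores.get))  # 111 Crop Production
```

Because NAICS codes nest, the same scoring can roll up from a narrow category like mushroom farming to a broad one like agriculture, which is the granularity Strosko describes.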
The datasets update daily for state-level regulations and annually for federal ones.
Another tool, the RegCensus Explorer, is the most comprehensive visualization, he said. The interactive application lets users visualize and download QuantGov data. For instance, they can compare two jurisdictions, see the states with the greatest number of restrictions, view the percent change in restrictions over time and get state-by-state snapshots. “If you wanted to see the number of regulatory restrictions Australia has compared to the United States, you can do that here, and you can look at changes over time,” Strosko said.
Everything is open source, so the code, data and documents are all publicly available for free. About 18 months ago, the team released RegHub, where users can bulk download the text documents they analyze and search them for keywords and phrases.
How far back the records go depends on the state, Strosko said. All but two or three states have now made their regulations accessible online. For those, the team has three data points in time: 2020, 2021 and a year around 2018, he said.
“There are a few states we’re really interested in having the regulatory code analyzed throughout history, and they’ve sent us PDF versions or text versions of their regulatory code for years back,” Strosko said. They include Idaho, Kentucky and Missouri.
At the federal level, the regulations date to about 1980, when they were first digitized.
“Our hope is to provide educational materials so that policymakers and researchers can make better-informed decisions with quantitative data,” he said.
Several are doing just that, he added. For instance, Idaho, Kentucky and Missouri use QuantGov metrics for legislation they consider. Policymakers might say, “We have this amount of regulatory restrictions, and we want to improve it.”
Strosko said some federal agencies don’t know how many regulatory restrictions they’re responsible for under the CFR because there are so many. QuantGov can provide that information and create custom checklists, such as the 10 most complex regulations an agency is responsible for.
“We also track the Federal Register, which is the precursor to the final rules that appear in the Code of Federal Regulations,” he added.
Another QuantGov product is the QuantGov Python Library, which allows people who are not familiar with Python or coding to run basic machine learning or deep learning analysis on regulatory text.
Looking ahead, Strosko said the team is interested in looking into industry metrics for countries beyond North America. Currently, a limiting factor of the data is that the industry classification system they use -- the North American Industry Classification System -- has no international counterpart. That makes apples-to-apples comparisons across countries impossible. For example, it would be useful for U.S. policymakers to know that an increase or decrease in agricultural restrictions had a particular outcome in Germany so they could draw conclusions about how similar actions here could play out.
Stephanie Kanowitz is a freelance writer based in northern Virginia.