Programmers, lawmakers want AI to eliminate bias, not promote it
- By Kristian Hernández
- Jun 09, 2021
DALLAS — When software engineer Bejoy Narayana was developing Bob.ai, an application to help automate Dallas-Fort Worth’s Section 8 voucher program, he stopped and asked himself, “Could this system be used to help some people more than others?”
Bob.ai uses artificial intelligence, known as AI, and automation to help voucher holders find rental units, property owners complete contracting and housing authorities conduct inspections. The software and mobile app were released in 2018 in partnership with the Dallas Housing Authority, which gave Narayana access to data from some 16,000 Section 8 voucher holders.
Artificial intelligence is used in a host of algorithms in medicine, banking and other major industries. But as it has proliferated, studies have shown that AI can be biased against people of color. In housing, AI has helped perpetuate segregation, redlining and other forms of racial discrimination against Black families, who disproportionately rely on vouchers.
Narayana worried that Bob.ai would do the same, so he tweaked his app so that tenants could search for apartments using their voucher number alone, without providing any other identifying information.
As an Indian immigrant overseeing a team largely made up of people of color, Narayana was especially sensitive to the threat of racial bias. But lawmakers in a growing number of states don’t want to rely on the goodwill of AI developers. Instead, as AI is adopted by more industries and government agencies, they want to strengthen and update laws to guard against racially discriminatory algorithms—especially in the absence of federal rules.
Since 2019, more than 100 bills related to artificial intelligence and automated decision systems have been introduced in nearly two dozen states, according to the National Conference of State Legislatures. This year, lawmakers in at least 16 states proposed creating panels to review AI’s impact, promote public and private investment in AI, or address transparency and fairness in AI development.
A bill in California would be the first to require developers to evaluate the privacy and security risks of their software, as well as assess their products’ potential to generate inaccurate, unfair, biased or discriminatory decisions. Under the proposed law, the California Department of Technology would have to approve software before it could be used in the public sector.
The bill, introduced by Assembly Member Ed Chau, a Democrat and chair of the Committee on Privacy and Consumer Protection, passed the California State Assembly earlier this month and was pending in the state Senate at publication time. Chau’s office did not respond to multiple requests for comment.
Vinhcent Le, a lawyer at the Greenlining Institute, an advocacy group focused on racial economic justice, helped write the California legislation. Le described algorithms such as Bob.ai as gatekeepers to opportunity that can either perpetuate segregation and redlining or help to end them.
“It’s great that the developers of Bob.ai decided to omit a person’s name, but we can’t rely on small groups of people making decisions that can essentially affect thousands,” Le said. “We need an agreed way to audit these systems to ensure they are integrating equity metrics in ways that don’t unfairly disadvantage people.”
According to an October report by the Massachusetts Institute of Technology, AI often has exacerbated racial bias in housing. A 2019 report from the University of California, Berkeley, showed that an AI-based mortgage lending system charged Black and Hispanic borrowers higher rates than White borrowers for the same loans.
In 2019, U.S. Sen. Cory Booker, a New Jersey Democrat, introduced a bill like the one under consideration in California, but it died in committee and has not been reintroduced.
“Fifty years ago, my parents encountered a practice called ‘real estate steering’ where black couples were steered away from certain neighborhoods in New Jersey. With the help of local advocates and the backing of federal legislation, they prevailed,” Booker said in a news release introducing the bill.
“However, the discrimination that my family faced in 1969 can be significantly harder to detect in 2019: houses that you never know are for sale, job opportunities that never present themselves, and financing that you never become aware of—all due to biased algorithms.”
Several states have struggled in recent years with problematic software.
Facebook overhauled its ad-targeting system to prevent discrimination in housing, credit and job ads in 2019 as part of a settlement to resolve legal challenges filed by the National Fair Housing Alliance, the American Civil Liberties Union, the Communications Workers of America and other advocacy groups.
In Michigan, an AI system that cost the state $47 million to build in 2013 falsely accused as many as 40,000 people of unemployment insurance fraud, forcing some people into bankruptcy, according to the Detroit Free Press.
In Pennsylvania, a child abuse prediction model unfairly targets low-income families because it relies on data that is collected only on families using public resources, according to Virginia Eubanks' 2018 book “Automating Inequality.”
“Automated decision-making shatters the social safety net, criminalizes the poor, intensifies discrimination, and compromises our deepest national values,” Eubanks wrote. “And while the most sweeping digital decision-making tools are tested in what could be called ‘low rights environments’ where there are few expectations of political accountability and transparency, systems first designed for the poor will eventually be used on everyone.”
The Sacramento Housing and Redevelopment Agency began using Bob.ai in March. Laila Darby, assistant director of the housing voucher program, said the agency vetted Bob.ai before using it to make sure it didn’t raise privacy and discrimination concerns.
Narayana said he’s sure Bob.ai would pass any state-mandated test for algorithmic discrimination.
“We’re a company that is fighting discrimination and doing everything possible to expand housing for voucher holders,” Narayana said. “Vetting these systems is beneficial because discrimination and inequality is something everyone should be concerned about.”
Narayana worked as an engineer at IBM until he decided to start his own company with the mission of rethinking government functions. He founded BoodsKapper in 2016 and began developing Bob.ai out of a co-working space near the Dallas-Fort Worth airport.
Narayana’s creation has been a huge success—in Dallas and beyond. The Dallas Housing Authority has used Bob.ai to cut the average wait time for an apartment inspection from 15 days to one. Since the launch of Bob.ai, Dallas and more than a dozen other housing agencies have added some 20,000 Section 8 units from landlords who were not participating in the program because of the long inspection wait times.
“We partnered with [Narayana] to come up with some technology advancements to our workflows and automation so that we could more timely respond to our business partners so that they didn’t see this as a lost lead in terms of working with the voucher program,” said Troy Broussard, Dallas Housing Authority CEO.
Marian Russo, executive director of the Village of Patchogue Community Development Agency on Long Island, New York, said she hopes Bob.ai can help the agency reverse the area’s long history of redlining. The authority plans to begin using Bob.ai to manage its 173 housing vouchers later this year.
“We’re one of the most segregated parts of the country,” Russo said of Long Island. “We have 25 housing authorities, so if we could just have a central place with all the landlords who are renting through the program and all the individuals who are looking for housing in one place, that could be a part of equalizing the housing issues on Long Island.”
U.S. Rep. Bill Foster, an Illinois Democrat, has similar hopes for AI. In a May 7 hearing, members of the Task Force on Artificial Intelligence of the U.S. House Committee on Financial Services discussed how AI could expand lending, housing and other opportunities. But they also warned that historical data fed into AI systems can produce models that are racist or sexist. Foster’s office did not respond to multiple requests for comment.
“The real promise of AI in this space is that it may eventually produce greater fairness and equity in ways that we may not have contemplated ourselves,” said Foster, chair of the task force, in the hearing. “So, we want to make sure that the biases of the analog world are not repeated in the AI and machine-learning world.”
This article first appeared on Stateline, an initiative of The Pew Charitable Trusts.