Hiring the people who will put AI to work
Lawmakers on both sides of the aisle are betting the future of U.S. research and development lies in artificial intelligence technology, but it remains to be seen if their efforts will go beyond mere recommendations.
A report released in August by the Bipartisan Policy Center recommends the federal government double its AI spending by 2025, in an effort to remain competitive with China and other world powers. On its face, this is exciting news -- few technology or IT professionals in civil service will see a $25 billion investment in more advanced technology as a misappropriation of funds.
However, bringing many of the report’s commendable recommendations to life will be easier said than done. While the report’s authors seem to have a good grasp on the technology itself, there isn’t much exploration of who is actually going to do the work. If AI is going to help usher in a new era of American R&D dominance in the public sector, we must have a larger conversation about hiring the people who will make this possible.
Welcoming the AI roles of the future
Big investments in AI at any agency -- from the local to federal level -- mean nothing without the right people to contextualize and find the appropriate solutions for the problems at hand. For these investments to be successful, agency leaders must:
Hire problem solvers. When building a team to guide an agency’s AI strategy, no applicant will have the perfect background. However, the right candidate does not need to have prior government experience. Instead, agencies should look for individuals who have intimate experience with the problem they are trying to solve, regardless of their familiarity with public service or the mission of the agency itself.
Ultimately, a good data scientist, process engineer or systems architect will speak to those who own the mission and help them figure out how to meet that mission with advanced technology. But technical know-how and prior experience come first -- which means a good candidate will come to the job with the skills needed to solve the agency’s specific pain points, even if they don’t yet understand the bigger picture. The right person will learn the ins and outs of the agency’s mission as they go.
We should also pause to applaud the AI report’s authors for this suggestion: making it easier for the public sector to recruit tech talent from the private sector and academia. This proposal calls for creating the type of mobility and incentives necessary to attract the right kind of talent into government. While it acknowledges salaries likely won’t be competitive, it encourages government to lean into the public good tech talent can provide by modernizing government systems.
Embrace and evaluate digital workers. Not every AI role in government will be filled by humans. Once agencies begin integrating AI into various processes, they must begin thinking differently about who the “workers” are within each agency. In AI-enabled use cases, the workers can be the algorithms that perform any number of intelligent tasks in place of a human. These algorithms can be responsible for anything from transcribing audio in real time to identifying and redacting sensitive information in a large set of documents.
It’s imperative these digital workers are assessed just like their human counterparts. Like a good HR department evaluates employees, agencies must ensure they have a way to evaluate their AI workers’ performance and drive improvement. Because the output of AI-assisted systems changes based on the input, this type of review must account for changes in the output over time.
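As a sketch of what such a periodic “performance review” for a digital worker might look like, the snippet below tracks a transcription algorithm’s word error rate across review periods and flags regression. The metric, the 10% ceiling, and the 1.5x drift factor are illustrative assumptions, not prescriptions from the report.

```python
# Minimal sketch of a recurring review for an AI "digital worker."
# The word-error-rate metric and thresholds below are assumed for
# illustration; an agency would pick its own quality measures.

from dataclasses import dataclass, field

@dataclass
class DigitalWorkerReview:
    """Tracks an AI worker's scored output across review periods."""
    name: str
    max_wer: float = 0.10          # acceptable word error rate (assumed)
    history: list = field(default_factory=list)

    def record_period(self, errors: int, words: int) -> float:
        """Score one review period and keep it on file."""
        wer = errors / words
        self.history.append(wer)
        return wer

    def needs_attention(self) -> bool:
        """Flag the worker if quality breached the ceiling or drifted
        well past its first-review baseline."""
        if len(self.history) < 2:
            return False
        baseline, latest = self.history[0], self.history[-1]
        return latest > self.max_wer or latest > baseline * 1.5

transcriber = DigitalWorkerReview("audio-transcriber")
transcriber.record_period(errors=40, words=1000)   # 4% WER at launch
transcriber.record_period(errors=90, words=1000)   # 9% WER a year later
print(transcriber.needs_attention())               # drifted past baseline
```

The point is less the specific math than the HR parallel: the algorithm gets a file, a cadence of reviews, and a defined trigger for intervention, just as an employee would.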
Create a central policy and benchmarking authority. The broad call for more AI efforts is commendable, but such a sweeping effort risks becoming siloed. Two considerations will prevent this.
First, a subcommittee or task force should be established to answer the ethical and legal questions that are bound to arise. As intelligent digital systems begin making decisions and doing digital work in the public sector, it’s the taxpayer that ultimately funds these efforts. Therefore, a system of accountability and transparency is important, one that can weigh in on privacy and security issues as they arise. As the AI report describes it, the subcommittee’s mission would be to “prioritize and promote AI R&D, leverage federal data and computing resources for the AI community and train the AI-ready workforce.”
Second, there must also be a systematic way to benchmark AI efforts and outputs and share the findings. When agencies discover an algorithm has a particular bias, for example, there should be a mechanism that enables tech stakeholders from other agencies to learn how to eliminate that bias in their own processes.
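To make the benchmarking idea concrete, one check a shared mechanism might standardize on is a selection-rate parity test. The sketch below applies the common “four-fifths rule” from employment-discrimination practice; the data, the rule choice, and the 80% threshold are illustrative assumptions, not part of the report.

```python
# Illustrative bias benchmark an inter-agency mechanism might publish.
# Applies the "four-fifths rule": each group's selection rate should be
# at least 80% of the highest group's rate. Data here is hypothetical.

from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, selected) pairs -> rate per group."""
    counts = defaultdict(lambda: [0, 0])        # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def fails_four_fifths(decisions, threshold=0.8):
    """Flag disparate impact if any group falls below the threshold
    fraction of the best-off group's selection rate."""
    rates = selection_rates(decisions)
    top = max(rates.values())
    return any(rate < threshold * top for rate in rates.values())

# Group A approved 8 of 10 times; group B approved 5 of 10 times.
sample = ([("A", True)] * 8 + [("A", False)] * 2
          + [("B", True)] * 5 + [("B", False)] * 5)
print(fails_four_fifths(sample))   # B's 50% rate trails A's 80% rate
```

A shared repository of checks like this one would let an agency that uncovers a bias publish the test alongside the finding, so other agencies can run the same audit against their own pipelines.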
A big bet on AI to bolster R&D is sound, but only if the right people and processes accompany it. If not, AI in the public sector is doomed to be little more than flowery language in a think tank-funded report. But fully realized, AI and digital workers in government are poised to bring a whole new level of intelligence and service to the public.
Jon Gacek is head of government, legal and compliance at Veritone.