How can AI help government improve?
Claude Yusti, Tatiana Sokolova, and Alayna Kennedy of IBM contributed to this article.
The IBM Center for The Business of Government and the Partnership for Public Service plan to release two reports that describe the impact and potential performance improvements that artificial intelligence can bring to government in areas such as effective workplaces, skilled workforces, and mission-focused secure programs. The reports will highlight how AI technology can assist agencies in delivering positive outcomes for their constituents, based on practical experiences and lessons learned.
To gain insights for these reports, the Center and the Partnership are hosting a series of four roundtables, which began on July 17 with discussions on the potential of AI to help government and on identifying AI challenges. The roundtables seek to explore pressing issues surrounding AI, share best practices for addressing solvable challenges, and work toward a roadmap for government to maximize the benefits of AI. They are being conducted in a non-attribution setting to promote candid and transparent dialogue among participants.
The Partnership and the Center hosted a second roundtable on Oct. 24, convening experts for a discussion on how individual agencies and governments collectively could build a robust AI workforce, train employees, and best access external expertise to help their agencies succeed. Below is a summary of the key questions and findings from that discussion.
- How can the government be a better user of AI?
- How should AI be used in decision support?
- What are the expectations that citizens have of the government?
- What is the role of senior leadership in a changing workforce environment?
- How should agencies assess important new skills like critical thinking and data analysis in the recruiting process?
- How does the government attract, retain, and utilize technical talent?
- What is the role of government in addressing AI ethics and bias issues?
- How do leaders foster an innovative culture within the government space?
Summary of key findings
Governments face challenges in the implementation of AI technologies for several reasons:
- General lack of understanding of AI needed to confirm fit with mission objectives.
- Perceived risk to jobs.
- Outdated IT infrastructure.
- Policy and cultural barriers to unlocking data necessary for AI projects.
- Lack of skilled technical workers and AI experts.
Many agencies have not yet invested significantly in emerging technologies and still run unprotected legacy systems that cannot process large amounts of data. More importantly, agencies have trouble attracting and retaining skilled staff who can implement AI-related technologies. Both problems stem from a culture that emphasizes pre-set approaches and legacy processes.
Areas of opportunity
These obstacles prevent the public sector from developing an innovative and effective technical workforce that can successfully implement AI technologies. However, the government has the potential to overcome these issues by addressing four main areas of opportunity:
- Empowering the government workforce to embrace AI technologies.
- Attracting and retaining technical talent.
- Applying AI technologies appropriately and effectively.
- Emphasizing a culture of innovation across government leadership.
Empowering the current workforce
Although government employees worry about being replaced by AI, the technology can improve the quality of their work by automating rote tasks and allowing time for more high-value efforts and human interaction. For example, public safety officials can use drones, sensors, and machine learning techniques to avoid places and items that might be dangerous, while freeing up time to focus on complex, human-centered risks; and maritime agencies can use AI to locate ships in distress, speeding time to rescue. Such approaches do not take away jobs; rather, they increase safety and effectiveness. Government leaders can model an environment in which employees see that adopting new technologies will benefit them in their day-to-day work.
The changing nature of work means that agency human capital leaders will see an increased need to understand their current workforce skill mix and plan for future skill requirements as AI is introduced. The transition to AI will lead to people performing more complex tasks that may require systematic and large-scale retraining.
Another concern is understanding the role of existing automation. Some agencies continue to rely on automated processes in a deterministic fashion, associating a single output with a specific pattern of data. Utilizing AI would provide a more probabilistic approach, enabling prediction of possible patterns and risks before they occur, as well as innovative approaches to problem-solving.
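The contrast above can be made concrete with a small sketch. The code below is purely illustrative, with hypothetical data, field names, and weights: a deterministic rule maps one pattern of data to one fixed output, while a simple probabilistic model combines several signals into a risk score, so borderline cases can be ranked and reviewed before a problem occurs.

```python
import math

def deterministic_check(transaction):
    """Rule-based automation: a single pattern yields a single output."""
    return "flag" if transaction["amount"] > 10_000 else "pass"

def probabilistic_score(transaction, weights):
    """A simple logistic model (illustrative): combines several signals
    into a probability of risk rather than a fixed yes/no answer."""
    z = sum(weights[k] * transaction.get(k, 0.0) for k in weights)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical transaction: just under the rule's threshold, but with
# several other risk signals present.
tx = {"amount": 9_500, "new_payee": 1.0, "odd_hour": 1.0}
weights = {"amount": 0.0002, "new_payee": 1.5, "odd_hour": 1.2}  # made-up weights

print(deterministic_check(tx))                     # the fixed rule passes it
print(round(probabilistic_score(tx, weights), 2))  # but the model scores it high-risk
```

The point is not the particular model: it is that a probabilistic approach surfaces risk that a single hard-coded pattern misses.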
Attracting and retaining technical talent
Importantly, agencies can empower employees to adopt AI and learn technical skills by investing in training to build skill capacity across their current workforce. Agencies can also improve recruitment and retention of new tech expertise. This can be done by:
- Prioritizing in-depth skills needed for technical problems (critical thinking, data analysis, and decision-making).
- Attracting and hiring new technical talent quickly, by presenting technical experts with complex challenges to address and offering flexible onboarding programs.
- Fostering partnerships with academic institutions and technology hubs.
New skills are required to address technical problems. Critical thinking, data analysis, decision-making, and appropriate risk-taking all help people who interact with AI systems; however, such skills are often not emphasized in schools or workplaces. Many students probably could fly a drone, but very few develop the critical thinking skills needed to analyze when and how to use that drone in a military operation. Government agencies need to assess these intangible, mission-focused skills. One creative example: basing advanced training and work allocations on who has the best record in fantasy football, an online endeavor that requires complex decision-making and risk-taking.
A younger, more agile, technical workforce often seeks flexible work hours and interesting issues. Talented AI practitioners may want experience in both the government and the private sector, especially earlier in their careers. Agencies can position themselves to be more attractive to newer professionals by providing more flexibility for moving in and out of government, while emphasizing the interesting problems that government works on -- highlighting important government missions will motivate new workers who want their public service to have social impact. This could have the additional benefit of bringing in fresh perspectives to accelerate innovation; agencies could even consider using AI to help identify the best talent from both sectors to address certain problems. Many tech professionals would rather spend time helping federal programs that have significant impact on the nation and world than devote their days to creating another Snapchat filter. Challenges will remain -- for example, even with NASA's brand recognition and strength in connecting its people with the mission, the agency is challenged in today's talent market to attract and compete for high-demand STEM professionals -- but steps like those discussed above can help government move forward.
Academic and industry partnerships, including work with Federally Funded Research and Development Centers such as the Department of Energy National Labs, can be leveraged to temporarily access technical expertise for solving public-sector problems. Agencies can collaborate with partners by sharing data and working with machine learning analysis, custom software tools, and other complex data manipulation. Such partnerships already exist within the government, but they can be expanded to help agencies become more proficient in using AI to solve problems. The results and algorithms produced by such partnerships should be examined by the government to ensure they are ethical, unbiased, and effective.
Applying AI technologies appropriately and effectively
A talented workforce is essential to AI adoption, but AI is not a panacea -- it is another business tool that can help solve problems for government. Leaders should focus on identifying issues and desired outcomes, ensure that the data for addressing those problems is accurate and timely, and then turn to AI as a solution when appropriate. For example, one Rust Belt city decided to adopt AI, but residents cared more about having their potholes fixed in a timely manner. Instead of focusing on this issue and fixing potholes, city officials spent time and money on an AI system that was unnecessary to address their citizens' needs.
Agencies should ask end users and program stakeholders about their real problems and pain points, and research how best to address those challenges, before deciding if AI is the most cost-effective solution.
Emphasizing a culture of innovation across government leadership
Government has significant opportunity for leveraging AI. However, this opportunity requires strong leadership and emphasis on a more flexible, innovative culture. Agency leaders can show employees that technology will improve the quality of their work, not put them out of a job. Leaders can help set expectations surrounding AI -- unconventional skills are needed in the new workforce, and many problems can be solved with critical thinking before turning to AI as needed to address greater complexity. By encouraging a culture of innovation and scientific experimentation, leaders will allow their current workforce to understand the context and potential of AI, as well as attract new talent from academia and other non-traditional sectors that provide diverse perspectives. With a focus on fundamentals, human capital, and transparent outcomes for constituents, agencies can apply AI to solve real problems.
Agencies have shown great interest in AI. Most initiatives are still exploratory and relatively small scale, although some agencies -- including in the Department of Defense and the intelligence community -- have initiated large-scale projects. These early steps likely will continue until agencies have enough experience to understand the criteria and approaches for successful public sector expansion of AI that does not simply replicate commercial experience.
In the next phase, agencies will build on lessons from organizations that have pioneered AI adoption. The insights that government innovators and stakeholders shared at these roundtables will be highlighted in the subsequent reports from the Center and the Partnership to help accelerate this progress, providing agencies asking "Why AI?" and "When?" with actionable insights for scaling the integration of AI into the processes and work of government.
Dan Chenok is executive director of the IBM Center for the Business of Government.