Now that many government agencies have become full-blown proponents of intelligent automation as a way to become more efficient, cut costs and move rote work from people to machines, it’s a good time to step back and see what’s been working and what hasn’t.
Our experience with agencies over the past few years has uncovered some basic truths about the best way to proceed on automation. For example: rushing in without a plan usually doesn’t work. Here's why.
What’s your vision?
We’ve noticed a pattern among some agencies. They hear about the promise of robotic process automation (RPA), for example, buy software licenses and assign a team to run a few small pilots to test its effectiveness -- a good practice to follow. Not surprisingly, these limited pilots often show great results, and agency leaders then want to ramp up quickly. But scaling from a few pilots to many almost overnight can be problematic. Important interim steps get skipped, the complexity of implementing automation at scale goes unappreciated and regrets can ensue.
So we always advise that before jumping headlong into automation -- no matter how promising a pilot appears -- agency leaders step back, think about their vision for automation and set up an agencywide governance process to realize that vision. Additionally, they should be especially sensitive to automation’s impact on the workforce.
Some basic advance planning can save embarrassment later. For example, some agencies eager to install large numbers of bots didn’t realize until they were well into launch that IT policies limited the number of non-human entities that could access agency systems -- and that the number of planned bots far exceeded those limits. For one agency, this oversight meant it wouldn’t reach its ROI target. A good governance process would have flagged the issue well in advance and saved time, effort and investment.
Agencies that adopt a good governance process tend to do a better job of aligning with the critically important IT security group. Governance forces leaders, once they receive their authority to operate, to think about automation’s impact on cybersecurity more broadly. It also reduces the risk of a failed implementation, because governance is likely to steer agencies toward a centralized approach to handling RPA exceptions, for example, while ensuring that the agency follows its change management and software development lifecycle processes.
Don’t act in isolation
We’ve also noticed that senior leaders at many agencies want to move quickly to launch intelligent automation projects -- sometimes without knowing what other parts of the organization are already doing in this area. Initiating projects in isolation can lead to disconnected, uncoordinated efforts that may be redundant, causing confusion about strategy and approach. A governance process that, at a minimum, creates awareness and structure around use cases and agencywide intelligent automation efforts can avoid these problems.
Get a cross-section of leaders
A governance process needn’t be a burden, and many agencies already hold regular cross-functional leadership meetings to address issues facing the agency. Such gatherings can be ideal for creating a structure and a plan, because they typically include the agency CIO, CFO, HR leader and head of operations -- all of whom should have a voice in planning an automation rollout. And keep in mind it’s not all about the technology. Tools that can maximize automation’s speed and efficiency should complement practical considerations such as an agency’s financial reporting requirements and impact on the workforce.
The latter consideration can make or break adoption of automation.
Engage the workforce
One agency had several outside contractors performing relatively rote data-gathering work -- an obvious target for implementing RPA. So the agency installed bots and eventually eliminated the contractors. Makes sense, right? But then one day the automation platform stopped working, and there were no humans on board with any knowledge about the data-gathering process. No one had considered the implications of an entire workforce lacking background on an agency's basic function. Needless to say, the work piled up while the bots were fixed.
In contrast, another agency ran four pilot projects to test the capabilities of an automation approach. As part of its process, it kept the workforce informed with frequent, timely communications. The workforce was wary of automation -- a normal reaction -- but the workers saw the pilot bots prove their worth by eliminating a lot of redundant, unrewarding tasks. So rather than resist automation, the workers came forward with their own recommendations for the next four pilots: tasks no one wanted to do.
The point is that intelligent automation -- whether it’s artificial intelligence or bots or something in between -- carries some baggage. There’s no way around it; employees will naturally have concerns and questions. Automation can’t -- and shouldn’t -- be slipped into an agency under the radar. The best approach is to have a clear strategy, communicate it openly and keep people informed all along the way.