4 questions to help gauge AI readiness
- By Brigham Bechtel
- Jul 11, 2019
The pursuit of artificial intelligence is changing how national security organizations around the globe conduct their operations and manage their activities. China and Russia are making significant investments in AI for military purposes, and the U.S. must keep pace, the Defense Department states in its 2018 AI Strategy summary.
“Failure to adopt AI will result in legacy systems irrelevant to the defense of our people, eroding cohesion among allies and partners, reduced access to markets that will contribute to a decline in our prosperity and standard of living, and growing challenges to societies that have been built upon individual freedoms,” the summary states.
Fundamentally, AI is all about data, whether in the military or the private sector. Bad data in means bad intelligence out, and few organizations can afford bad intelligence less than the military, where lives can be saved or lost based on the quality of the intelligence produced.
The drafting of the National Security Act of 1947, which was designed in part to improve data and intelligence handling, was informed by lessons learned after the attack on Pearl Harbor. "In the view of President Harry S. Truman, the Japanese attack might have been prevented ‘if there had been something like coordination of information in the government,’" the CIA writes.
As defense agencies adopt and build artificial intelligence capabilities and talk of AI-infused drones and autonomous weapons grab headlines, it might be easy to forget the most important piece of any successful AI effort: data management.
To give data management its proper priority -- and so ensure that any intelligence generated by AI is the best it can be -- national security leaders must make certain that data can be integrated, enriched and delivered to help officials achieve situational awareness. Whether the user is a policymaker, a commander, or the leader of an analytics group working counterterrorism issues, AI-powered intelligence almost always comes back to first having access to secure data.
Here are four questions to help assess the readiness of data for AI:
1. Is data in silos? Although data has long been organized into silos, the prospect of using it for analytics heightens the need for ready access. The sheer volume of data, as well as the duplication of it, makes it hard to get data out of those silos fast enough to be actionable. Data in one silo, in one department, in one agency, might be critical to other departments in other agencies. In addition, legacy database technologies are not optimized for easily and quickly handling new data, which is often unstructured. More modern database technologies can ingest data as it is, structured or not. AI algorithms need the context and metadata from all manner of files and reports to operate effectively; the more data that is readily available to them, the better the quality of the AI results.
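The idea of ingesting data "as it is" can be illustrated with a minimal sketch. This is not any particular product's API; the store, its methods, and the sample records are all hypothetical, showing only how structured and unstructured records from different silos can sit side by side in one searchable store once metadata is preserved.

```python
# Hypothetical sketch: one store that ingests records "as is,"
# structured or not, keeping metadata for later analytics.
# UnifiedStore, ingest, and search_metadata are illustrative names.

class UnifiedStore:
    def __init__(self):
        self.records = []

    def ingest(self, content, metadata):
        """Accept any content (a structured dict or raw text) plus metadata."""
        self.records.append({"content": content, "metadata": metadata})

    def search_metadata(self, **criteria):
        """Return records whose metadata matches every given key/value pair."""
        return [
            r for r in self.records
            if all(r["metadata"].get(k) == v for k, v in criteria.items())
        ]

store = UnifiedStore()
# A structured record from one silo...
store.ingest({"lat": 34.5, "lon": 69.1},
             {"source": "agency_a", "type": "geo"})
# ...and an unstructured field report from another, side by side.
store.ingest("Convoy observed moving north at dawn.",
             {"source": "agency_b", "type": "report"})

hits = store.search_metadata(source="agency_b")
```

Because nothing is forced into a rigid schema at ingest time, the second silo's free-text report is immediately discoverable through the same metadata search as the structured record.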
2. Is data secure? Secure means more than being protected against theft or cyberattacks coming through a perimeter. Data security now means that data is neither corrupted nor stolen, even as others access it. Ultimately, the full power of data comes from being able to share it with the broadest possible audience, and safe data sharing is enabled by controlling who sees and works with what data. Newer database technologies offer differing levels of granular controls, including those that limit data access to certain people, full data encryption, redaction and anonymization. Having more secure data means having better data available for AI algorithms.
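The granular controls described above -- role-based access, redaction and anonymization -- can be sketched as follows. The roles, field rules and record fields here are illustrative assumptions, not a real security model: the point is only that the same record can be served differently depending on who is viewing it.

```python
# Hypothetical sketch of granular access control: one record, rendered
# with fields redacted or anonymized according to the viewer's role.
# Role names and per-field rules are illustrative assumptions.

import copy
import hashlib

RULES = {
    "analyst": {"redact": [], "anonymize": ["informant"]},
    "partner": {"redact": ["location"], "anonymize": ["informant"]},
}

def view(record, role):
    # Unknown roles see everything redacted by default.
    rules = RULES.get(role, {"redact": list(record), "anonymize": []})
    out = copy.deepcopy(record)
    for field in rules["redact"]:
        out[field] = "[REDACTED]"
    for field in rules["anonymize"]:
        # Replace the value with a stable pseudonym (a truncated hash),
        # so the same source can still be correlated across reports.
        out[field] = hashlib.sha256(out[field].encode()).hexdigest()[:8]
    return out

report = {"location": "Grid 41S", "informant": "J. Doe",
          "summary": "Convoy observed moving north."}
partner_view = view(report, "partner")
analyst_view = view(report, "analyst")
```

Defaulting unknown roles to full redaction reflects the article's point that safe sharing with the broadest possible audience starts from controlling exactly who sees what.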
3. Is data well governed? To have faith in data, it must be well governed. That means there is assurance that the data has not been changed or, if it has, that there is a record of when, how and by whom. Data governance policies -- including privacy policies -- must travel with the data, so that as it moves the policies remain in force and stray, ungoverned copies cannot gum up data flow and access.
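A record of "when, how and by whom" is essentially an audit trail. A minimal sketch, with illustrative class and field names, might keep every change appended to a log alongside the data itself, so earlier states can always be reconstructed:

```python
# Hypothetical sketch of governed data: every update is appended to an
# audit trail noting when, who, why, and the old and new values.
# GovernedRecord and its fields are illustrative names.

import datetime

class GovernedRecord:
    def __init__(self, data):
        self.data = dict(data)
        self.audit = []  # chronological log of every change

    def update(self, field, value, who, why):
        self.audit.append({
            "when": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "who": who,
            "why": why,
            "field": field,
            "old": self.data.get(field),
            "new": value,
        })
        self.data[field] = value

rec = GovernedRecord({"status": "draft"})
rec.update("status", "verified", who="analyst_7", why="source corroborated")
```

Because the trail records old values as well as new ones, a reviewer can walk the log backward to recover any prior state of the record -- the assurance the article describes.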
4. Is data accessible in real time? Access to mission-critical data is improved by a platform that enables real-time understanding and better decision-making. Consider, for example, modern aircraft equipped with vast arrays of sensors that report on the operating state of the plane while also collecting massive volumes of mission-critical data. Secure, agile database technologies transport, integrate and present that data to commanders for better situational awareness and a fuller intelligence picture. On these data management platforms, real-time alert technology is critical, as are advanced search capabilities.
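Real-time alerting on a stream of sensor readings can be sketched in a few lines. The sensor names and thresholds below are invented for illustration; the point is that each reading is checked as it arrives, so an alert fires immediately rather than waiting for a batch report.

```python
# Hypothetical sketch of real-time alerting on streaming sensor data.
# Sensor names and threshold values are illustrative assumptions.

THRESHOLDS = {"engine_temp": 900, "fuel_level": 10}

def check_reading(reading, alerts):
    """Append an alert if a reading breaches its threshold."""
    sensor, value = reading["sensor"], reading["value"]
    limit = THRESHOLDS.get(sensor)
    if limit is None:
        return  # no rule configured for this sensor
    if sensor == "fuel_level" and value < limit:
        alerts.append(f"ALERT: {sensor} low at {value}")
    elif sensor != "fuel_level" and value > limit:
        alerts.append(f"ALERT: {sensor} high at {value}")

alerts = []
stream = [
    {"sensor": "engine_temp", "value": 850},  # within limits, no alert
    {"sensor": "engine_temp", "value": 930},  # over threshold -> alert
    {"sensor": "fuel_level", "value": 8},     # under threshold -> alert
]
for reading in stream:
    check_reading(reading, alerts)
```

In a production platform the stream would arrive over a message bus and alerts would be pushed to operators, but the per-reading check is the core of the pattern.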
While technologies have changed since the time of Pearl Harbor, the need for secure, complete, real-time data that can be shared remains the same. Then, as now, getting a 360-degree view of data enables faster communication and better decision-making with or without AI.
Above all else, the way governments provide, maintain and use data has a unique place in society, and maintaining trust in federal data policies is pivotal to the democratic process. The government needs a coordinated and integrated approach to using data to deliver on its mission, serve the public and steward resources while respecting privacy and confidentiality.
Brigham Bechtel is the chief strategy officer for the public sector at MarkLogic.