'Grave misperceptions' of DOD's AI work sidetrack key conversations
- By Lauren C. Williams
- Mar 29, 2019
Although the Defense Department is working on a number of artificial intelligence programs, none is likely better known than Project Maven, which was designed to improve military decision-making by applying big data analytics and machine learning to aerial imagery captured by drones. Employees at Google, which was participating in the program, last year publicly urged the company's CEO to cancel the project, arguing that Google "should not be in the war."
Fears of AI-based programs are creating "grave misperceptions about what DOD is actually working on" and pushing much-needed discussions off track, according to Air Force Lt. Gen. John "Jack" Shanahan, chief of the Joint Artificial Intelligence Center (JAIC).
"There is an assumption, in some quarters, that the DOD is in a back laboratory somewhere in the basement of a building [and] has got a free-will agent AGI -- artificial general intelligence -- that's going to roam indiscriminately across the battlefield. We do not," Shanahan said in a keynote address at the AFCEA AI and machine learning summit March 27.
AI is a tool for specific problems, Shanahan said, and like every other technology used by the DOD, it will be evaluated for legal, ethical and moral concerns.
The JAIC is focused on four capability areas: intelligence and perception; predictive maintenance with an Army pilot looking at mechanical issues, maintenance, performance and personnel management; disaster relief and humanitarian aid; and cyberspace. In fiscal 2020, the JAIC will also work on an effort targeting peer competitors and the "full spectrum of DOD operations."
Shanahan wouldn't discuss the effort beyond it being tied to the National Defense Strategy and noting that "this is so important, it's potentially so big, that we're going to spend more time on the problem-framing part of this so when we get our funding and people in fiscal year '20 we can accelerate."
Shanahan made similar points in recent testimony before the Senate Armed Services Subcommittee on Emerging Threats and Capabilities, pushing the JAIC's role and its partnership with the Defense Innovation Board, which had its first public meeting earlier this month.
"To underscore our focus on ethics, humanitarian considerations, and both short-term and long-term AI safety, JAIC is working closely with the Defense Innovation Board (DIB) to foster a broad dialogue and provide input into the development of AI principles for defense," Shanahan said, according to written testimony for that March 12 hearing.
During the question-and-answer portion of his March 27 keynote, Shanahan said that policy addressing the legal, moral and ethical concerns around AI already exists, but there will need to be an element of transparency to get conversations back on track.
"We know there's work to do to continue a healthy dialogue about what our value system is, how we do adhere to international norms and how some of our potential adversaries are likely not to," Shanahan said. And while these discussions are important, he said, it "doesn't hold us back from moving forward with AI across the full range of DOD missions."
Lauren C. Williams is senior editor for FCW and Defense Systems, covering defense and cybersecurity.
Prior to joining FCW, Williams was the tech reporter for ThinkProgress, where she covered everything from internet culture to national security issues. In past positions, Williams covered health care, politics and crime for various publications, including The Seattle Times.
Williams graduated with a master's in journalism from the University of Maryland, College Park and a bachelor's in dietetics from the University of Delaware. She can be contacted at [email protected], or follow her on Twitter @lalaurenista.