Help wanted: An AI ethicist for the Defense Department
- By Lauren C. Williams
- Sep 06, 2019
As an indication of the importance of artificial intelligence to the military, the Defense Department plans to hire an AI ethicist to help guide the development and application of AI-based technologies.
"We are going to bring on someone who has a deep background in ethics," said Lt. Gen. Jack Shanahan, who leads the Joint Artificial Intelligence Center, adding that the hire will tag-team with DOD lawyers to make sure AI ethics can be "baked in."
JAIC is building an AI ethics process with input from the Defense Innovation Board, the National Security Council and the Office of the Secretary of Defense Policy to address AI ethics policy concerns and offer recommendations to the defense secretary.
JAIC, which is only about a year old, is still "trying to fill a lot of gaps," Shanahan said, but installing an ethicist is a top priority. "One of the positions we are going to fill will be someone who's not just looking at technical standards, but who is an ethicist," he said.
Last year, Google employees raised concerns over the company's ethical responsibility for its work on DOD's Maven project, which uses AI to improve the identification of images from military drone video. The backlash from more than 3,000 employees, who signed a petition saying "that Google should not be in the business of war" and citing concerns over "biased and weaponized AI," led the company to pass on renewing its contract.
"In Maven, these [ethical AI] questions really did not really rise to the surface every day because it was really still humans looking at object detection classification and tracking," Shanahan said. "There were no weapons involved in that."
The decision to hire an ethicist was not prompted by Google's withdrawal from Maven, but Shanahan admitted DOD probably should be more involved in the ethical-AI conversation.
"There are always concerns in any workforce about what is this technology going to be used for," he said. As the Department of Defense, "it's incumbent upon us, I think we have to do a better job, quite honestly, to provide a little bit more clarity and transparency about what we're doing with artificial intelligence without having to delve into deep operational details."
Developing and sharing international norms and standards will be key going forward as AI becomes more deeply integrated in everyday life -- and on the battlefield.
Shanahan told reporters he was "strongly in favor" of international discussions on AI norms and a DOD-State Department partnership to "understand what the future should be in terms of this question of norms and behavior" with AI, but he doesn't think there should be "outright bans" yet, as the technology is still so immature.
Even careful and deliberate consideration of the legal, moral and ethical consequences of a technology doesn't guarantee it won't be used, but it may at least help pre-empt surprises later, said Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University.
"Any communication, any information … is better than what we have now," including DOD being open about the red lines it won't cross, Lin said.
Two years ago, Shanahan said, he couldn't have conceived of hiring an ethicist, or emphasizing the unintended moral and ethical consequences of AI technology. Now it's one of the JAIC's chief concerns. He said he hopes more transparency going forward will reassure companies and the public, especially as the JAIC expects to produce more AI capabilities in 2020.
"Humans are fallible; in combat, humans are particularly fallible. And mistakes will happen. AI can help mitigate the chances of those mistakes -- not eliminate, but reduce," Shanahan said. "Maybe we have a lower incidence of civilian casualties because we’re using artificial intelligence."
A longer version of this article was first posted to FCW, a sibling site to GCN.
Lauren C. Williams is a staff writer at FCW covering defense and cybersecurity.
Prior to joining FCW, Williams was the tech reporter for ThinkProgress, where she covered everything from internet culture to national security issues. In past positions, Williams covered health care, politics and crime for various publications, including The Seattle Times.
Williams graduated with a master's in journalism from the University of Maryland, College Park and a bachelor's in dietetics from the University of Delaware. She can be contacted at email@example.com, or follow her on Twitter @lalaurenista.