Mitigating the risks of military AI
- By Caroline Mohan
- Aug 17, 2018
What: “The Cautious Path to Strategic Advantage: How Militaries Should Plan for AI,” a research study done by the Electronic Frontier Foundation.
Why: While artificial intelligence can play many roles in the automation of combat, weighing the risks -- particularly surrounding machine learning -- and compensating for them accordingly is critical.
Findings: The risks around AI are significant, EFF researchers warned. Machine learning can be easily fooled, while reinforcement learning -- or learning from environmental interactions -- produces innovative but unpredictable AI. Factor in the inherent cybersecurity advantage that attackers have over defenders, and using AI in the military could lead to “catastrophic forms of failure that are hard to mitigate.”
However, these risks can be mitigated in several ways, which can be generalized into two principles:
1. Create an information network amongst other communities, labs, and nations.
Creating international institutions and standards for managing AI and its risks, as well as engaging in military-to-military dialogue to share research, will prevent accidental conflict and boost understanding of "instruments, agreements, and treaties."
2. Focus on defensive strategy.
Focusing machine learning development on processes outside the "kill chain," such as logistics, systems diagnostics, and defensive cybersecurity, gives the defender greater control in a virtual combat zone and consequently minimizes risk. Additionally, conducting more intensive research on ML predictability, robustness, and safety will give the US a long-term advantage in the current cyber environment that favors high-risk offense. "The national security community has a key role to play in changing the balance between cyber offense and defense."
The full report is available from the Electronic Frontier Foundation.