SHASAI (Securing and Hardening AI Systems Against Intrusions) is a recently launched European Union program to improve the security of artificial intelligence (AI). The program responds to growing concern about attacks on the AI systems being developed and deployed worldwide, including cyber-attacks and data manipulation, risks that could make AI systems unsafe to use across Europe.
The SHASAI Program will bring together top-level research organizations, cybersecurity professionals, and technology companies to develop tools that protect AI systems from vulnerabilities malicious actors may try to exploit.
Artificial intelligence systems are becoming more prevalent in critical sectors such as healthcare, transportation, finance, and government services.
AI systems offer many advantages; however, they also introduce new risks. Cybercriminals can attack them in several ways, including manipulating model outputs, stealing sensitive data, and causing physical harm when AI is integrated into systems such as autonomous vehicles.
SHASAI and the impact it will have on AI systems
SHASAI will concentrate on three primary areas:
- Robustness testing – Evaluating AI models to assess how well they can withstand adversarial attacks and data poisoning.
- Secure deployment – Developing best practices for the safe integration of AI systems into real-world applications.
- Real-time monitoring – Developing systems capable of detecting and responding to threats as they occur (see the sketch below).
By addressing these three areas, SHASAI aims to establish a global standard for the security and reliability of AI systems.
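To illustrate what real-time monitoring of this kind might look like, here is a minimal, hypothetical Python sketch that flags incoming inputs whose features fall far outside the statistics of the training data. The data shapes, the 4-sigma threshold, and the `drift_alert` helper are illustrative assumptions, not anything published by the SHASAI Program.

```python
import numpy as np

rng = np.random.default_rng(42)

# Statistics of the data the model was trained on (illustrative values).
train = rng.normal(loc=0.0, scale=1.0, size=(10_000, 8))
mu, sigma = train.mean(axis=0), train.std(axis=0)

def drift_alert(x, threshold=4.0):
    """Flag an input whose features sit far outside the training
    distribution; the 4-sigma threshold is an assumed tuning choice."""
    z = np.abs((x - mu) / sigma)
    return bool(np.max(z) > threshold)

normal_input = rng.normal(size=8)
poisoned_input = normal_input.copy()
poisoned_input[3] = 9.0  # one feature pushed far out of distribution

print(drift_alert(normal_input))    # expected: False
print(drift_alert(poisoned_input))  # expected: True
```

A production monitor would track richer statistics and feed alerts into an incident-response pipeline, but the core idea is the same: compare live inputs against a known-good baseline.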
Cybercriminals can exploit weaknesses in machine learning algorithms
Attacks on AI systems can have serious ramifications, particularly in sectors such as healthcare, where incorrect model output can directly affect the quality of patient care. One prominent concern is adversarial attacks, in which cybercriminals feed a model carefully crafted inputs to manipulate its decision-making. Another is model inversion, in which attackers reconstruct the private data used to train a model.
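To make the adversarial-attack idea concrete, the following sketch applies a fast-gradient-sign-style perturbation to a toy linear classifier. The random weights, the `predict` helper, and the perturbation budget `eps` are all hypothetical illustrations of the general technique, not anything drawn from SHASAI itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier standing in for a deployed model (hypothetical).
w = rng.normal(size=20)   # model weights
b = 0.0

def predict(x):
    """Probability of the positive class under the toy model."""
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# A benign input near the decision boundary, classified as negative.
x = -0.1 * w / np.linalg.norm(w)

# FGSM-style attack: nudge each feature in the direction that most
# increases the model's score. For a linear model, that direction is
# simply the sign of the corresponding weight.
eps = 0.05                # attacker's perturbation budget (assumed)
x_adv = x + eps * np.sign(w)

print(f"clean prediction:       {predict(x):.3f}")      # below 0.5
print(f"adversarial prediction: {predict(x_adv):.3f}")  # above 0.5
print(f"max feature change:     {np.max(np.abs(x_adv - x)):.3f}")
```

Even though each feature changes by at most `eps`, the perturbation is aligned with the model's weights, so the small changes accumulate and flip the prediction; real attacks apply the same principle to deep networks using gradients.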
Measures SHASAI will take to prevent these threats
To mitigate these threats, SHASAI will develop and implement advanced encryption, secure methods for training AI models, and robust authentication mechanisms.
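As a hypothetical illustration of one such authentication mechanism, the sketch below uses Python's standard `hmac` module to sign a serialized model artifact and to verify the tag before the model is loaded, so a tampered file is rejected. The key and payload are placeholders; a real deployment would keep the key in a secrets manager rather than in source code.

```python
import hashlib
import hmac

# Shared secret held by the team that signs released model artifacts.
SIGNING_KEY = b"example-signing-key"   # placeholder, assumption

def sign_artifact(model_bytes: bytes) -> str:
    """Produce an HMAC-SHA256 tag for a serialized model."""
    return hmac.new(SIGNING_KEY, model_bytes, hashlib.sha256).hexdigest()

def verify_artifact(model_bytes: bytes, tag: str) -> bool:
    """Check the tag before loading a model into production."""
    expected = sign_artifact(model_bytes)
    return hmac.compare_digest(expected, tag)

model_bytes = b"\x00serialized-model-weights\x00"   # stand-in payload
tag = sign_artifact(model_bytes)

print(verify_artifact(model_bytes, tag))                # True
print(verify_artifact(model_bytes + b"tampered", tag))  # False
```

Verifying artifacts this way helps prevent an attacker from silently swapping in a poisoned model between training and deployment.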
The SHASAI Program will be funded through the EU’s Horizon Europe program, with initial funding of €50 million. It will involve collaboration among top-level universities, cybersecurity firms, and AI developers located in each of the EU member states, with the ultimate goal of producing open-source tools and best practices that can be broadly adopted.
In addition to creating these tools and best practices, the EU plans to collaborate with international partners to ensure that SHASAI’s outputs are compatible with global standards, reflecting a growing recognition that AI security is a global issue, not merely a regional one.
For businesses, SHASAI will serve as a guide for secure AI adoption
Through its emphasis on resilience and transparency, the EU intends to foster public trust in AI technologies while encouraging innovation. The SHASAI Program represents a significant step in the EU’s plan to lead in the development of secure, ethically grounded AI systems. As AI becomes more integral to daily life, ensuring that these systems are safe is essential, and with SHASAI the EU is positioning itself at the forefront of the global effort to protect AI from emerging threats.
