The U.K.’s National Cyber Security Centre (NCSC), along with the U.S.’s Cybersecurity and Infrastructure Security Agency (CISA) and agencies from 16 other countries, has released new guidelines on the security of artificial intelligence (AI) systems. These guidelines, called the “Guidelines for Secure AI System Development,” aim to give developers guidance across the entire life cycle of AI systems, from design through deployment and operation, so that security remains a core component at every stage.
At a glance: The Guidelines for Secure AI System Development
The Guidelines for Secure AI System Development focus on ensuring that AI models function as intended, are available when needed, and do not reveal sensitive data to unauthorized parties. The guidelines promote a “secure by default” approach, which emphasizes taking ownership of security outcomes for customers, embracing transparency and accountability, and making security a top business priority.
Securing the four key stages of the AI development life cycle
The guidelines are structured into four sections, each corresponding to a different stage of the AI system development life cycle: secure design, secure development, secure deployment, and secure operation and maintenance. These sections provide specific recommendations for each stage, such as conducting threat modeling during the design phase, ensuring supply chain security during development, safeguarding infrastructure during deployment, and effectively managing updates during operation and maintenance.
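To make the supply chain recommendation concrete: one simple practice a development team might adopt is verifying the cryptographic hash of a third-party model artifact before loading it, so that a tampered or substituted file is rejected. The sketch below is a minimal Python illustration of that idea, not a procedure prescribed by the guidelines; the file path and expected digest are hypothetical placeholders.

```python
import hashlib
from pathlib import Path

# Hypothetical values for illustration only: in practice the expected
# digest would be published by the model provider through a trusted,
# out-of-band channel (e.g., a signed release manifest).
MODEL_PATH = Path("models/sentiment-classifier-v1.bin")
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of_file(MODEL_PATH)
if actual != EXPECTED_SHA256:
    # Refuse to proceed if the artifact does not match its published digest.
    raise RuntimeError(
        f"Model artifact failed integrity check: expected "
        f"{EXPECTED_SHA256}, got {actual}"
    )
# Only load the model once its integrity has been verified.
```

Integrity checks like this address only one slice of supply chain security; the guidelines also cover areas such as documenting data and model provenance and protecting the infrastructure the models run on.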
Guidance for all AI systems and related stakeholders
The guidelines are applicable to all types of AI systems, not just the cutting-edge models discussed at the recent AI Safety Summit. They are also relevant to all professionals involved in AI, including developers, data scientists, managers, decision-makers, and other AI “risk owners.” The NCSC encourages all stakeholders to read the guidelines to make informed decisions about the design, development, deployment, and operation of their AI systems.
Building on the outcomes of the AI Safety Summit
The release of these guidelines follows the AI Safety Summit, where representatives from 28 countries signed the Bletchley Declaration on AI safety. The declaration emphasizes the importance of designing and deploying AI systems safely and responsibly, with a focus on collaboration and transparency. The newly published guidelines align with other international commitments, such as the G7 Hiroshima AI Process and the U.S.’s Voluntary AI Commitments and Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.
Reactions to these AI guidelines from the cybersecurity industry
The publication of the AI guidelines has been well-received by cybersecurity experts and analysts. Toby Lewis, global head of threat analysis at Darktrace, describes the guidelines as a welcome blueprint for safe and trustworthy AI systems. He emphasizes the need for AI providers to secure their data and models from attackers, and for AI users to apply the right AI to the right task. Georges Anidjar, Southern Europe vice president at Informatica, sees the guidelines as a significant step towards addressing cybersecurity challenges in the rapidly evolving field of AI. He highlights the importance of instilling security measures at the core of AI development to create a safer digital landscape for businesses and individuals.
The release of the Guidelines for Secure AI System Development marks an important step towards ensuring the safe and responsible development of AI systems. The guidelines provide valuable direction for developers and other stakeholders involved in AI projects, emphasizing security throughout the entire life cycle of AI systems. With growing recognition of the risks posed by AI, they contribute to a global effort to harness the technology’s potential while mitigating its risks.
Photo: Freepik.com