Agencies from 18 countries, including the US, have endorsed new UK-developed guidelines on AI cyber security, confirming their intention to co-seal them.
The guidelines, published on Monday (4th December 2023), aim to help organisations design and deploy AI systems that are secure, trustworthy, and ethical. They are based on seven principles covering aspects such as risk management, data protection, human oversight, and transparency.
The NCSC says the guidelines were developed in collaboration with experts from academia, industry, and government, and were informed by international standards and best practices. They are intended to complement existing regulations and frameworks, such as the UK’s AI Ethics Framework and the EU’s AI Regulation.
The NCSC’s Director of Cyber Security, Dr Ian Levy, said:
“AI is a powerful and transformative technology that has the potential to bring significant benefits to society and the economy. But it also poses new and complex challenges for security, privacy and ethics. That’s why we have developed these guidelines, which provide a practical and flexible approach to help organisations design and deploy AI systems that are secure, trustworthy and ethical. We hope these guidelines will help raise awareness of the security implications of AI, and encourage organisations to adopt good practices that will enhance the resilience and reliability of their AI systems.”
The NCSC has committed to engaging with stakeholders and updating the guidelines as the AI landscape evolves, and invites feedback from organisations that use or develop AI systems, as well as from researchers and policy makers.
Read the full story on the NCSC’s website: https://www.ncsc.gov.uk/guidance/ai-security-guidelines