The US Cybersecurity and Infrastructure Security Agency (CISA) and the UK National Cyber Security Centre (NCSC) released the Guidelines for Secure AI System Development to address the integration of artificial intelligence (AI), cybersecurity, and critical infrastructure.
The Guidelines underline the significance of implementing Secure by Design principles and offer crucial advice for AI system development, complementing the U.S. Voluntary Commitments to Ensuring Safe, Secure, and Trustworthy AI.
The approach prioritizes customers owning security outcomes, radical transparency and accountability, and organizational structures in which secure design is a top priority.
“Implementing these guidelines will help providers build AI systems that function as intended, are available when needed, and work without revealing sensitive data to unauthorized parties,” according to the guidelines released by CISA and NCSC.
The Guidelines for Secure AI System Development
AI systems introduce novel security vulnerabilities that must be considered alongside standard cybersecurity risks. Because AI is developing so rapidly, security is frequently neglected in favor of other factors, such as speed of development.
Within the AI system development life cycle, the guidelines are divided into four major areas: secure design, secure development, secure deployment, and secure operation and maintenance.
Secure Design
This section covers guidelines for the design phase of the AI system development life cycle, including:
Raise staff awareness of threats and risks
Model the threats to your system
Design your system for security as well as functionality and performance
Consider security benefits and trade-offs when selecting your AI model
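As a lightweight illustration of the threat-modeling guideline above, the sketch below enumerates AI-specific threats per system component and reports which ones lack a recorded mitigation. The component and threat names are hypothetical examples for illustration, not taken from the guidelines themselves:

```python
# Minimal threat-model sketch for an AI system (illustrative only).
# Component and threat names here are hypothetical examples.
THREAT_MODEL = {
    "training pipeline": ["data poisoning", "dependency compromise"],
    "model artifact": ["model theft", "tampering with stored weights"],
    "inference API": ["prompt injection", "model inversion", "denial of service"],
}

def unmitigated_threats(mitigations: dict) -> dict:
    """Return, per component, the threats with no recorded mitigation."""
    return {
        component: [t for t in threats if t not in mitigations.get(component, set())]
        for component, threats in THREAT_MODEL.items()
    }

if __name__ == "__main__":
    # Suppose only DoS on the inference API has a mitigation so far.
    recorded = {"inference API": {"denial of service"}}
    for component, open_threats in unmitigated_threats(recorded).items():
        print(f"{component}: {open_threats}")
```

Even a table this simple makes the design trade-off discussion concrete: each unmitigated entry is a decision the team must consciously accept or address.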
Secure Development
This section includes suggestions relevant to the development stage of the AI system development life cycle such as:
Secure your supply chain
Identify, track, and protect your assets
Document your data, models, and prompts
Manage your technical debt
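One concrete way to "identify, track, and protect your assets" is to record cryptographic checksums of model and data artifacts at release time, so later tampering can be detected. A minimal sketch using Python's standard library (the function names are illustrative, not from the guidelines):

```python
import hashlib
from pathlib import Path

def sha256_bytes(data: bytes) -> str:
    """SHA-256 digest of an in-memory payload."""
    return hashlib.sha256(data).hexdigest()

def sha256_file(path: Path, chunk_size: int = 8192) -> str:
    """SHA-256 digest of a file, read in chunks to handle large artifacts."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_artifact(path: Path, expected_digest: str) -> bool:
    """Compare a stored artifact against the checksum recorded at release."""
    return sha256_file(path) == expected_digest
```

The digest would typically be computed once when a model is published and then checked every time the artifact is loaded into an inference service.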
Secure Deployment
This section includes guidelines that apply to the deployment stage of the AI system development life cycle such as:
Secure your infrastructure
Protect your model continuously
Develop incident management procedures
Release AI responsibly
Make it easy for users to do the right things
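The "make it easy for users to do the right things" principle is often realized as secure-by-default configuration: the safe option requires no action, and weakening it must be explicit and visible. A hypothetical sketch, with invented field names chosen purely for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DeploymentConfig:
    """Illustrative deployment settings; field names are hypothetical."""
    require_auth: bool = True          # authentication on by default
    rate_limit_per_minute: int = 60    # throttling enabled by default
    log_prompts: bool = True           # audit logging on by default

    def weakened_settings(self) -> list:
        """List any settings explicitly changed from their secure default."""
        weakened = []
        if not self.require_auth:
            weakened.append("require_auth")
        if self.rate_limit_per_minute <= 0:
            weakened.append("rate_limit_per_minute")
        if not self.log_prompts:
            weakened.append("log_prompts")
        return weakened
```

A deployment pipeline could refuse to release, or at least warn loudly, whenever `weakened_settings()` is non-empty.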
Secure Operation and Maintenance
This section covers guidelines for the secure operation and maintenance phase of the AI system development life cycle, including:
Monitor your system’s behavior
Monitor your system’s input
Follow a secure-by-design approach to updates
Collect and share lessons learned
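Monitoring system inputs can start with simple heuristics that flag anomalous requests for human review, such as unusually long prompts or phrases associated with prompt injection. A deliberately simplified sketch; the threshold and patterns are illustrative assumptions, and a real deployment would rely on a maintained ruleset:

```python
import re

# Illustrative patterns only; not an exhaustive or production ruleset.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all |any )?previous instructions", re.IGNORECASE),
    re.compile(r"reveal (your |the )?system prompt", re.IGNORECASE),
]
MAX_PROMPT_CHARS = 4000  # assumed limit for this sketch

def flag_input(prompt: str) -> list:
    """Return reasons this input should be logged for review (empty if none)."""
    reasons = []
    if len(prompt) > MAX_PROMPT_CHARS:
        reasons.append("prompt exceeds length limit")
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(prompt):
            reasons.append(f"matched pattern: {pattern.pattern}")
    return reasons
```

Flagged inputs would feed the incident-management and lessons-learned processes described above rather than being silently dropped.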
CISA strongly advises all stakeholders, including data scientists, developers, managers, decision-makers, and risk owners, to read this guidance to aid in their decision-making about the development, deployment, and operation of their AI and machine learning systems.