Learn from our AI Security Experts
Discover every model. Secure every workflow. Prevent AI attacks - without slowing innovation.


min read
Integrating HiddenLayer’s Model Scanner with Databricks Unity Catalog
As machine learning becomes more embedded in enterprise workflows, model security is no longer optional. From training to deployment, organizations need a streamlined way to detect and respond to threats that might lurk inside their models. The integration between HiddenLayer’s Model Scanner and Databricks Unity Catalog provides an automated, frictionless way to monitor models for vulnerabilities as soon as they are registered. This approach ensures continuous protection without slowing down your teams.
Introduction
As machine learning becomes more embedded in enterprise workflows, model security is no longer optional. From training to deployment, organizations need a streamlined way to detect and respond to threats that might lurk inside their models. The integration between HiddenLayer’s Model Scanner and Databricks Unity Catalog provides an automated, frictionless way to monitor models for vulnerabilities as soon as they are registered. This approach ensures continuous protection without slowing down your teams.
In this blog, we’ll walk through how this integration works, how to set it up in your Databricks environment, and how it fits naturally into your existing machine learning workflows.
Why You Need Automated Model Security
Modern machine learning models are valuable assets. They also present new opportunities for attackers. Whether you are deploying in finance, healthcare, or any data-intensive industry, models can be compromised with embedded threats or exploited during runtime. In many organizations, models move quickly from development to production, often with limited or no security inspection.
This challenge is addressed through HiddenLayer’s integration with Unity Catalog, which automatically scans every new model version as it is registered. The process is fully embedded into your workflow, so data scientists can continue building and registering models as usual. This ensures consistent coverage across the entire lifecycle without requiring process changes or manual security reviews.
This means data scientists can focus on training and refining models without having to manually initiate security checks or worry about vulnerabilities slipping through the cracks. Security engineers benefit from automated scans that are run in the background, ensuring that any issues are detected early, all while maintaining the efficiency and speed of the machine learning development process. HiddenLayer’s integration with Unity Catalog makes model security an integral part of the workflow, reducing the overhead for teams and helping them maintain a safe, reliable model registry without added complexity or disruption.
Getting Started: How the Integration Works
To install the integration, contact your HiddenLayer representative to obtain a license and access the installer. Once you’ve downloaded and unzipped the installer for your operating system, you’ll be guided through the deployment process and prompted to enter environment variables.
Once installed, this integration monitors your Unity Catalog for new model versions and automatically sends them to HiddenLayer’s Model Scanner for analysis. Scan results are recorded directly in Unity Catalog and the HiddenLayer console, allowing both security and data science teams to access the information quickly and efficiently.

Figure 1: HiddenLayer & Databricks Architecture Diagram
The integration is simple to set up and operates smoothly within your Databricks workspace. Here’s how it works:
- Install the HiddenLayer CLI: The first step is to install the HiddenLayer CLI on your system. Running this installation will set up the necessary Python notebooks in your Databricks workspace, where the HiddenLayer Model Scanner will run.
- Configure the Unity Catalog Schema: During the installation, you will specify the catalogs and schemas that will be used for model scanning. Once configured, the integration will automatically scan new versions of models registered in those schemas.
- Automated Scanning: A monitoring notebook called hl_monitor_models runs on a scheduled basis. It checks for newly registered model versions in the configured schemas. If a new version is found, another notebook, hl_scan_model, sends the model to HiddenLayer for scanning.
- Reviewing Scan Results After scanning, the results are added to Unity Catalog as model tags. These tags include the scan status (pending, done, or failed) and a threat level (safe, low, medium, high, or critical). The full detection report is also accessible in the HiddenLayer Console. This allows teams to evaluate risk without needing to switch between systems.
Why This Workflow Works
This integration helps your team stay secure while maintaining the speed and flexibility of modern machine learning development.
- No Process Changes for Data Scientists
Teams continue working as usual. Model security is handled in the background. - Real-Time Security Coverage
Every new model version is scanned automatically, providing continuous protection. - Centralized Visibility
Scan results are stored directly in Unity Catalog and attached to each model version, making them easy to access, track, and audit. - Seamless CI/CD Compatibility
The system aligns with existing automation and governance workflows.
Final Thoughts
Model security should be a core part of your machine learning operations. By integrating HiddenLayer’s Model Scanner with Databricks Unity Catalog, you gain a secure, automated process that protects your models from potential threats.
This approach improves governance, reduces risk, and allows your data science teams to keep working without interruptions. Whether you’re new to HiddenLayer or already a user, this integration with Databricks Unity Catalog is a valuable addition to your machine learning pipeline. Get started today and enhance the security of your ML models with ease.
All Resources

Securing Your AI: A Guide for CISOs PT4
As AI continues to evolve at a fast pace, implementing comprehensive security measures is vital for trust and accountability. The integration of AI into essential business operations and society underscores the necessity for proactive security strategies. While challenges and concerns exist, there is significant potential for leaders to make strategic, informed decisions. By pursuing clear, actionable guidance and staying well-informed, organizational leaders can effectively navigate the complexities of security for AI. This proactive stance will help reduce risks, ensure the safe and responsible use of AI technologies, and ultimately promote trust and innovation.
Introduction
As AI continues to evolve at a fast pace, implementing comprehensive security measures is vital for trust and accountability. The integration of AI into essential business operations and society underscores the necessity for proactive security strategies. While challenges and concerns exist, there is significant potential for leaders to make strategic, informed decisions. By pursuing clear, actionable guidance and staying well-informed, organizational leaders can effectively navigate the complexities of security for AI. This proactive stance will help reduce risks, ensure the safe and responsible use of AI technologies, and ultimately promote trust and innovation.
In this final installment, we will explore essential topics for comprehensive AI systems: data security and privacy, model validation, secure development practices, continuous monitoring, and model explainability. Key areas include encryption, access controls, anonymization, and evaluating third-party vendors for security compliance. We will emphasize the importance of red teaming training, which simulates adversarial attacks to uncover vulnerabilities. Techniques for adversarial testing and model validation will be discussed to ensure AI robustness. Embedding security best practices throughout the AI development lifecycle and implementing continuous monitoring with a strong incident response strategy are crucial.
This guide will provide you with the necessary tools and strategies to fortify your AI systems, making them resilient against threats and reliable in their operations. Follow our series as we cover understanding AI environments, governing AI systems, strengthening AI systems, and staying up-to-date on AI developments.
Step 1: User Training and Awareness
Continuous education is vital. Conduct regular training sessions for developers, data scientists, and IT staff on security best practices for AI. Training should cover topics such as secure coding, data protection, and threat detection. An informed team is your first line of defense against security threats.
Raise awareness across the organization about security for AI risks and mitigation strategies. Knowledge is power, and an aware team is a proactive team. Regular workshops, seminars, and awareness campaigns help keep security top of mind for all employees.
Who Should Be Responsible and In the Room:
- Training and Development Team: Organizes and conducts regular training sessions for developers, data scientists, and IT staff on security for AI best practices.
- AI Development Team: Participates in training on secure coding, data protection, and threat detection to stay updated on the latest security measures.
- Data Scientists: Engages in ongoing education to understand and implement data protection and threat detection techniques.
- IT Staff: Receives training on security for AI best practices to ensure strong implementation and maintenance of security measures.
- Security Team: Provides expertise and updates on the latest security for AI threats and mitigation strategies during training sessions and awareness campaigns.
Step 2: Third-Party Audits and Assessments
Engage third-party auditors to review your security for AI practices regularly. Fresh perspectives can identify overlooked vulnerabilities and provide unbiased assessments of your security posture. These auditors bring expertise from a wide range of industries and can offer valuable insights that internal teams might miss. Audits should cover all aspects of security for AI, including data protection, model robustness, access controls, and compliance with relevant regulations. A thorough audit assesses the entire lifecycle of AI deployment, from development and training to implementation and monitoring, ensuring comprehensive security coverage.
Conduct penetration testing on AI systems periodically to find and fix vulnerabilities before malicious actors exploit them. Penetration testing involves simulating attacks on your AI systems to identify weaknesses and improve your defenses. This process can uncover flaws in your infrastructure, applications, and algorithms that attackers could exploit. Regularly scheduled penetration tests, combined with ad-hoc testing when major changes are made to the system, ensure that your defenses are constantly evaluated and strengthened. This proactive approach helps ensure your AI systems remain resilient against emerging threats as new vulnerabilities are identified and addressed promptly.
In addition to penetration testing, consider incorporating other forms of security testing, such as red team exercises and vulnerability assessments, to provide a well-rounded understanding of your security posture. Red team exercises simulate real-world attacks to test the effectiveness of your security measures and response strategies. Vulnerability assessments systematically review your systems to identify and prioritize security risks. Together, these practices create a strong security testing framework that enhances the resilience of your AI systems.
By engaging third-party auditors and regularly conducting penetration testing, you improve your security for AI posture and demonstrate a commitment to maintaining high-security standards. This can enhance trust with stakeholders, including customers, partners, and regulators, by showing that you take proactive measures to protect sensitive data and ensure the integrity of your AI solutions.
Who Should Be Responsible and In the Room:
- Chief Information Security Officer (CISO): Oversees security for AI practices and the engagement with third-party auditors.
- Security Operations Team: Manages security audits and penetration testing, and implements remediation plans.
- IT Security Manager: Coordinates with third-party auditors and facilitates the audit process.
- AI Development Team Lead: Addresses vulnerabilities identified during audits and testing, ensuring strong AI model security.
- Compliance Officer: Ensures security practices comply with regulations and implements auditor recommendations.
- Risk Management Officer: Integrates audit and testing findings into the overall risk management strategy.
- Chief Information Officer (CIO) & Chief Technology Officer (CTO): Provides oversight, resources, and strategic direction for security initiatives.
Step 3: Data Integrity and Quality
Implement strong procedures to ensure the quality and integrity of data used for training AI models. Begin with data quality checks by establishing validation and cleaning processes to maintain accuracy and reliability.
Regularly audit your data to identify and fix any issues, ensuring ongoing integrity. Track the origin and history of your data to prevent using compromised or untrustworthy sources, verifying authenticity and integrity through data provenance.
Maintain detailed metadata about your datasets to provide contextual information, helping assess data reliability. Implement strict access controls to ensure only authorized personnel can modify data, protecting against unauthorized changes.
Document and ensure transparency in all processes related to data quality and provenance. Educate your team on the importance of these practices through training sessions and awareness programs.
Who Should Be Responsible and In the Room:
- Data Quality Team: Manages data validation and cleaning processes to maintain accuracy and reliability.
- Audit and Compliance Team: Conducts regular audits and ensures adherence to data quality standards and regulations.
- Data Governance Officer: Oversees data provenance and maintains detailed records of data origin and history.
- IT Security Team: Implements and manages strict access controls to protect data integrity.
- AI Development Team: Ensures data quality practices are integrated into AI model training and development.
- Training and Development Team: Educates staff on data quality and provenance procedures, ensuring ongoing awareness and adherence.
Step 4: Security Metrics and Reporting
Define and monitor key security metrics to gauge the effectiveness of your security for AI measures. Examples include the number of detected incidents, response times, and the effectiveness of security controls.
Review and update these metrics regularly to stay relevant to current threats. Benchmark against industry standards and set clear goals for continuous improvement. Implement automated tools for real-time monitoring and alerts.
Establish a clear process for reporting security incidents, ensuring timely and accurate responses. Incident reports should detail the nature of the incident, affected systems, and resolution steps. Train relevant personnel on these procedures.
Conduct root cause analysis for incidents to prevent future occurrences, building a resilient security framework. To maintain transparency and a proactive security culture, communicate metrics and incident reports regularly to all stakeholders, including executive leadership.
Who Should Be Responsible and In the Room:
- Chief Information Security Officer (CISO): Oversees the overall security strategy and ensures the relevance and effectiveness of security metrics.
- Security Operations Team: Monitors security metrics, implements automated tools, and manages real-time alerts.
- Data Scientists: Analyze security metrics data to provide insights and identify trends.
- IT Security Manager: Coordinates the reporting process and ensures timely and accurate incident reports.
- Compliance and Legal Team: Ensures all security measures and incident reports comply with relevant regulations.
- Chief Information Officer (CIO) & Chief Technology Officer (CTO): Reviews security metrics and incident reports to maintain transparency and support proactive security measures.
Step 5: AI System Lifecycle Management
Manage AI systems from development to decommissioning, ensuring security at every stage of their lifecycle. This comprehensive approach includes secure development practices, continuous monitoring, and proper decommissioning procedures to maintain security throughout their operational lifespan. Secure development practices involve implementing security measures from the outset, incorporating best practices in secure coding, data protection, and threat modeling. Continuous monitoring entails regularly overseeing AI systems to detect and respond to security threats promptly, using advanced monitoring tools to identify anomalies and potential vulnerabilities.
Proper decommissioning procedures are crucial when retiring AI systems. Follow stringent processes to securely dispose of data and dismantle infrastructure, preventing unauthorized access or data leaks. Clearly defining responsibilities ensures role clarity, making lifecycle management cohesive and strong. Effective communication is essential, as it fosters coordination among team members and strengthens your AI systems' overall security and reliability.
Who Should Be Responsible and In the Room:
- Chief Information Security Officer (CISO): Oversees the entire security strategy and ensures all stages of the AI lifecycle are secure.
- AI Development Team: Implements secure development practices and continuous monitoring.
- IT Infrastructure Team: Handles the secure decommissioning of AI systems and ensures proper data disposal.
- Compliance and Legal Team: Ensures all security practices meet legal and regulatory requirements.
- Project Manager: Coordinates efforts across teams, ensuring clear communication and role clarity.
Step 6: Red Teaming Training
To enhance the security of your AI systems, implement red teaming exercises. These involve simulating real-world attacks to identify vulnerabilities and test your security measures. If your organization lacks well-trained AI red teaming professionals, it is crucial to engage reputable external organizations, such as HiddenLayer, for specialized AI red teaming training to ensure comprehensive security.
To start the red teaming training, assemble a red team of cybersecurity professionals. Once again, given that your team may not be well-versed in security for AI enlist outside organizations to provide the necessary training. Develop realistic attack scenarios that mimic potential threats to your AI systems. Conduct these exercises in a controlled environment, closely monitor the team's actions, and document each person's strengths and weaknesses.
Analyze the findings from the training to identify knowledge gaps within your team and address them promptly. Use these insights to improve your incident response plan where necessary. Schedule quarterly red teaming exercises to test your team’s progress and ensure continuous learning and improvement.
Integrating red teaming into your security strategy, supported by external training as needed, helps proactively identify and mitigate risks. This ensures your AI systems are robust, secure, and resilient against real-world threats.
Step 7: Collaboration and Information Sharing
Collaborate with industry peers to share knowledge about security for AI threats and best practices. Engaging in information-sharing platforms keeps you informed about emerging threats and industry trends, helping you stay ahead of potential risks. By collaborating, you can adopt best practices from across the industry and enhance your own security measures.
For further guidance, check out our latest blog post, which delves into the benefits of collaboration in securing AI. The blog provides valuable insights and practical advice on how to effectively engage with industry peers to strengthen your security for AI posture.
Conclusion: Securing Your AI Systems Effectively
Securing AI systems is an ongoing, dynamic process that requires a thorough, multi-faceted approach. As AI becomes deeply integrated into the core operations of businesses and society, the importance of strong security measures cannot be overstated. This guide has provided a comprehensive, step-by-step approach to help organizational leaders navigate the complexities of securing AI, from initial discovery and risk assessment to continuous monitoring and collaboration.
By diligently following these steps, leaders can ensure their AI systems are secure but also trustworthy and compliant with regulatory standards. Implementing secure development practices, continuous monitoring, and rigorous audits, coupled with a strong focus on data integrity and collaboration, will significantly enhance the resilience of your AI infrastructure.
At HiddenLayer, we are here to guide and assist organizations in securing their AI systems. Don't hesitate to reach out for help. Our mission is to support you in navigating the complexities of securing AI ensuring your systems are safe, reliable, and compliant. We hope this series helps provide guidance on securing AI systems at your organization.
Remember: Stay informed, proactive, and committed to security best practices to protect your AI systems and, ultimately, your organization’s future. For more detailed insights and practical advice, be sure to explore our blog post on collaboration in security for AI and our comprehensive Threat Report.
Read the previous installments: Understanding AI Environments, Governing AI Systems, Strengthening AI Systems.

Securing Your AI with Optiv and HiddenLayer
In today’s rapidly evolving artificial intelligence (AI) landscape, securing AI systems has become paramount. As organizations increasingly rely on AI and machine learning (ML) models, ensuring the integrity and security of these models is critical. To address this growing need, HiddenLayer, a pioneer security for AI company, has a scanning solution that enables companies to secure their AI digital supply chain, mitigating the risk of introducing adversarial code into their environment.
AI Overview
In today’s rapidly evolving artificial intelligence (AI) landscape, securing AI systems has become paramount. As organizations increasingly rely on AI and machine learning (ML) models, ensuring the integrity and security of these models is critical. To address this growing need, HiddenLayer, a pioneer security for AI company, has a scanning solution that enables companies to secure their AI digital supply chain, mitigating the risk of introducing adversarial code into their environment.
The Challenge of Security for AI
AI and ML models are susceptible to various threats, including data poisoning, adversarial attacks, and malware injection. According to HiddenLayer’s AI Threat Landscape 2024 Report, 77% of companies reported breaches to their AI models in the past year, and 75% of IT leaders believe third-party AI integrations pose a significant risk. This highlights the urgent need for comprehensive security measures.
The Solution: AI Model Vulnerability Scan
HiddenLayer provides the advanced scanning technology for one of Optiv’s AI services, the AI Model Vulnerability Scan. This service offers point-in-time scans for vulnerabilities and malware in AI models, leveraging both static and AI techniques to identify security risks.
Key Features and Benefits
- Detection of Compromised Models: The scan detects compromised pre-trained models, ensuring that any models downloaded from public repositories are from reputable sources and free of malicious code.
- Enhanced Security: By incorporating HiddenLayer Model Scanner into their ML Ops pipeline, organizations can secure their entire digital AI supply chain, detect security risks, and ensure the integrity of their operations.
- Visibility into Risks and Attacks: The service provides visibility into potential risks and attacks on large language models (LLMs) and ML operations, enabling organizations to identify vulnerable points of attack.
- Adversarial Attack Detection: The scanner uses MITRE ATLAS tactics and techniques to detect adversarial AI attacks, supplementing the capabilities of your security team with advanced AI security expertise.
“Engineering and product teams are going to market faster than ever with AI and ML solutions. It’s evident that organizations who neglect to test and validate AI models and applications for safety and security run the risk of brand damage, data loss, legal and regulatory action, and general reputational harm,” says Shawn Asmus, Application Security Practice Director at Optiv. “Demonstrating a system is resilient and trustworthy, apart from merely being functional, is what responsible AI is all about.”
HiddenLayer’s Strategic Advantage
HiddenLayer, a Gartner recognized AI Application Security company, is a provider of security solutions for machine learning algorithms, models & the data that power them. With a first-of-its-kind, patented, noninvasive software approach to observing & securing ML, HiddenLayer is helping to protect the world’s most valuable technologies. Trust, flexibility, and comprehensiveness are non-negotiable when it comes to ensuring your business stays ahead in innovation.
Proof Points from HiddenLayer’s AI Threat Landscape 2024 Report
- High Incidence of Breaches: 77% of companies reported breaches to their AI models in the past year.
- Increased Risk from Third-Party Integrations: 75% of IT leaders believe that third-party AI integrations pose greater risks than existing cybersecurity threats.
- Sophistication of Adversarial Attacks: Adversarial attacks such as data poisoning and model evasion are becoming more sophisticated, necessitating advanced defensive strategies and tools.
"Organizations across all verticals and of all sizes are excited about the innovation AI delivers. Given this reality, HiddenLayer is excited to accelerate secure AI adoption by leveraging AI's competitive advantage without the inherent risks associated with its deployment. Using the HiddenLayer Model Scanner, Optiv's AI Model Vulnerability Scan Service allows for enhanced security, improved mitigation, and accelerated innovation to harness the full power of AI."
Abigail Maines, CRO of HiddenLayer
Conclusion
Organizations can secure their AI models and operations against emerging threats by leveraging advanced scanning technology and deep security expertise. This collaboration not only enhances security but also allows organizations to embrace the transformative capabilities of AI with confidence.

Securing Your AI: A Step-by-Step Guide for CISOs PT3
With AI advancing rapidly, it's essential to implement thorough security measures. The need for proactive security strategies grows as AI becomes more integrated into critical business functions and society. Despite the challenges and concerns, there is considerable potential for leaders to make strategic, informed decisions. Organizational leaders can navigate the complexities of AI security by seeking clear, actionable guidance and staying well-informed. This proactive approach will help mitigate risks, ensure AI technologies' safe and responsible deployment, and ultimately foster trust and innovation.
Introduction
With AI advancing rapidly, it's essential to implement thorough security measures. The need for proactive security strategies grows as AI becomes more integrated into critical business functions and society. Despite the challenges and concerns, there is considerable potential for leaders to make strategic, informed decisions. Organizational leaders can navigate the complexities of AI security by seeking clear, actionable guidance and staying well-informed. This proactive approach will help mitigate risks, ensure AI technologies' safe and responsible deployment, and ultimately foster trust and innovation.
Strengthening your AI systems is crucial to ensuring their security, reliability, and trustworthiness. Part 3 of our series focuses on implementing advanced measures to secure data, validate models, embed secure development practices, monitor systems continuously, and ensure model explainability and transparency. These steps are essential for protecting sensitive information, maintaining user trust, and complying with regulatory standards. This guide will provide you with the necessary tools and strategies to fortify your AI systems, making them resilient against threats and reliable in their operations. Tune in as we continue to cover understanding AI environments, governing AI systems, strengthening AI systems, and staying up-to-date on AI developments over the next few weeks.
Step 1: Data Security and Privacy
Data is the lifeblood of AI. Deploy advanced security measures tailored to your AI solutions that are adaptable to various deployment environments. This includes implementing encryption, access controls, and anonymization techniques to protect sensitive data. Ensuring data privacy is critical in maintaining user trust and complying with regulations.
Evaluate third-party vendors rigorously. Your vendors must meet stringent security for AI standards. Integrate their security measures into your overall strategy to ensure there are no weak links in your defense. Conduct thorough security assessments and require vendors to comply with your security policies and standards.
Who Should Be Responsible and In the Room:
- Data Security Team: Implements encryption, access controls, and anonymization techniques.
- AI Development Team: Ensures AI solutions are designed with integrated security measures.
- Compliance and Legal Team: Ensures compliance with data privacy regulations.
- Third-Party Vendor Management Team: Evaluates and integrates third-party vendor security measures.
- Chief Information Officer (CIO) & Chief Technology Officer (CTO): Provides oversight and resources for security initiatives.
Step 2: Model Strength and Validation
AI models must be resilient to ensure their reliability and effectiveness. Regularly subject them to adversarial testing to evaluate their systems. This process involves simulating various attacks to identify potential vulnerabilities and assess the model's ability to withstand malicious inputs. By doing so, you can pinpoint weaknesses and fortify the model against potential threats.
Employing thorough model validation techniques is equally essential. These techniques ensure consistent, reliable behavior in real-world scenarios. For example, cross-validation helps verify that the model performs well across different subsets of data, preventing overfitting and ensuring generalizability. Stress testing pushes the model to its limits under extreme conditions, revealing how it handles unexpected inputs or high-load situations.
Both adversarial testing and validation processes are critical for maintaining trust and reliability in your AI outputs. They provide a comprehensive assessment of the model's performance, ensuring it can handle the complexities and challenges of real-world applications. By integrating these practices into your AI development and maintenance workflows, you can build more resilient and trustworthy AI systems.
Who Should Be Responsible and In the Room:
- AI Development Team: Designs and develops the AI models, ensuring strength and the ability to handle adversarial testing.
- Data Scientists: Conduct detailed analysis and validation of the AI models, including cross-validation and stress testing.
- Cybersecurity Experts: Simulate attacks and identify vulnerabilities to test the model's resilience against malicious inputs.
- Quality Assurance (QA) Team: Ensures that the AI models meet required standards and perform reliably under various conditions.
- Chief Information Officer (CIO) & Chief Technology Officer (CTO): Provides oversight, resources, and strategic direction for testing and validation processes.
Step 3: Secure Development Practices
Embed security best practices at every stage of the AI development lifecycle. From inception to deployment, aim to minimize vulnerabilities by incorporating security measures at each step. Start with secure coding practices, ensuring that your code is free from common vulnerabilities and follows the latest security guidelines. Conduct regular code reviews to catch potential security issues early and to maintain high standards of code quality.
Implement comprehensive security testing throughout the development process. This includes static and dynamic code analysis, penetration testing, and vulnerability assessments. These tests help identify and mitigate risks before they become critical issues. Additionally, threat modeling should be incorporated to anticipate potential security threats and design defenses against them.
By embedding these secure development practices, you ensure that security is integrated into your AI systems from the ground up. This proactive approach significantly reduces the risk of introducing vulnerabilities during development, leading to strong and secure AI solutions. It also helps maintain user trust and compliance with regulatory requirements, as security is not an afterthought but a fundamental component of the development lifecycle.
Who Should Be Responsible and In the Room:
- AI Development Team: Responsible for secure coding practices and incorporating security measures into the AI models from the start.
- Security Engineers: Conduct regular code reviews, static and dynamic code analysis, and penetration testing to identify and address security vulnerabilities.
- Cybersecurity Experts: Perform threat modeling and vulnerability assessments to anticipate potential security threats and design appropriate defenses.
- Quality Assurance (QA) Team: Ensures that security testing is integrated into the development process and that security standards are maintained throughout.
- Project Managers: Coordinate efforts across teams, ensuring that security best practices are followed at every stage of the development lifecycle.
- Compliance and Legal Team: Ensures that the development process complies with relevant security regulations and industry standards.
- Chief Information Officer (CIO) & Chief Technology Officer (CTO): Provides oversight, resources, and support for embedding security practices into the development lifecycle.
Step 4: Continuous Monitoring and Incident Response
Implement continuous monitoring systems to detect anomalies immediately to ensure the ongoing security and integrity of your AI systems. Real-time surveillance acts as an early warning system, enabling you to identify and address potential issues before they escalate into major problems. These monitoring systems should be designed to detect a wide range of indicators of compromise, such as unusual patterns in data or system behavior, unauthorized access attempts, and other signs of potential security breaches.
Advanced monitoring tools should employ machine learning algorithms and anomaly detection techniques to identify deviations from normal activity that may indicate a threat. These tools can analyze vast amounts of data in real time, providing comprehensive visibility into the system's operations and enabling swift response to any detected anomalies.
Additionally, integrating continuous monitoring with automated response mechanisms can further enhance security. When an anomaly is detected, automated systems can trigger predefined actions, such as alerting security personnel, isolating affected components, or initiating further investigation procedures. This proactive approach minimizes the time between detection and response, reducing the risk of significant damage.
To effectively implement continuous monitoring systems for immediately detecting anomalies, it's crucial to consider products specifically designed for this purpose. Involving the right stakeholders to evaluate and select these products ensures a strong and effective monitoring strategy.
Pair continuous monitoring with a comprehensive incident response strategy. Regularly update and rehearse this strategy to maintain readiness against evolving threats, as preparedness is key to effective incident management. An effective incident response plan includes predefined roles and responsibilities, communication protocols, and procedures for containing and mitigating incidents.
A Ponemon survey found that 77% of respondents lack a formal incident response plan that is consistently applied across their organization, and nearly half say their plan is informal or nonexistent. Don't be part of the 77% who do not have an up-to-date incident response (IR) plan. It's time for security to be proactive rather than reactive, especially regarding AI.
For support on developing an incident response plan, refer to the CISA guide on Incident Response Plan Basics. This guide provides valuable insights into what an IR plan should include and needs.
Step 5: Model Explainability and Transparency
Before you do Step 5, make sure you have fully completed Step 3 on implementing ethical AI guidelines.
As you know, transparency and explainability are critical, especially when it comes to improving the public’s trust in AI usage. Ensure AI decisions can be interpreted and explained to users and stakeholders. Explainable AI builds trust and ensures accountability by making the decision-making process understandable. Techniques such as model interpretability tools, visualizations, and detailed documentation are essential for achieving this goal.
Regularly publish transparency reports detailing AI system operations and decisions. Transparency is not just about compliance; it’s about fostering an environment of openness and trust. These reports should provide insights into how AI models function, the data they use, and the measures taken to ensure their fairness and reliability.
Who Should Be Responsible and In the Room:
- AI Development Team: Implements model interpretability tools, visualizations, and detailed documentation to make AI decisions interpretable and explainable.
- Data Scientists: Develop techniques and tools for explaining AI models and decisions, ensuring these explanations are accurate and accessible.
- Compliance and Legal Team: Ensures transparency practices comply with relevant regulations and industry standards, providing guidance on legal and ethical requirements.
- Communication and Public Relations Team: Publishes regular transparency reports and communicates AI system operations and decisions to users and stakeholders, fostering an environment of openness and trust.
Conclusion
Strengthening your AI systems requires a multi-faceted approach encompassing data security, model validation, secure development practices, continuous monitoring, and transparency. Organizations can protect sensitive data and ensure compliance with privacy regulations by implementing advanced security measures such as encryption, access controls, and anonymization techniques. Rigorous evaluation of third-party vendors and adversarial testing of AI models further enhance the reliability and resilience of AI systems.
Embedding secure development practices throughout the AI lifecycle, from secure coding to regular security testing, helps minimize vulnerabilities and build strong, secure AI solutions. Continuous monitoring and a well-defined incident response plan ensure that potential threats are detected and addressed promptly, maintaining the integrity of AI systems. Finally, fostering transparency and explainability in AI decisions builds trust and accountability, making AI systems more understandable and trustworthy for users and stakeholders.
By following these comprehensive steps, organizations can create AI systems that are not only secure but also ethical and transparent, ensuring they serve as valuable and reliable assets in today's complex technological landscape. In our last installment, we will dive into audits and how to stay up-to-date on your AI environments.
Read the previous installments: Understanding AI Environments, Governing AI Systems

Securing Your AI: A Step-by-Step Guide for CISOs PT2
As AI advances at a rapid pace, implementing comprehensive security measures becomes increasingly crucial. The integration of AI into critical business operations and society is growing, highlighting the importance of proactive security strategies. While there are concerns and challenges surrounding AI, there is also significant potential for leaders to make informed, strategic decisions. Organizational leaders can effectively navigate the complexities of security for AI by seeking clear, actionable guidance and staying informed amidst abundant information. This proactive approach will help mitigate risks and ensure AI technologies' safe and responsible deployment, ultimately fostering trust and innovation.
Introduction
As AI advances at a rapid pace, implementing comprehensive security measures becomes increasingly crucial. The integration of AI into critical business operations and society is growing, highlighting the importance of proactive security strategies. While there are concerns and challenges surrounding AI, there is also significant potential for leaders to make informed, strategic decisions. Organizational leaders can effectively navigate the complexities of security for AI by seeking clear, actionable guidance and staying informed amidst abundant information. This proactive approach will help mitigate risks and ensure AI technologies' safe and responsible deployment, ultimately fostering trust and innovation.
Effective governance ensures that AI systems are secure, ethical, and compliant with regulatory standards. As organizations increasingly rely on AI, they must adopt comprehensive governance strategies to manage risks, adhere to legal requirements, and uphold ethical principles. This second part of our series on governing AI systems focuses on the importance of defensive frameworks within a broader governance strategy. We explore how leading organizations have developed detailed frameworks to enhance security for AI and guide the development of ethical AI guidelines, ensuring responsible and transparent AI operations. Tune in as we continue to cover understanding AI environments, governing AI systems, strengthening AI systems, and staying up-to-date on AI developments over the next few weeks.
Step 1: Defensive Frameworks
As tools and techniques for attacking AI become more sophisticated, a methodical defensive approach is essential to safeguard AI. Over the past two years, leading organizations have developed comprehensive frameworks to enhance security for AI. Familiarizing yourself with these frameworks is crucial as you build out your secure AI processes and procedures. The following frameworks provide valuable guidance for organizations aiming to safeguard their AI systems against evolving threats.
MITRE ATLAS
MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) is a comprehensive framework launched in 2021, detailing adversarial machine learning tactics, techniques, and case studies. It complements the MITRE ATT&CK framework and includes real-world attacks and red-teaming exercises to provide a complete picture of AI system vulnerabilities.
In 2023, MITRE ATLAS was significantly updated, adding 12 new techniques and 5 unique case studies, focusing on large language models (LLMs) and generative AI systems. Collaborations with Microsoft led to new tools like the Arsenal and Almanac plugins for enhanced threat emulation. The update also introduced 20 new mitigations based on case studies. ATLAS now includes 14 tactics, 82 techniques, 22 case studies, and 20 mitigations, with ongoing efforts to expand its resources. This community-driven approach ensures that ATLAS remains a critical resource for securing AI-enabled systems against evolving threats.
NIST AI Risk Management Framework
Released in January 2023, the NIST AI RMF provides a conceptual framework for responsibly designing, developing, deploying, and using AI systems. It focuses on risk management through four functions: govern, map, measure, and manage.
Google Secure AI Framework (SAIF)
Introduced in June 2023, SAIF offers guidance on securing AI systems by adapting best practices from traditional software development. It emphasizes six core elements: expanding security foundations, automating defenses, and contextualizing AI risks.
OWASP Top 10
In 2023, OWASP released the Top 10 Machine Learning Risks, highlighting critical security risks in machine learning and providing guidance on prevention. Additionally, OWASP outlined vulnerabilities in large language models (LLMs), offering practical security measures.
Gartner AI Trust, Risk, and Security Management (AI TRiSM)
Gartner’s AI TRiSM framework addresses bias, privacy, explainability, and security in AI/ML systems, providing a roadmap for building trusted, reliable, and secure AI systems.
Databricks AI Security Framework (DAISF)
Released in February 2024, DAISF provides a comprehensive strategy to mitigate cyber risks in AI systems, with actionable recommendations across 12 components of AI systems.
IBM Framework for Securing Generative AI
IBM’s framework, released in January 2024, focuses on securing LLMs and generative AI solutions through five steps: securing data, models, usage, infrastructure, and establishing governance.
Step 2: Governance and Compliance
Ensuring compliance with relevant laws and regulations is the first step in creating ethical AI guidelines. Your AI systems must adhere to all legal and regulatory requirements, such as GDPR, CCPA, and industry-specific standards. Compliance forms the backbone of your security for AI strategy, helping you avoid legal pitfalls.
Who Should Be Responsible and In the Room:
- Compliance and Legal Team: Ensures AI systems meet all relevant laws and regulations, providing legal guidance and support.
- Chief Information Security Officer (CISO): Oversees the integration of compliance requirements into the overall security strategy.
- AI Development Team: Integrates compliance requirements into the design and development of AI systems.
- Data Privacy Officer (DPO): Ensures data protection practices comply with privacy laws such as GDPR and CCPA.
- Chief Information Officer (CIO) & Chief Technology Officer (CTO): Provides oversight, resources, and strategic direction for compliance efforts.
Step 3: Ethical AI Guidelines
While working on Step 3, implement ethical AI guidelines to steer AI development and usage responsibly and transparently. Start by forming an ethics committee that includes AI developers, data scientists, legal experts, ethicists, cybersecurity professionals, and, if needed, community representatives. This diverse group will oversee the creation and enforcement of the guidelines.
Identify core ethical principles such as fairness, transparency, accountability, privacy, and safety. Fairness ensures AI systems avoid biases and treat all users equitably. Transparency makes AI processes and decisions understandable to users and stakeholders. Accountability establishes clear lines of responsibility for AI outcomes. Privacy involves protecting user data through strong security measures and respecting user consent. Safety ensures AI systems operate securely and do not cause harm.
Consult internal and external stakeholders, including employees and customers, to gather insights. Draft the guidelines with a clear introduction, core ethical values, and specific measures for bias mitigation, data privacy, transparency, accountability, and safety. Circulate the draft for review, incorporating feedback to ensure the guidelines are comprehensive and practical.
Once finalized, conduct training sessions for all employees involved in AI development and deployment. Make the guidelines accessible and embed ethical considerations into every stage of the AI lifecycle. Establish a governance framework for ongoing oversight and regular audits to ensure compliance and address emerging ethical issues. Regularly update the guidelines to reflect new insights and encourage continuous feedback from stakeholders.
Conclusion
Effective governance is essential for managing AI systems in an era of sophisticated threats and stringent regulatory requirements. By integrating comprehensive defensive frameworks such as MITRE ATLAS, NIST AI RMF, Google SAIF, OWASP Top 10, Gartner AI TRiSM, Databricks AI Security Framework, and IBM's generative AI framework, organizations can enhance the security of their AI systems. However, governance goes beyond security; it encompasses ensuring compliance with laws and regulations, such as GDPR and CCPA, and embedding ethical principles into AI development and deployment. Forming a diverse ethics committee and establishing clear guidelines on fairness, transparency, accountability, privacy, and safety are crucial steps in this process. By embedding these principles into every stage of the AI lifecycle and maintaining ongoing oversight, organizations can build and sustain AI systems that are not only secure but also ethical and trustworthy. o achieve this, following AI security best practices is critical in mitigating vulnerabilities and building resilience against evolving threats. In our next section, we will guide you on strengthening your AI systems.
Read the previous installment, Understanding AI Environments.

Securing Your AI: A Step-by-Step Guide for CISOs
As AI advances at a rapid pace, implementing comprehensive security measures becomes increasingly crucial. The integration of AI into critical business operations and society is growing, highlighting the importance of proactive security strategies. While there are concerns and challenges surrounding AI, there is also significant potential for leaders to make informed, strategic decisions. Organizational leaders can effectively navigate the complexities of AI security by seeking clear, actionable guidance and staying informed amidst the abundance of information. This proactive approach will help mitigate risks and ensure AI technologies' safe and responsible deployment, ultimately fostering trust and innovation.
Introduction
As AI advances at a rapid pace, implementing comprehensive security measures becomes increasingly crucial. The integration of AI into critical business operations and society is growing, highlighting the importance of proactive security strategies. While there are concerns and challenges surrounding AI, there is also significant potential for leaders to make informed, strategic decisions. Organizational leaders can effectively navigate the complexities of AI security by seeking clear, actionable guidance and staying informed amidst the abundance of information. This proactive approach will help mitigate risks and ensure AI technologies' safe and responsible deployment, ultimately fostering trust and innovation.
Many existing frameworks and policies provide high-level guidelines but lack detailed, step-by-step instructions for security leaders. That's why we created "Securing Your AI: A Step-by-Step Guide for CISOs." This guide aims to fill that gap, offering clear, practical steps to help leaders worldwide secure their AI systems and dispel myths that can lead to insecure implementations. Over the next four weeks, we'll cover understanding AI environments, governing AI systems, strengthening AI systems, and staying up-to-date on AI developments. Let’s delve into this comprehensive series to ensure your AI systems are secure and trustworthy.
https://www.youtube.com/watch?v=bLOrQYE-18I
Step 1: Establishing a Security Foundation
Establishing a strong security foundation is essential when beginning the journey to securing your AI. This involves understanding the basic principles of security for AI, setting up a dedicated security team, and ensuring all stakeholders know the importance of securing AI systems.
To begin this guide, we recommend reading our AI Threat Landscape Report, which covers the basics of securing AI. We also recommend the following persons to be involved and complete this step since they will be responsible for the following:
- Chief Information Security Officer (CISO): To lead the establishment of the security foundation.
- Chief Information Officer (CIO) & Chief Technology Officer (CTO): To provide strategic direction and resources.
- AI Development Team: To understand and integrate security principles into AI projects.
- Compliance and Legal Team: Ensure all security practices align with legal and regulatory requirements.
Ensuring these prerequisites are met sets the stage for successfully implementing the subsequent steps in securing your AI systems.
Now, let’s begin.
Step 2: Discovery and Asset Management
Begin your journey by thoroughly understanding your AI ecosystem. This starts with conducting an AI usage inventory. Catalog every AI application and AI-enabled feature within your organization. For each tool, identify its purpose, origin, and operational status. This comprehensive inventory should include details such as:
- Purpose: What specific function does each AI application serve? Is it used for data analysis, customer service, predictive maintenance, or another purpose?
- Origin: Where did the AI tool come from? Was it developed in-house, sourced from a third-party vendor, or derived from an open-source repository?
- Operational Status: Is the AI tool currently active, in development, or deprecated? Understanding each tool's lifecycle stage helps prioritize security efforts.
This foundational step is crucial for identifying potential vulnerabilities and gaps in your security infrastructure. By knowing exactly what AI tools are in use, you can better assess and manage their security risks.
Next, perform a pre-trained model audit. Track all pre-trained AI models sourced from public repositories. This involves:
- Cataloging Pretrained Models: Document all pre-trained models in use, noting their source, version, and specific use case within your organization.
- Assessing Model Integrity: Verify the authenticity and integrity of pre-trained models to ensure they have not been tampered with or corrupted.
- Monitoring Network Traffic: Continuously monitor network traffic for unauthorized downloads of pre-trained models. This helps prevent rogue elements from infiltrating your system.
Monitoring network traffic is essential to prevent unauthorized access and the use of pre-trained models, which can introduce security vulnerabilities. This vigilant oversight protects against unseen threats and ensures compliance with intellectual property and licensing agreements. Unauthorized use of pre-trained models can lead to legal and financial repercussions, so it is important to ensure that all models are used in accordance with their licensing terms.
By thoroughly understanding your AI ecosystem through an AI usage inventory and pre-trained model audit, you establish a strong foundation for securing your AI infrastructure. This proactive approach helps identify and mitigate risks, ensuring the safe and effective use of AI within your organization.
Who Should Be Responsible and In the Room:
- Chief Information Security Officer (CISO): To oversee the security aspects and ensure alignment with the overall security strategy.
- Chief Technology Officer (CTO): To provide insight into the technological landscape and ensure integration with existing technologies.
- AI Team Leads (Data Scientists, AI Engineers): To offer detailed knowledge about AI applications and models in use.
- IT Managers: To ensure accurate inventory and auditing of AI assets.
- Compliance Officers: To ensure all activities comply with relevant laws and regulations.
- Third-Party Security Consultants: If necessary, to provide an external perspective and expertise.
Step 3: Risk Assessment and Threat Modeling
With a clear inventory in place, assess the scope of your AI development. Understand the extent of your AI projects, including the number of dedicated personnel, such as data scientists and engineers, and the scale of ongoing initiatives. This assessment provides a comprehensive view of your AI landscape, highlighting areas that may require additional security measures. Specifically, consider the following aspects:
- Team Composition: Identify the number and roles of personnel involved in AI development. This includes data scientists, machine learning engineers, software developers, and project managers. Understanding your team structure helps assess resource allocation and identify potential skill gaps.
- Project Scope: Evaluate the scale and complexity of your AI projects. Are they small-scale pilots, or are they large-scale deployments across multiple departments? Assessing the scope helps understand the potential impact and the level of security needed.
- Resource Allocation: Determine the resources dedicated to AI projects, including budget, infrastructure, and tools. This helps identify whether additional investments are needed to bolster security measures.
Afterward, a thorough risk and benefit analysis will be conducted. Identify and evaluate potential threats, such as data breaches, adversarial attacks, and misuse of AI systems. Simultaneously, assess the benefits to understand the value of these systems to your organization. This dual analysis helps prioritize security investments and develop strategies to mitigate identified risks effectively. Consider the following steps:
- Risk Identification: List all potential threats to your AI systems. These include data breaches, unauthorized access, adversarial attacks, model theft, and algorithmic bias. Consider both internal and external threats.
- Risk Evaluation: Assess the likelihood and impact of each identified risk. Determine how each risk could affect your organization in terms of financial loss, reputational damage, operational disruption, and legal implications.
- Benefit Assessment: Evaluate the benefits of your AI systems. This includes improved efficiency, cost savings, enhanced decision-making, competitive advantage, and innovation. Quantify these benefits to understand their value to your organization.
- Prioritization: Based on the risk and benefit analysis, prioritize your security investments. Focus on mitigating high-impact and high-likelihood risks first. Ensure that the benefits of your AI systems justify the costs and efforts of implementing security measures.
By assessing the scope of your AI development and conducting a thorough risk and benefit analysis, you gain a holistic understanding of your AI landscape. This allows you to make informed decisions about where to allocate resources and how to mitigate risks effectively, ensuring the security and success of your AI initiatives.
Who Should Be Responsible and In the Room:
- Risk Management Team: To identify and evaluate potential threats.
- Data Protection Officers: To assess risks related to data breaches and privacy issues.
- AI Ethics Board: To evaluate ethical implications and misuse scenarios.
- AI Team Leads (Data Scientists, AI Engineers): To provide insights on technical vulnerabilities and potential adversarial attacks.
- Business Analysts: To understand and quantify these AI systems' benefits and value to the organization.
- Compliance Officers: To ensure all risk assessments are aligned with legal and regulatory requirements.
- External Security Consultants: To provide an independent assessment and validate internal findings.
Conclusion
This blog has highlighted the often neglected importance of security for AI amidst the pressure from organizational leaders and the prevalence of misinformation. Organizations can begin their journey toward a secure AI ecosystem by establishing a strong security foundation and engaging key stakeholders. Organizations can identify potential vulnerabilities and establish a solid understanding of their AI assets, starting with a comprehensive AI usage inventory and pre-trained model audit. Moving forward, conducting a detailed risk assessment and threat modeling exercise will help prioritize security measures, aligning them with the organization's strategic goals and resources.
Through these initial steps, leaders can set the stage for a secure, ethical, and compliant AI environment, fostering trust and enabling the safe integration of AI into critical business operations. This proactive approach addresses current security challenges and prepares organizations to adapt to future advancements and threats in the AI landscape. As we continue this series, we will delve deeper into the practical steps necessary to secure and govern AI systems effectively, ensuring they remain valuable and trustworthy assets.
Read the next installment, Governing Your AI Systems.
e

A Guide to AI Red Teaming
For decades, the concept of red teaming has been adapted from its military roots to simulate how a threat actor could bypass defenses put in place to secure an organization. For many organizations, employing or contracting with ethical hackers to simulate attacks against their computer systems before adversaries attack is a vital strategy to understand where their weaknesses are. As Artificial Intelligence becomes integrated into everyday life, red-teaming AI systems to find and remediate security vulnerabilities specific to this technology is becoming increasingly important.
Summary
For decades, the concept of red teaming has been adapted from its military roots to simulate how a threat actor could bypass defenses put in place to secure an organization. For many organizations, employing or contracting with ethical hackers to simulate attacks against their computer systems before adversaries attack is a vital strategy to understand where their weaknesses are. As Artificial Intelligence becomes integrated into everyday life, red-teaming AI systems to find and remediate security vulnerabilities specific to this technology is becoming increasingly important.
https://www.youtube.com/watch?v=LsgQ5fk2Dks
What is AI Red Teaming
The White House Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence defines AI red teaming as follows:
“The term “AI red-teaming” means a structured testing effort to find flaws and vulnerabilities in an AI system, often in a controlled environment and in collaboration with developers of AI. Artificial Intelligence red-teaming is most often performed by dedicated “red teams” that adopt adversarial methods to identify flaws and vulnerabilities, such as harmful or discriminatory outputs from an AI system, unforeseen or undesirable system behaviors, limitations, or potential risks associated with the misuse of the system.”
In traditional machine learning, the timing of the attack will dictate the tactics and techniques that can be employed. At a high level, this would either be during training time or decision time. Training time would employ techniques such as data poisoning or model tampering. On the other hand, decision, or inference, time attacks would leverage techniques such as model bypass.
The MITRE ATLAS framework offers an excellent description of the tactics and techniques that can be used against such systems, and we’ve also written about some of these techniques. In recent months, generative AI systems, such as Large Language Models (LLMs) and GPTs, have become increasingly popular. While there has yet to be a consensus on a true taxonomy of attacks against these systems, we can attempt to classify a few. Prompt Injection is probably one of the most well-known attacks against LLMs today. Yet numerous other attack techniques against LLMs exist, such as indirect prompt injection, jailbreaking, and many more. While these are the techniques, the attacker’s goal could be to generate illegal or copyrighted material, produce false or biased information, or leak sensitive data.
Red Team vs Penetration Testing vs Vulnerability Assessment

Vulnerability assessments are a more in-depth systematic review that identifies vulnerabilities within an organization or system and provides a prioritized list of findings with recommendations on how to resolve them. The important distinction here is that these assessments won’t attempt to exploit any of the discovered vulnerabilities.
Penetration testing, often referred to as pen testing, is a more targeted attack to check for exploitable vulnerabilities. Whereas the vulnerability assessment does not attempt any exploitation, a pen testing engagement will. These are targeted and scoped by the customer or organization, sometimes based on the results of a vulnerability assessment. In the concept of AI, an organization may be particularly interested in testing if a model can be bypassed. Still, techniques such as model hijacking or data poisoning are less of a concern and would be out of scope.
Red teaming is the process of employing a multifaceted approach to testing how well a system can withstand an attack from a real-world adversary. It is particularly used to test the efficacy of systems, including their detection and response capabilities, especially when paired with a blue team (defensive security team). These attacks can be much broader and encompass human elements such as social engineering. Typically, the goals of these types of attacks are to identify weaknesses and how long or far the engagement can succeed before being detected by the security operations team.
Benefits of AI Red Teaming
Running through simulated attacks on your AI and ML ecosystems is critical to ensure comprehensiveness against adversarial attacks. As a data scientist, you have trained the model and tested it against real-world inputs you would expect to see and are happy with its performance. Perhaps you’ve added adversarial examples to the training data to improve comprehensiveness. This is a good start, but red teaming goes deeper by testing your model’s resistance to well-known and bleeding-edge attacks in a realistic adversary simulation.
This is especially important in generative AI deployments due to the unpredictable nature of the output. Being able to test for harmful or otherwise unwanted content is crucial not only for safety and security but also for ensuring trust in these systems. There are many automated and open-source tools that help test for these types of vulnerabilities, such as LLMFuzzer, Garak, or PyRIT. However, these tools have drawbacks, making them no substitute for in-depth AI red teaming. Many of these tools are static prompt analyzers, meaning they use pre-written prompts, which defenses typically block as they are previously known. For the tools that use dynamic adversarial prompt generation, the task of generating a system prompt to generate adversarial prompts can be quite challenging. Some tools have “malicious” prompts that are not malicious at all.
Real World Examples
One such engagement we conducted with a client highlights the importance of running through these types of tests with machine learning systems. This financial services institution had an AI model that identified fraudulent transactions. During the testing, we identified various ways in which an attacker could bypass their fraud models and crafted adversarial examples. Through this testing, we could work with the client and identify examples with the least amount of features modified, which provided guidance to data science teams to retrain the models that were not susceptible to such attacks.
In this case, if adversaries could identify and exploit the same weaknesses first, it would lead to significant financial losses. By gaining insights into these weaknesses first, the client can fortify their defenses while improving their models' comprehensiveness. Through this approach, this institution not only protects its assets but also maintains a stellar customer experience, which is crucial to its success.
Regulations for AI Red Teaming
In October 2023, the Biden administration issued an Executive Order to ensure AI's safe, secure, and trustworthy development and use. It provides high-level guidance on how the US government, private sector, and academia can address the risks of leveraging AI while also enabling the advancement of the technology. While this order has many components, such as
responsible innovations, protecting the American worker, and other consumer protections, one primary component is AI red teaming.
This order requires that organizations undergo red-teaming activities to identify vulnerabilities and flaws in their AI systems. Some of the important callouts include:
- Section 4.1(a)(ii) - Establish appropriate guidelines to enable developers of AI, especially of dual-use foundation models, to conduct AI red-teaming tests to enable deployment of safe, secure, and trustworthy systems.
- Section 4.2(a)(i)(C) - The results of any developed dual-use foundation model’s performance in relevant AI red-team testing.
- Companies developing or demonstrating an intent to develop potential dual-use foundation models to provide the Federal Government, on an ongoing basis, with information, reports, or records
- Section 10.1(b)(viii)(A) - External testing for AI, including AI red-teaming for generative AI
- Section 10.1(b)(viii)(A) - Testing and safeguards against discriminatory, misleading, inflammatory, unsafe, or deceptive outputs, as well as against producing child sexual abuse material and against producing non-consensual intimate imagery of real individuals (including intimate digital depictions of the body or body parts of an identifiable individual), for generative AI
Another well-known framework that addresses AI Red Teaming is the NIST AI Risk Management Framework (RMF). The framework's core provides guidelines for managing the risks of AI systems, particularly how to govern, map, measure, and manage. Although red teaming is not explicitly mentioned, section 3.3 offers valuable insights into ensuring AI systems are secure and resilient.
“Common security concerns relate to adversarial examples, data poisoning, and the exfiltration of models, training data, or other intellectual property through AI system endpoints. AI systems that can maintain confidentiality, integrity, and availability through protection mechanisms that prevent unauthorized access and use may be said to be secure.”
The EU AI Act is a behemoth of a document, spanning more than 400 pages outlining requirements and obligations for organizations developing and using AI. The concept of red-teaming is touched on in this document as well:
“require providers to perform the necessary model evaluations, in particular prior to its first placing on the market, including conducting and documenting adversarial testing of models, also, as appropriate, through internal or independent external testing.”
Conclusion
AI red teaming is an important strategy for any organization that is leveraging artificial intelligence. These simulations serve as a critical line of defense, testing AI systems under real-world conditions to uncover vulnerabilities before they can be exploited for malicious purposes. When conducting red teaming exercises, organizations should be prepared to examine their AI models thoroughly. This will lead to stronger and more resilient systems that can both detect and prevent these emerging attack vectors. AI red teaming goes beyond traditional testing by simulating adversarial attacks designed to compromise AI integrity, uncovering weaknesses that standard methods might miss. Similarly, LLM red teaming is essential for large language models, enabling organizations to identify vulnerabilities in their generative AI systems, such as susceptibility to prompt injections or data leaks, and address these risks proactively
Engaging in AI red teaming is not a journey you should take on alone. It is a collaborative effort that requires cyber security and data science experts to work together to find and mitigate these weaknesses. Through this collaboration, we can ensure that no organization has to face the challenges of securing AI in a silo. If you want to learn more about red-team your AI operations, we are here to help.
Join us for the “A Guide to Red Teaming” Webinar on July 17th.
You can contact us here to learn more about our Automated Red Teaming for AI module.

Advancements in Security for AI
To help understand the evolving cybersecurity environment, we developed HiddenLayer’s 2024 AI Threat Landscape Report as a practical guide to understanding the security risks that can affect every industry and to provide actionable steps to implement security measures at your organization.
To help understand the evolving cybersecurity environment, we developed HiddenLayer’s 2024 AI Threat Landscape Report as a practical guide to understanding the security risks that can affect every industry and to provide actionable steps to implement security measures at your organization.
Understanding Advancements in Security for AI
Understanding new technologies' vulnerabilities is crucial before implementing security measures. Offensive security research plays a significant role in planning defenses, as initial security measures are often built on the foundation of these offensive insights.
Security for AI is no exception. Early research and tools in this field focused on offensive strategies. Initially, AI attacks were mainly explored in academic papers and through exercises by security professionals. However, there has been a significant shift in the last few years.
Offensive Security Tooling for AI
Just as in traditional IT security, offensive security tools for AI have emerged to identify and mitigate vulnerabilities. While these tools are valuable for enhancing AI system security, malicious actors can also exploit them.
Automated Attack Frameworks
Pioneering tools like CleverHans (2016) and IBM's Adversarial Robustness Toolbox (ART, 2018) have paved the way for testing AI comprehensively. Subsequent tools such as MLSploit (2019), TextAttack (2019), Armory (2020), and Counterfit (2021) have further advanced the field, offering a variety of attack techniques to evaluate AI defenses.

Anti-Malware Evasion Tooling
Specialized tools like MalwareGym (2017) and its successor MalwareRL (2021) focus on evading AI-based anti-malware systems. These tools highlight the need for continuous improvement in security for AI measures.
Model Theft Tooling
KnockOffNets (2021) demonstrates the feasibility of AI model theft, emphasizing the importance of securing AI intellectual property.
Model Deserialization Exploitation
Fickling (2021) and Charcuterie (2022) showcase vulnerabilities in AI model serialization, underscoring the need for secure model handling practices.
Defensive Frameworks for AI
Leading cybersecurity organizations have developed comprehensive defensive frameworks to address the rising threats to AI.
MITRE ATLAS
Launched in 2021, MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) provides a knowledge base of adversarial tactics and techniques. Modeled after the MITRE ATT&CK framework, ATLAS helps professionals stay updated on AI threats and defenses.
“This survey demonstrates the prominence of real-world threats on AI-enabled systems, with 77% of participating companies reporting breaches to their AI applications this year. The MITRE ATLAS community is dedicated to characterizing and mitigating these threats in a global alliance. We applaud our community collaborators who enhance our collective ability to anticipate, prevent, and mitigate risks to AI systems, including HiddenLayer and their latest threat report.”
– Dr. Christina Liaghati, MITRE ATLAS Lead
NIST AI Risk Management Framework
Released in January 2023, the NIST AI Risk Management Framework (AI RMF) offers guidance for the responsible design, deployment, and use of AI systems, promoting trust and security in AI.
Google Secure AI Framework (SAIF)
Introduced in June 2023, SAIF outlines best practices for securing AI systems, emphasizing strong security foundations, automated defenses, and contextualized risk management.
Policies and Regulations
Global policies and regulations are being established to ensure AI's safe and ethical use. The EU's GDPR and AI Act, OECD AI Principles, and national frameworks like Singapore's Model AI Governance Framework and the US's AI Bill of Rights highlight the growing emphasis on security for AI and governance.
Concluding Thoughts
As AI technology evolves, so must the security measures that secure it. By combining offensive and defensive strategies, leveraging comprehensive frameworks, and adhering to evolving regulations, the industry can better safeguard AI systems against emerging threats. Collaboration between academia, industry, and policymakers is essential to anticipate and mitigate risks effectively.
Continuous innovation and vigilance in security for AI will be crucial in maintaining trust and reliability in AI applications, ensuring they can be safely integrated into various sectors.
View the full Threat Landscape Report here.

AI Model Scanner Accelerates Adoption
OpenAI revolutionized the world by launching ChatGPT, marking a pivotal moment in technology history. The AI arms race, where companies speed to integrate AI amidst the dual pressures of rapid innovation and cybersecurity challenges, highlights the inherent risks in AI models. HiddenLayer’s Model Scanner is crucial for identifying and mitigating these vulnerabilities. From the surge of third-party models on platforms like Hugging Face to the Wild West-like rush for AI dominance, this article offers insights into securing AI’s future while enabling businesses to harness its transformative power safely.
Summary
OpenAI revolutionized the world by launching ChatGPT, marking a pivotal moment in technology history. The AI arms race, where companies speed to integrate AI amidst the dual pressures of rapid innovation and cybersecurity challenges, highlights the inherent risks in AI models. HiddenLayer’s Model Scanner is crucial for identifying and mitigating these vulnerabilities. From the surge of third-party models on platforms like Hugging Face to the Wild West-like rush for AI dominance, this article offers insights into securing AI’s future while enabling businesses to harness its transformative power safely.
Introduction
November 30, 2022 will go down in history as a major milestone in the history of technology. That was the day OpenAI made ChatGPT publicly available to the masses. Although, as a society, we’ve been experimenting with artificial intelligence academically since the 1950s, and many industries (finance, defense, healthcare, insurance, manufacturing, cybersecurity, and more.) have been putting AI into practical use since the early 2010s, OpenAI’s launch of ChatGPT helped the general public truly understand the vast benefits AI has on technology, economy, and society. We now find ourselves in the middle of an AI Arms Race with practically every enterprise and start-up company trying to rapidly embrace and adopt AI to help solve their business and/or technical problems.
AI Acceleration vs AI Hesitancy
The sudden acceleration of AI adoption in this arms race puts immense pressure on companies’ cybersecurity organizations to facilitate AI initiatives without impeding progress. Many company CISOs and their teams have “AI Hesitancy” because they have not been afforded the time to understand the full scope of the cybersecurity risk and attack threat landscape to put the people, processes, procedures, and products in place to embrace AI safely and securely. In A Beginner’s Guide to Securing AI for SecOps, we offer a primer for Security Operations teams to consider securing AI.
AI acceleration causes cybersecurity risks inherent in AI models. HiddenLayer’s AI Model Scanner can empower cybersecurity teams to help companies adopt AI while minimizing cybersecurity risks and attacks.
The Wild Wild West of the New AI Frontier
Today’s AI technological frontier is reminiscent of the Wild West of America in the 1800s. Like early pioneers, those venturing into this new era are motivated by its promise, and first-movers gain significant advantages by staking their claim early before the area becomes saturated. As success stories emerge, they attract an influx of others, including unwanted threat actors. The frontier remains largely lawless despite new regulations due to a lack of enforcement and security resources. Consequently, organizations must take proactive steps to protect themselves and their AI assets.
AI Rush: Supply vs Demand
The hyper-demand for AI and machine learning models is exacerbated by the lack of supply of AI expertise (data scientists, ML engineers, etc) and has created a market explosion of third-party and open-source AI Models. A symptom of this hyper-demand can be seen in the growth of Hugging Face. Billed as the “GitHub of AI Models,” Hugging Face has established itself as the leader in AI Model Marketplaces, where anyone can download AI Models to bootstrap their adoption of AI. In 2023, Hugging Face had about 50,000 models. Today, in a little over a year, they have exceeded 650,000 models created by AI companies and creators. It is clear that we are in the middle of a gold rush in the era of the Dot AI Boom.
Downloading third-party models without validation, attestation, or insights into the trustworthiness of AI models exposes companies to significant cybersecurity risks. Recognizing this as an issue that could impede AI adoption, Microsoft uses HiddenLayer to scan the models in their curated Azure AI catalog on behalf of their customers.
Exploitation of Malicious AI Models
AI Robbery
AI models are uniquely attractive to threat actors and ripe for attack because they contain both sensitive data and code execution capabilities. Threat actors commonly utilize malicious code execution to access sensitive data and intel. In this scenario, the keys to the safe are attached to the safe itself.
What are the most common threats to AI Model Files?
- Arbitrary Code Execution – Arbitrary code can be executed as part of a model format’s intended functionality or by exploiting a vulnerability. An attacker may run any code to compromise a target system, exfiltrate data, poison training data sets, coin mining, encrypt the machine, or worse.
- Network Requests – The machine learning model may execute network requests, allowing for data exfiltration and remote access to a restricted environment.
- Embedded Payloads – Malicious executables and other files can be embedded within a machine learning model in several ways: either appended to a model, injected into the weights and biases via steganography, or bundled as part of a model archive.
- Decompression Vulnerabilities – Some machine learning models can be compressed to a small size when saved but can be designed to expand to an enormous size on load, crashing the system it is loaded on.
- Unsafe Python Modules – Unsafe modules within the Python ecosystem can execute arbitrary code and be used to compromise a machine.
- File System Access – The machine learning model can access the local file system, allowing for data exfiltration or arbitrary file writes to the file system.
- Exploitation – Machine learning models are not impervious to typical vulnerabilities such as buffer overflows and path traversals when parsing the model file. These can then be used to exploit the host machine to achieve arbitrary code execution, arbitrary file writes, and more.
HiddenLayer Model Scanner
HiddenLayer’s Model Scanner performs a deep introspective analysis of AI models with the industry’s most comprehensive breadth and depth of coverage. It recognizes and parses all the major model file formats to identify cybersecurity risks and threats embedded in the model’s layers, tensors, and functionality using HiddenLayer’s patented detection techniques.
AI Model Format War
AI Models come in all types of flavors. Each one of them has nuances and capabilities that can expose their vulnerabilities to be exploited. Some of the most commonly used AI Model formats seen in the wild are:
Model FormatDescriptionFile ExtensionsGGUFGGUF is a file format for storing models for inference with GGML and executors based on GGML. GGUF is a binary format that is designed for fast loading and saving of models, and for ease of reading. Models are traditionally developed using PyTorch or another framework and then converted to GGUF for use in GGML.ggufH5H5 is a file format used to organize large datasets and can contain multiple files that can potentially reference each other. It’s very common to bundle datasets, weights, or supporting scripts in an h5 file..h5KerasKeras is a high-level neural network API written in Python and runs on top of multiple open-source ML frameworks like TensorFlow. Keras model format can be a directory or a single file..keras, .tfNemoNemo models are used to train and reproduce Conversational AI models and are compatible with the PyTorch ecosystem..nemoNumPyA file type for storing N-dimensional arrays, a Python datatype that is very common in machine learning..npyONNXA machine learning file format that allows for easy exchange between different frameworks. ONNX file stores model information as a graph object..onnxPickleA file type that serializes Python objects. Can contain data, trained models, and weights..pkl, .picklePytorchModel format primarily used by PyTorch ML framework. The format is a compressed ZIP archive containing a data.pkl (pickle file) and associated model weights. .pt, .bin, .zipSafetensorsSafetensors is a safe and fast file format for storing and loading tensors. Safetensors are meant to replace PyTorch models distributed as pickles with safer versions, where only the tensor is serialized without any surrounding code and logic..safetensorsTensorflowTensorFlow is a free and open-source software library for machine learning and artificial intelligence. It can be used across a range of tasks but has a particular focus on training and inference of deep neural networks. Tensorflow native save format (.tf) is a directory containing variables and three protobuf files. The SavedModel format persists the graph of a TensorFlow model to disk..savedmodel, .tf, .pb
Security Checkpoints Throughout the AI Model Lifecycle
AI model training and development process can be very dynamic, with constant changes to data, functionality, weights, and biases from a team of contributors. This dynamic nature makes implementing traditional change control, code audits, and chain of custody difficult.
HiddenLayer Model Scanner should be used to implement security checkpoints at multiple stages of the AI Operations lifecycle to ensure the security and trustworthiness of the model:
- Scan third-party models upon initial download to ensure the foundational model is free of vulnerabilities or malicious code. This should be done before feeding it sensitive training data.
- Perform scans on all models within an MLOps Tools registry/catalog to identify any existing latent security risks.
- Scan models whenever a new version is created to identify supply chain attacks or inadvertent inclusions of new vulnerabilities
- Enforce model scanning before transitioning to production to confirm their safety and take a snapshot of the last known safe state.

HiddenLayer AISec Platform integrates with MLOps tools (such as Microsoft AzureML, Databricks, and others) to synchronize and aggregate different MLOps tools’ model registry into HiddenLayer’s Model Inventory to give security teams a single view of all the company’s models in development.

Detection Analysis & Incident Response
When the Model Scanner detects an issue with an AI model, it provides insightful details to allow security teams to collaborate with data science teams to investigate further. In this example, the scan of a Keras file found that the model has a lambda function that allows for arbitrary code execution.

The presence of this function could allow a threat actor to exploit its capabilities to execute malware, a backdoor, or any other capability to accomplish their goal. Many questions arise from this single detection:
- Was this vulnerability already embedded in the 3rd party model used as the foundation for this version?
- Was the Data Science team aware of this capability in the model?
- If this comes as a surprise, could this be evidence of a supply chain attack by an external threat actor, an internal threat, or a result of a compromised credential?
- If the Data Science team was aware of the functionality, perhaps they felt it was important for the model to deliver on its purpose but were unaware of the cybersecurity risks it poses to the company.
By detecting this early in the MLOps lifecycle and gaining valuable insight from the detection details and subsequent investigation, security teams could save data science teams and the company time and money spent on the training and development of insecure AIModels or, worse, a potential breach resulting from the exploitation of the vulnerability.
Conclusion
Companies can go from “AI Hesitancy” to “AI Acceleration” if they take the steps to include security into their AI adoption early in their journey. HiddenLayer AISec Platform and Model Scanner can be used as security checkpoints at key milestones in the MLOps life cycle to identify embedded vulnerabilities and malicious code within AI Models, reducing the company’s risk of attacks and breaches and increasing their AI Security Posture.

Introducing the Security for AI Council
It’s been just a few short weeks since RSAC 2024, an event that left a lasting impression on all who attended. This year, the theme “The Art of the Possible” resonated deeply, showcasing the industry’s commitment to exploring new horizons and embracing innovative ideas. It was inspiring to witness the collective enthusiasm for Possibility Thinking, a cognitive perspective that focuses on exploring potential opportunities and imagining various scenarios without being constrained by current realities or limitations. It involves a mindset open to new ideas, creative solutions, and innovative thinking. The theme and general ambiance set the stage perfectly for us to launch something big, the Security for AI Council.
Introduction
It’s been just a few short weeks since RSAC 2024, an event that left a lasting impression on all who attended. This year, the theme “The Art of the Possible” resonated deeply, showcasing the industry’s commitment to exploring new horizons and embracing innovative ideas. It was inspiring to witness the collective enthusiasm for Possibility Thinking, a cognitive perspective that focuses on exploring potential opportunities and imagining various scenarios without being constrained by current realities or limitations. It involves a mindset open to new ideas, creative solutions, and innovative thinking. The theme and general ambiance set the stage perfectly for us to launch something big, the Security for AI Council.
In the security industry, we are accustomed to thinking about the possibility of harm, which can cause a pause in innovation in some organizations. At HiddenLayer, we believe that when it comes to AI, any organization can go from pause to possibilities. Those who can understand the risks AI can create and can focus on practical solutions to mitigate those risks will be able to unlock the enormous potential AI has to create worldwide social and economic benefits.
Launching the Security for AI Council
That is why we created the Security for AI Council. The council’s mission is to revolutionize the cybersecurity industry by spearheading the secure adoption of AI, the most impactful technology of our time. We want to empower the industry to fully realize AI’s potential while ensuring its usage is secure and responsible.
Our goals for the council are to:
- Accelerate Security for AI adoption
- Diminish the frequency, severity, and impact of attacks against AI
- Develop the CISO roadmap for securing AI
- Understand and influence emerging new regulations & industry frameworks
- Define success in securing this rapidly growing technology
The kickoff of the Security for AI Council that I led at the RSA conference this year was filled with creativity, innovation, visionary thinking, and a positive outlook for the future. At our kickoff, 18 security executives from a wide range of sectors joined the council, including higher education, entertainment, healthcare, technology, financial services, retail, and manufacturing.
We discussed a wide range of topics, including:
- AI Governance
- Secure by Design for AI
- Security Operations & Vulnerability Management for AI
- Data Scientists as a potential substantial insider risk
- Evolving regulatory environment for AI
During our discussions on the various threats to AI, a substantial positive outlook emerged from everyone. The focus was on proactive measures to address these risks and the potential achievements that can result. It was invigorating to hear about the opportunities within our challenges and how obstacles can be transformed into stepping stones for progress rather than roadblocks hindering innovation.
Our Mission
Our mission to accelerate the secure adoption of AI, mitigate risks, and influence emerging regulations is more critical than ever. The discussions at the RSA conference reinforced the need for collaborative efforts and creative solutions to unlock AI's full potential. By embracing Possibility Thinking, we can turn challenges into opportunities and pave the way for groundbreaking advancements in cybersecurity.
As we progress, the Security for AI Council will continue to champion this mindset, fostering an environment where innovation thrives and security is paramount. Together, we will ensure that AI not only drives economic and social benefits but also does so in a manner that is safe, responsible, and sustainable. The journey ahead is promising, and with the collective efforts of industry leaders, the art of the possible is within our grasp.
If you are a CISO or a security executive and are interested in joining the council or learning more, please reach out to me at Mharkins@hiddenlayer.com.

Understand AI Security, Clearly Defined
Explore our glossary to get clear, practical definitions of the terms shaping AI security, governance, and risk management.
