Learn from our AI Security Experts

Discover every model. Secure every workflow. Prevent AI attacks - without slowing innovation.

All Resources

Insights

Why Revoking Biden’s AI Executive Order Won’t Change Course for CISOs

On 20 January 2025, President Donald Trump rescinded former President Joe Biden’s 2023 executive order on artificial intelligence (AI), which had established comprehensive guidelines for developing and deploying AI technologies. While this action signals a shift in federal policy, its immediate impact on the AI landscape is minimal for several reasons.

Governance
Security for AI
Insights

HiddenLayer Achieves ISO 27001 and Renews SOC 2 Type 2 Compliance

Security compliance is more than just a checkbox - it’s a fundamental requirement for protecting sensitive data, building customer trust, and ensuring long-term business growth. At HiddenLayer, security has always been at the core of our mission, and we’re proud to announce that we have achieved SOC 2 Type 2 and ISO 27001 compliance. These certifications reinforce our commitment to providing our customers with the highest level of security and reliability.

Governance
Security for AI
Insights

AI Risk Management: Effective Strategies and Framework

Artificial Intelligence (AI) is no longer just a buzzword—it’s a cornerstone of innovation across industries. However, with great potential comes significant risk. Effective AI Risk Management is critical to harnessing AI’s benefits while minimizing vulnerabilities. From data breaches to adversarial attacks, understanding and mitigating risks ensures that AI systems remain trustworthy, secure, and aligned with organizational goals.

Security for AI
Insights

Security for AI vs. AI Security

When we talk about securing AI, it’s important to distinguish between two concepts that are often conflated: Security for AI and AI Security. While they may sound similar, they address two entirely different challenges.

101’s
Security for AI
Insights

The Next Step in AI Red Teaming: Automation

Red teaming is essential to security: it actively probes defenses, identifies weaknesses, and assesses how systems hold up under simulated attack. For organizations that manage critical infrastructure, every vulnerability poses a risk to data, services, and trust. As systems grow more complex and threats become more sophisticated, traditional red teaming encounters limits, particularly around scale and speed. To address these challenges, we built the next step in red teaming: an <a href="https://hiddenlayer.com/autortai/"><strong>Automated Red Teaming for AI solution</strong></a> that combines intelligence and efficiency to achieve a level of depth and scalability beyond what human-led efforts alone can offer.

Gen AI
Security for AI
Insights

Understanding AI Data Poisoning

Today, AI is woven into everyday technology, driving everything from personalized recommendations to critical healthcare diagnostics. But what happens if the data feeding these AI models is tampered with? This is the risk posed by AI data poisoning—a targeted attack where someone intentionally manipulates training data to disrupt how AI systems operate. Far from science fiction, AI data poisoning is a growing digital security threat that can have real-world impacts on everything from personal safety to financial stability.
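To make the mechanism concrete, here is a minimal, hypothetical sketch of one form of data poisoning: a toy nearest-centroid classifier is trained on clean data, then retrained after an attacker injects a handful of mislabeled outliers, and its accuracy on the original data drops. All data points, labels, and numbers below are invented for illustration; this is not any specific real-world attack.

```python
# Toy illustration of training-data poisoning (all data is made up).
# A nearest-centroid classifier is trained twice: once on clean data,
# once after an attacker injects mislabeled outliers into the training set.

def centroid(points):
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def train(data):
    """data: list of ((x, y), label) pairs; returns one centroid per label."""
    by_label = {}
    for point, label in data:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    # Assign the label whose centroid is closest (squared distance).
    def dist2(c):
        return (c[0] - point[0]) ** 2 + (c[1] - point[1]) ** 2
    return min(model, key=lambda label: dist2(model[label]))

def accuracy(model, data):
    return sum(predict(model, p) == y for p, y in data) / len(data)

# Two well-separated toy clusters, labeled 0 and 1.
clean = [((x, y), 0) for x in (0, 1, 2) for y in (0, 1)] \
      + [((x, y), 1) for x in (8, 9, 10) for y in (0, 1)]

# Poisoning: the attacker injects a few mislabeled outliers, dragging the
# class-0 centroid toward class 1 and corrupting the decision boundary.
poison = [((20.0, 0.5), 0)] * 4

clean_acc = accuracy(train(clean), clean)           # trained on clean data
poisoned_acc = accuracy(train(clean + poison), clean)  # trained on tampered data
```

The point of the sketch is that the attacker never touches the model or the test data; corrupting a small slice of the training set is enough to degrade predictions on inputs the model previously handled perfectly.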

101’s
Insights

The EU AI Act: A Groundbreaking Framework for AI Regulation

Artificial intelligence (AI) has become a central part of our digital society, influencing everything from healthcare to transportation, finance, and beyond. The European Union (EU) has recognized the need to regulate AI technologies to protect citizens, foster innovation, and ensure that AI systems align with European values of privacy, safety, and accountability. In this context, the EU AI Act is the world’s first comprehensive legal framework for AI. The legislation aims to create an ecosystem of trust in AI while balancing the risks and opportunities associated with its development.

Governance
Insights

Key Takeaways from NIST's Recent Guidance

On July 29th, 2024, the National Institute of Standards and Technology (NIST) released critical guidance that outlines best practices for managing cybersecurity risks associated with AI models. This guidance directly ties into several comments we submitted during the open comment periods, highlighting areas where HiddenLayer effectively addresses emerging cybersecurity challenges.

101’s
Governance
Insights

Three Distinct Categories of AI Red Teaming

As we’ve covered previously, AI red teaming is a highly effective means of assessing and improving the security of AI systems. The term “red teaming” appears many times throughout recent public policy briefings regarding AI.


Understand AI Security, Clearly Defined

Explore our glossary to get clear, practical definitions of the terms shaping AI security, governance, and risk management.