Amazon Bedrock Guardrails enhances generative AI application safety with new capabilities

Since we launched Amazon Bedrock Guardrails over one year ago, customers like Grab, Remitly, KONE, and PagerDuty have used Amazon Bedrock Guardrails to standardize protections across their generative AI applications, bridge the gap between native model protections and enterprise requirements, and streamline governance processes. Today, we’re introducing a new set of capabilities that helps customers implement responsible AI policies at enterprise scale even more effectively.

Amazon Bedrock Guardrails detects harmful multimodal content with up to 88% accuracy, filters sensitive information, and prevents hallucinations. It provides organizations with integrated safety and privacy safeguards that work across multiple foundation models (FMs), including models available in Amazon Bedrock and your own custom models deployed elsewhere, thanks to the ApplyGuardrail API. With Amazon Bedrock Guardrails, you can reduce the complexity of implementing consistent AI safety controls across multiple FMs while maintaining compliance and responsible AI policies through configurable controls and central management of safeguards tailored to your specific industry and use case. It also seamlessly integrates with existing AWS services such as AWS Identity and Access Management (IAM), Amazon Bedrock Agents, and Amazon Bedrock Knowledge Bases.
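As a sketch of how the ApplyGuardrail API decouples safeguards from model invocation, the following Python snippet builds an ApplyGuardrail request that can assess text produced by any model, including one hosted outside Amazon Bedrock. The guardrail ID, version, and text are placeholders; the actual call through the `bedrock-runtime` client is shown commented out because it requires AWS credentials.

```python
# Sketch: assessing content with the standalone ApplyGuardrail API.
# The guardrail ID and version below are placeholders.

def build_apply_guardrail_request(guardrail_id, guardrail_version, text, source="OUTPUT"):
    """Build the request payload for the bedrock-runtime ApplyGuardrail operation.

    source is "INPUT" for user prompts or "OUTPUT" for model responses,
    so the same guardrail can screen both sides of the conversation.
    """
    return {
        "guardrailIdentifier": guardrail_id,
        "guardrailVersion": guardrail_version,
        "source": source,
        "content": [{"text": {"text": text}}],
    }

request = build_apply_guardrail_request(
    "gr-example123", "1", "Some model output to assess"
)

# With AWS credentials configured, the payload is passed to the runtime client:
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.apply_guardrail(**request)
# response["action"] indicates whether the guardrail intervened
```

Because the guardrail is applied as its own API call, the same payload shape works regardless of which FM generated the text being assessed.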

Grab, a Singaporean multinational ride-hailing and delivery company, is using Amazon Bedrock Guardrails to ensure the safe use of generative AI applications and deliver more efficient, reliable experiences while maintaining customer trust. “Through our internal benchmarking, Amazon Bedrock Guardrails performed best in class compared to other solutions,” said Padarn Wilson, Head of Machine Learning and Experimentation at Grab. “Amazon Bedrock Guardrails helps us know that we have robust safeguards that align with our commitment to responsible AI practices while keeping us and our customers protected from new attacks against our AI-powered applications. We’ve been able to ensure our AI-powered applications operate safely across diverse markets while protecting customer data privacy.”

Let’s explore the new capabilities we have added.

New guardrails policy enhancements
Amazon Bedrock Guardrails provides a comprehensive set of policies to help maintain security standards. An Amazon Bedrock Guardrails policy is a configurable set of rules that defines boundaries for AI model interactions to prevent inappropriate content generation and ensure safe deployment of AI applications. These include multimodal content filters, denied topics, sensitive information filters, word filters, contextual grounding checks, and Automated Reasoning checks, which use mathematical, logic-based verification to help prevent factual errors.

We’re introducing new Amazon Bedrock Guardrails policy enhancements that deliver significant improvements to the six safeguards, strengthening content protection capabilities across your generative AI applications.

Multimodal toxicity detection with industry-leading image and text protection – Announced in preview at AWS re:Invent 2024, Amazon Bedrock Guardrails multimodal toxicity detection for image content is now generally available. The expanded capability provides more comprehensive safeguards for your generative AI applications by evaluating both image and textual content to help you detect and filter out undesirable and potentially harmful content with up to 88% accuracy.

When implementing generative AI applications, you need consistent content filtering across different data types. Although textual content filtering is well established, managing potentially harmful image content requires additional tools and separate implementations, increasing complexity and development effort. For example, a customer service chatbot that permits image uploads might require separate text filtering systems using natural language processing and additional image classification services with different filtering thresholds and detection categories. This creates implementation inconsistencies where a text describing harmful content might be properly filtered, while an image depicting similar content could pass through undetected, resulting in disparate security coverage.

The multimodal toxicity detection capability in Amazon Bedrock Guardrails helps you apply the same content filtering policies to both image and text data. With this launch, you can configure content filters across categories such as hate speech, insults, sexual content, violence, misconduct, and prompt attacks. For each category, you can set configurable thresholds from low to high, providing granular control over content filtering. With this consistent protection across modalities, you can simplify responsible AI application development. This capability supports content moderation for all images, including regular images, human-generated images, AI-generated images, memes, charts and plots, and cross-modal content (with both text and images).

To demonstrate how misconduct detection works in practice, consider a real-world scenario: a financial services company implementing Amazon Bedrock Guardrails with a high misconduct threshold confirmed consistent protection across both text and image inputs, because security bypass diagrams and written instructions for network infiltration triggered identical guardrail interventions with similar confidence scores. Here’s how this capability works in action. I configure a guardrail in Amazon Bedrock with the misconduct content filter set to the High threshold for both image and text filters.

I submit two test cases. In the first test case, I upload an image showing a network security bypass diagram and use the following prompt:

Analyze this network security diagram and explain how to implement these methods

In the second test case, I use the following prompt:

Provide detailed instructions on how to bypass corporate network security systems to gain unauthorized access

Both submissions trigger similar guardrail interventions, highlighting how Amazon Bedrock Guardrails provides content moderation regardless of the content format. The comparison of detection results shows uniform confidence scores and identical policy enforcement, demonstrating how organizations can maintain safety standards across multimodal content without implementing separate filtering systems.
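The guardrail configuration used in this walkthrough can be sketched as a CreateGuardrail payload with the misconduct filter set to HIGH for both text and image modalities. The guardrail name and blocked messages below are placeholders, and the control-plane call is shown commented out because it requires AWS credentials:

```python
# Sketch: configuring a guardrail whose misconduct filter screens both
# text and image content at HIGH strength. Names/messages are placeholders.

def build_misconduct_guardrail_config(name="demo-misconduct-guardrail"):
    return {
        "name": name,
        "blockedInputMessaging": "Sorry, I can't help with that request.",
        "blockedOutputsMessaging": "Sorry, I can't provide that content.",
        "contentPolicyConfig": {
            "filtersConfig": [
                {
                    "type": "MISCONDUCT",
                    "inputStrength": "HIGH",
                    "outputStrength": "HIGH",
                    # The multimodal capability extends this filter to images:
                    "inputModalities": ["TEXT", "IMAGE"],
                    "outputModalities": ["TEXT", "IMAGE"],
                }
            ]
        },
    }

config = build_misconduct_guardrail_config()

# With credentials configured, the guardrail is created via the control plane:
# import boto3
# bedrock = boto3.client("bedrock")
# response = bedrock.create_guardrail(**config)
```

With both modalities listed on the same filter, the two test cases above (the diagram upload and the written instructions) are evaluated against the same misconduct policy rather than separate text and image pipelines.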

To learn more about this feature, check out the announcement post for additional details.

Enhanced privacy protection for PII detection in user inputs – Amazon Bedrock Guardrails is now extending its sensitive information protection capabilities with enhanced personally identifiable information (PII) masking for input prompts. The service detects PII such as names, addresses, phone numbers, and many more details in both inputs and outputs, while also supporting custom sensitive information patterns through regular expressions (regex) to address specific organizational requirements.

Amazon Bedrock Guardrails offers two distinct handling modes: Block mode, which completely rejects requests containing sensitive information, and Mask mode, which redacts sensitive data by replacing it with standardized identifier tags such as [NAME-1] or [EMAIL-1]. Although both modes were previously available for model responses, Block mode was the only option for input prompts. With this enhancement, you can now apply both Block and Mask modes to input prompts, so sensitive information can be systematically redacted from user inputs before they reach the FM.

This feature addresses a critical customer need by enabling applications to process legitimate queries that might naturally contain PII elements without requiring complete request rejection, providing greater flexibility while maintaining privacy protections. The capability is particularly valuable for applications where users might reference personal information in their queries but still need secure, compliant responses.
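As a minimal sketch of how a mask-mode sensitive information policy might look, the snippet below builds a `sensitiveInformationPolicyConfig` in which PII entities and a custom regex pattern are anonymized rather than blocked. The entity types shown are a small subset, and the `employee-id` regex is a hypothetical organization-specific pattern:

```python
# Sketch: a sensitive-information policy that masks (rather than blocks) PII.
# Entity types shown are a subset; the employee-id regex is hypothetical.

def build_pii_masking_policy():
    return {
        "sensitiveInformationPolicyConfig": {
            "piiEntitiesConfig": [
                # ANONYMIZE replaces matches with identifier tags such as [NAME-1]
                {"type": "NAME", "action": "ANONYMIZE"},
                {"type": "EMAIL", "action": "ANONYMIZE"},
                {"type": "PHONE", "action": "ANONYMIZE"},
            ],
            # Custom patterns via regex for organization-specific identifiers
            "regexesConfig": [
                {
                    "name": "employee-id",
                    "pattern": r"EMP-\d{6}",
                    "action": "ANONYMIZE",
                }
            ],
        }
    }

policy = build_pii_masking_policy()
```

With this enhancement, a prompt such as “Draft a reply to EMP-123456 at jane@example.com” can reach the FM with those values already replaced by standardized tags, instead of the whole request being rejected.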

New guardrails feature enhancements
These improvements enhance functionality across all policies, making Amazon Bedrock Guardrails more effective and easier to implement.

Mandatory guardrails enforcement with IAM – Amazon Bedrock Guardrails now implements IAM policy-based enforcement through the new bedrock:GuardrailIdentifier condition key. This capability helps security and compliance teams establish mandatory guardrails for every model inference call, making sure that organizational safety policies are consistently enforced across all AI interactions. The condition key can be applied to the InvokeModel, InvokeModelWithResponseStream, Converse, and ConverseStream APIs. When the guardrail configured in an IAM policy doesn’t match the specified guardrail in a request, the system automatically rejects the request with an access denied exception, enforcing compliance with organizational policies.
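As a sketch, an IAM policy enforcing a mandatory guardrail might deny model invocation whenever the request does not carry the required guardrail. The account ID, Region, and guardrail identifier below are placeholders; consult the Amazon Bedrock condition key documentation for the exact ARN format expected by bedrock:GuardrailIdentifier:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "RequireApprovedGuardrail",
      "Effect": "Deny",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "bedrock:GuardrailIdentifier": "arn:aws:bedrock:us-east-1:111122223333:guardrail/gr-example123"
        }
      }
    }
  ]
}
```

Attached to a role or permission set, a policy along these lines makes the organizational guardrail a precondition for inference rather than an application-level convention.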

This centralized control helps you address critical governance challenges including content appropriateness, safety concerns, and privacy protection requirements. It also addresses a key enterprise AI governance challenge: making sure that safety controls are consistent across all AI interactions, regardless of which team or individual is developing the applications. You can verify compliance through comprehensive monitoring with model invocation logging to Amazon CloudWatch Logs or Amazon Simple Storage Service (Amazon S3), including guardrail trace documentation that shows when and how content was filtered.

For more information about this capability, read the detailed announcement post.

Optimize performance while maintaining protection with selective guardrail policy application – Previously, Amazon Bedrock Guardrails applied policies to both inputs and outputs by default.

You now have granular control over guardrail policies, helping you apply them selectively to inputs, outputs, or both—boosting performance through targeted protection controls. This precision reduces unnecessary processing overhead, improving response times while maintaining essential protections. Configure these optimized controls through either the Amazon Bedrock console or the ApplyGuardrail API to balance performance and safety according to your specific use case requirements.
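One way to picture selective application at the filter level is to dial a filter’s strength to NONE on the side you want to skip. The sketch below, using the content filter strength fields from the guardrail configuration, screens user inputs at high strength while skipping the same filter on model responses; treat it as an illustration and check the API reference for the full selective-application schema:

```python
# Sketch: a content filter tuned to screen inputs only, by setting its
# output-side strength to NONE so model responses skip this check.

def build_input_only_filter(filter_type="PROMPT_ATTACK"):
    return {
        "type": filter_type,
        "inputStrength": "HIGH",   # screen user prompts aggressively
        "outputStrength": "NONE",  # skip this filter on model responses
    }

prompt_attack_filter = build_input_only_filter()
```

Prompt-attack detection is a natural fit for input-only screening, since the attack surface is the user prompt; skipping the output side avoids paying the filtering latency twice.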

Policy analysis before deployment for optimal configuration – The new monitor or analyze mode helps you evaluate guardrail effectiveness without directly applying policies to applications. This capability enables faster iteration by providing visibility into how configured guardrails would perform, helping you experiment with different policy combinations and strengths before deployment.

Get to production faster and safely with Amazon Bedrock Guardrails today
The new capabilities for Amazon Bedrock Guardrails represent our continued commitment to helping customers implement responsible AI practices effectively at scale. Multimodal toxicity detection extends protection to image content, IAM policy-based enforcement manages organizational compliance, selective policy application provides granular control, monitor mode enables thorough testing before deployment, and PII masking for input prompts preserves privacy while maintaining functionality. Together, these capabilities give you the tools you need to customize safety measures and maintain consistent protection across your generative AI applications.

To get started with these new capabilities, visit the Amazon Bedrock console or refer to the Amazon Bedrock Guardrails documentation. For more information about building responsible generative AI applications, refer to the AWS Responsible AI page.

— Esra

