Introduction
Generative AI creates human-like content that must align with organizational standards and safety guidelines. Amazon Bedrock Guardrails helps organizations implement customized safeguards for their AI applications by filtering harmful content, blocking denied topics, and removing sensitive information.
The ApplyGuardrail API enables content validation without invoking foundation models. The API offers three key benefits:
- Evaluate text against your defined rules, including topic avoidance, content filters, and PII detection
- Validate data at any point in your application flow, from user input to final output
- Assess content without calling foundation models, giving you control over your application's security workflow
In this article, you will learn how to use the Amazon Bedrock ApplyGuardrail API to enforce responsible AI behavior by detecting and masking sensitive information, such as names, addresses, and email addresses, directly in your application inputs.
Guardrail Configuration JSON
The guardrail used in this article has six content filters at HIGH input strength, plus five PII entity types: NAME, EMAIL, CREDIT_DEBIT_CARD_NUMBER, and ADDRESS are anonymized on input and output, while PASSWORD is blocked outright.
{
  "name": "test_guardrail_vijai_test",
  "guardrailId": "xxxx0xx9x6xx",
  "guardrailArn": "arn:aws:bedrock:us-east-1:11111111111:guardrail/xxxx0xx9x6xx",
  "version": "DRAFT",
  "status": "READY",
  "contentPolicy": {
    "filters": [
      {
        "type": "VIOLENCE",
        "inputStrength": "HIGH",
        "outputStrength": "HIGH",
        "inputAction": "BLOCK",
        "outputAction": "BLOCK"
      },
      {
        "type": "PROMPT_ATTACK",
        "inputStrength": "HIGH",
        "outputStrength": "NONE",
        "inputAction": "BLOCK"
      },
      {
        "type": "MISCONDUCT",
        "inputStrength": "HIGH",
        "outputStrength": "HIGH",
        "inputAction": "BLOCK",
        "outputAction": "BLOCK"
      },
      {
        "type": "HATE",
        "inputStrength": "HIGH",
        "outputStrength": "HIGH",
        "inputAction": "BLOCK",
        "outputAction": "BLOCK"
      },
      {
        "type": "SEXUAL",
        "inputStrength": "HIGH",
        "outputStrength": "HIGH",
        "inputAction": "BLOCK",
        "outputAction": "BLOCK"
      },
      {
        "type": "INSULTS",
        "inputStrength": "HIGH",
        "outputStrength": "HIGH",
        "inputAction": "BLOCK",
        "outputAction": "BLOCK"
      }
    ]
  },
  "sensitiveInformationPolicy": {
    "piiEntities": [
      {
        "type": "NAME",
        "action": "BLOCK",
        "inputAction": "ANONYMIZE",
        "outputAction": "ANONYMIZE",
        "inputEnabled": true,
        "outputEnabled": true
      },
      {
        "type": "EMAIL",
        "action": "BLOCK",
        "inputAction": "ANONYMIZE",
        "outputAction": "ANONYMIZE",
        "inputEnabled": true,
        "outputEnabled": true
      },
      {
        "type": "CREDIT_DEBIT_CARD_NUMBER",
        "action": "BLOCK",
        "inputAction": "ANONYMIZE",
        "outputAction": "ANONYMIZE",
        "inputEnabled": true,
        "outputEnabled": true
      },
      {
        "type": "ADDRESS",
        "action": "BLOCK",
        "inputAction": "ANONYMIZE",
        "outputAction": "ANONYMIZE",
        "inputEnabled": true,
        "outputEnabled": true
      },
      {
        "type": "PASSWORD",
        "action": "BLOCK",
        "inputAction": "BLOCK",
        "outputAction": "BLOCK",
        "inputEnabled": true,
        "outputEnabled": true
      }
    ],
    "regexes": []
  },
  "createdAt": "2025-06-17T15:13:12.089441+00:00",
  "updatedAt": "2025-06-17T15:19:59.408396+00:00",
  "statusReasons": [],
  "failureRecommendations": [],
  "blockedInputMessaging": "Sorry, the model cannot answer this question.",
  "blockedOutputsMessaging": "Sorry, the model cannot answer this question."
}
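You can also create this guardrail programmatically instead of through the console. The following is a minimal sketch using the boto3 control-plane client (service name "bedrock", as opposed to the "bedrock-runtime" client used later for ApplyGuardrail). It mirrors the filters and PII entities above; the name is illustrative, and the returned guardrailId is the identifier you pass to the ApplyGuardrail API.

import boto3

# Control-plane client: guardrails are managed through "bedrock",
# not "bedrock-runtime"
bedrock_admin = boto3.client("bedrock", region_name="us-east-1")

# Minimal sketch mirroring the configuration shown above
response = bedrock_admin.create_guardrail(
    name="test_guardrail_vijai_test",  # illustrative name
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "VIOLENCE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "PROMPT_ATTACK", "inputStrength": "HIGH", "outputStrength": "NONE"},
            {"type": "MISCONDUCT", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "SEXUAL", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "INSULTS", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    sensitiveInformationPolicyConfig={
        "piiEntitiesConfig": [
            {"type": "NAME", "action": "ANONYMIZE"},
            {"type": "EMAIL", "action": "ANONYMIZE"},
            {"type": "CREDIT_DEBIT_CARD_NUMBER", "action": "ANONYMIZE"},
            {"type": "ADDRESS", "action": "ANONYMIZE"},
            {"type": "PASSWORD", "action": "BLOCK"},
        ]
    },
    blockedInputMessaging="Sorry, the model cannot answer this question.",
    blockedOutputsMessaging="Sorry, the model cannot answer this question.",
)

print(response["guardrailId"])  # Pass this ID to the ApplyGuardrail API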
Use Case
Let's consider a scenario where a user provides input that contains personal details:
"Hi, my name is Joe living in United Kingdom. Which car brand is reliable?"
The configured guardrail will detect and mask the PII in this input:
- Detect the name ("Joe")
- Detect the address ("United Kingdom")
- Apply the ANONYMIZE action to mask those values
The transformed output looks like the following:
"Hi, my name is {NAME} living in {ADDRESS}. Which car brand is reliable?"
Python Code
import json

import boto3

# Initialize the Bedrock runtime client
bedrock = boto3.client(
    "bedrock-runtime",
    region_name="us-east-1",
)

# Define the input content to validate
content = [
    {
        "text": {
            "text": "Hi, my name is Joe living in United Kingdom. Which car brand is reliable?"
        }
    }
]

# Call the ApplyGuardrail API against the draft version of the guardrail
response = bedrock.apply_guardrail(
    content=content,
    source="INPUT",
    guardrailIdentifier="xxxx0xx9x6xx",
    guardrailVersion="DRAFT",
)

# Pretty-print the JSON response
print(json.dumps(response, indent=2, default=str))
Response
{
  "ResponseMetadata": {
    "RequestId": "0ffd3f32-x123-123y-xxxx-6d3e4e0d7615",
    "HTTPStatusCode": 200,
    "HTTPHeaders": {
      "date": "Tue, 17 Jun 2025 15:57:41 GMT",
      "content-type": "application/json",
      "content-length": "1151",
      "connection": "keep-alive",
      "x-amzn-requestid": "0ffd3f32-x123-123y-xxxx-6d3e4e0d7615"
    },
    "RetryAttempts": 0
  },
  "usage": {
    "topicPolicyUnits": 0,
    "contentPolicyUnits": 1,
    "wordPolicyUnits": 0,
    "sensitiveInformationPolicyUnits": 1,
    "sensitiveInformationPolicyFreeUnits": 0,
    "contextualGroundingPolicyUnits": 0,
    "contentPolicyImageUnits": 0
  },
  "action": "GUARDRAIL_INTERVENED",
  "actionReason": "Guardrail masked.",
  "outputs": [
    {
      "text": "Hi, my name is {NAME} living in {ADDRESS}. Which car brand is reliable?"
    }
  ],
  "assessments": [
    {
      "sensitiveInformationPolicy": {
        "piiEntities": [
          {
            "match": "Joe",
            "type": "NAME",
            "action": "ANONYMIZED",
            "detected": true
          },
          {
            "match": "United Kingdom",
            "type": "ADDRESS",
            "action": "ANONYMIZED",
            "detected": true
          }
        ]
      },
      "invocationMetrics": {
        "guardrailProcessingLatency": 343,
        "usage": {
          "topicPolicyUnits": 0,
          "contentPolicyUnits": 1,
          "wordPolicyUnits": 0,
          "sensitiveInformationPolicyUnits": 1,
          "sensitiveInformationPolicyFreeUnits": 0,
          "contextualGroundingPolicyUnits": 0,
          "contentPolicyImageUnits": 0
        },
        "guardrailCoverage": {
          "textCharacters": {
            "guarded": 73,
            "total": 73
          }
        }
      }
    }
  ],
  "guardrailCoverage": {
    "textCharacters": {
      "guarded": 73,
      "total": 73
    }
  }
}
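In a real application, you would typically branch on the action field and use the transformed text rather than printing the whole payload; the usage block also reports the policy units consumed, which is useful for cost tracking. Below is a minimal sketch of that pattern (the helper name safe_text is hypothetical, and the guardrail ID is the placeholder used throughout this article):

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def safe_text(text: str, source: str = "INPUT") -> str:
    """Return guardrail-processed text, or the original if nothing was flagged."""
    result = bedrock.apply_guardrail(
        content=[{"text": {"text": text}}],
        source=source,  # "INPUT" for user prompts, "OUTPUT" for model responses
        guardrailIdentifier="xxxx0xx9x6xx",  # placeholder ID from the example above
        guardrailVersion="DRAFT",
    )
    if result["action"] == "GUARDRAIL_INTERVENED":
        # For ANONYMIZE, outputs holds the masked text; for BLOCK, it holds
        # the configured blocked message.
        return result["outputs"][0]["text"]
    return text

print(safe_text("Hi, my name is Joe living in United Kingdom. Which car brand is reliable?"))
# Expected: Hi, my name is {NAME} living in {ADDRESS}. Which car brand is reliable?

Because the same call accepts source="OUTPUT", the helper can also screen a foundation model's response before it is returned to the user, covering both ends of the application flow.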
Summary
In this article, you learned how to use the Amazon Bedrock ApplyGuardrail API from Python to detect and mask sensitive personally identifiable information (PII) in application inputs, without invoking a foundation model.