Getting Started with AWS Bedrock and Claude 3 using CloudShell

2024-11-22

AWS Bedrock provides a powerful way to interact with various AI models, including Anthropic's Claude 3. In this tutorial, we'll explore how to use AWS CloudShell to interact with Bedrock's API directly. CloudShell provides a browser-based shell environment with AWS CLI pre-installed, making it an ideal starting point for experimenting with Bedrock.

Prerequisites

Before you begin, make sure you have an AWS account with access to Amazon Bedrock, that access to the Claude 3 models has been granted in the Bedrock console, and that your IAM identity has permissions for the bedrock and bedrock-runtime APIs. You'll also need to work in a region where Bedrock and Claude 3 are available, such as us-east-1.

Initial Setup and Verification

First, let's verify our AWS credentials and check available models. Open CloudShell and run:

# Verify AWS credentials
aws sts get-caller-identity 2>/dev/null

# List available Claude 3 models
aws bedrock list-foundation-models --region us-east-1 | grep -A 5 claude-3

This will show the Claude 3 models available in the region. Note that listing a model doesn't mean you can invoke it yet - model access must be requested in the Bedrock console first. We'll be using the Claude 3 Sonnet model for this tutorial.
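
If you only want the model IDs, the AWS CLI's generic --query option (a JMESPath filter, not a Bedrock-specific flag) gives cleaner output. Something like the following should work, assuming your CLI version supports the --by-provider filter:

# Print only the Claude 3 model IDs
aws bedrock list-foundation-models \
    --region us-east-1 \
    --by-provider anthropic \
    --query "modelSummaries[?contains(modelId, 'claude-3')].modelId" \
    --output text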

Creating Your First Prompt

Let's create a simple prompt file to interact with Claude 3. We'll ask it to explain what AWS Bedrock is:

# Create a basic prompt file
cat << 'EOF' > prompt.json
{
    "anthropic_version": "bedrock-2023-05-31",
    "messages": [
        {
            "role": "user",
            "content": "What is AWS Bedrock?"
        }
    ],
    "max_tokens": 500,
    "temperature": 0.7,
    "system": "You are a helpful AI assistant that specializes in explaining AWS services."
}
EOF

Let's verify our prompt file:

echo "Our prompt file contains:"
cat prompt.json
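
Since the request body must be valid JSON, it's also worth running the file through jq (pre-installed in CloudShell), which pretty-prints it and fails loudly on any syntax error:

# Pretty-print the prompt and catch any JSON syntax errors
jq . prompt.json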

Sending the Request

Now let's send our request to Claude 3:

aws bedrock-runtime invoke-model \
    --model-id anthropic.claude-3-sonnet-20240229-v1:0 \
    --body file://prompt.json \
    --content-type application/json \
    --region us-east-1 \
    --cli-binary-format raw-in-base64-out \
    response.json

To see the response:

cat response.json
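
The raw response can be hard to read. Since the body follows Anthropic's Messages format, you can pull out just the generated text with jq:

# Extract only Claude's reply text from the response
jq -r '.content[0].text' response.json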

Creative Writing Example

Let's try something more creative - asking Claude to write a haiku about cloud computing:

cat << 'EOF' > creative_prompt.json
{
    "anthropic_version": "bedrock-2023-05-31",
    "messages": [
        {
            "role": "user",
            "content": "Write a haiku about cloud computing"
        }
    ],
    "max_tokens": 500,
    "temperature": 1.0,
    "system": "You are a creative AI assistant that specializes in poetry."
}
EOF

aws bedrock-runtime invoke-model \
    --model-id anthropic.claude-3-sonnet-20240229-v1:0 \
    --body file://creative_prompt.json \
    --content-type application/json \
    --region us-east-1 \
    --cli-binary-format raw-in-base64-out \
    creative_response.json
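
As before, view the result with cat creative_response.json, or extract just the haiku:

# Show only the haiku text
jq -r '.content[0].text' creative_response.json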

Multi-Turn Conversations

Claude 3 supports multi-turn conversations. Here's how to structure a conversation about AWS Lambda:

cat << 'EOF' > conversation.json
{
    "anthropic_version": "bedrock-2023-05-31",
    "messages": [
        {
            "role": "user",
            "content": "I want to learn about AWS Lambda"
        },
        {
            "role": "assistant",
            "content": "AWS Lambda is a serverless compute service that lets you run code without managing servers. Would you like to know about its key features, use cases, or how to get started?"
        },
        {
            "role": "user",
            "content": "Tell me about the key features"
        }
    ],
    "max_tokens": 500,
    "temperature": 0.7,
    "system": "You are a helpful AI assistant that specializes in explaining AWS services."
}
EOF

aws bedrock-runtime invoke-model \
    --model-id anthropic.claude-3-sonnet-20240229-v1:0 \
    --body file://conversation.json \
    --content-type application/json \
    --region us-east-1 \
    --cli-binary-format raw-in-base64-out \
    conversation_response.json
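
To continue the conversation, append Claude's reply and your next question to the messages array before invoking the model again. Here is one way to do that with jq; next_turn.json and the follow-up question are just illustrative:

# Append the assistant's reply and a new user turn to the message history
reply=$(jq -r '.content[0].text' conversation_response.json)
jq --arg reply "$reply" \
    '.messages += [{"role": "assistant", "content": $reply}, {"role": "user", "content": "How is Lambda priced?"}]' \
    conversation.json > next_turn.json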

Understanding Errors

Here's an example of what happens when a request is malformed. Let's create a prompt that omits the required anthropic_version and max_tokens fields:

cat << 'EOF' > bad_prompt.json
{
    "messages": [
        {
            "role": "user",
            "content": "Hello"
        }
    ]
}
EOF

# This will fail with a validation error about the missing required fields
aws bedrock-runtime invoke-model \
    --model-id anthropic.claude-3-sonnet-20240229-v1:0 \
    --body file://bad_prompt.json \
    --content-type application/json \
    --region us-east-1 \
    --cli-binary-format raw-in-base64-out \
    error_response.json
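
The validation message itself is printed to stderr, not written to error_response.json (which may not even be created). If you want to save it for a closer look, redirect stderr when running the command; error_details.txt is just an illustrative filename:

# Capture the ValidationException message that the CLI prints to stderr
aws bedrock-runtime invoke-model \
    --model-id anthropic.claude-3-sonnet-20240229-v1:0 \
    --body file://bad_prompt.json \
    --content-type application/json \
    --region us-east-1 \
    --cli-binary-format raw-in-base64-out \
    error_response.json 2> error_details.txt

cat error_details.txt

If you create error_details.txt, remove it along with the other files in the cleanup step below.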

Cleanup

Don't forget to clean up your files:

rm -f prompt.json response.json creative_prompt.json creative_response.json \
    conversation.json conversation_response.json bad_prompt.json error_response.json

Key Points to Remember

  1. API Version: Always use bedrock-2023-05-31 as the anthropic_version
  2. Model ID: The full model ID for Claude 3 Sonnet is anthropic.claude-3-sonnet-20240229-v1:0
  3. Message Format: Messages must be in the correct array format with role and content
  4. System Prompt: Optional but helpful for setting context
  5. Command Structure: Use aws bedrock for management calls (like listing models) and aws bedrock-runtime for model invocation

Common Parameters

The request body uses the same handful of parameters in every call:

  1. anthropic_version: Required; always bedrock-2023-05-31
  2. messages: The conversation history, an array of objects with role and content
  3. max_tokens: Required; the maximum number of tokens Claude may generate
  4. temperature: Controls randomness, from 0 (focused) up to 1 (more creative)
  5. system: Optional system prompt that sets context and behavior

Response Structure

The response written to the output file follows the Anthropic Messages format. It will include the generated text in a content array, the assistant role, a stop_reason (such as end_turn or max_tokens), and a usage object with input and output token counts.
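
An abbreviated example of the typical shape (values elided; exact fields can vary slightly by model version):

{
    "id": "...",
    "type": "message",
    "role": "assistant",
    "model": "anthropic.claude-3-sonnet-20240229-v1:0",
    "content": [{"type": "text", "text": "..."}],
    "stop_reason": "end_turn",
    "usage": {"input_tokens": "...", "output_tokens": "..."}
}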

Conclusion

AWS CloudShell provides an excellent environment for getting started with Bedrock and Claude 3. The CLI interface gives you direct access to the API, making it perfect for testing and development. As you become more comfortable with these basic operations, you can expand into more complex use cases or integrate these calls into your applications.

Remember to check your AWS permissions and model access before running these commands, and ensure you're in a supported region (like us-east-1) when using the service.

Resources