    Chapter 8: Integrate Click with the Cloud #

    Cloud computing is a strong use case for command-line tool development. The essence of the command line is minimalism: build a tool to solve one problem, then build another tool to solve the next. There is no “hotter” skill than cloud computing.

    One way to think about cloud computing is as a new type of operating system. In the Unix operating system, small tools like awk, sed, and cut allow a user to glue together solutions in bash. Similarly, a Python command-line tool in the cloud can “glue” cloud services together. A well-crafted command-line tool can be the simplest and most effective way to solve a problem in the cloud.
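
    To make the glue idea concrete, here is a minimal sketch: a hypothetical Click command (not part of this chapter’s project) that uses boto3 to print every object key in an S3 bucket.

    #!/usr/bin/env python
    import click
    import boto3


    @click.command()
    @click.option("--bucket", prompt="S3 Bucket", help="The S3 bucket to list")
    def ls(bucket):
        """Glue sketch: print every object key in an S3 bucket"""

        s3 = boto3.resource("s3")
        for obj in s3.Bucket(bucket).objects.all():
            click.echo(obj.key)


    if __name__ == "__main__":
        # pylint: disable=no-value-for-parameter
        ls()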

    Cloud Developer Workflow #

    The ideal location to build a command-line tool for cloud computing isn’t your laptop! All of the cloud providers have gravitated toward cloud development environments. This lets the developer build code inside the very environment it runs in.

    Let’s take a look at how this plays out in practice in the following diagram. A developer spins up a cloud-based development environment like AWS Cloud9. Next, they develop a command-line tool that interacts with a cloud service.

    [Diagram: cloud-cli-workflow]

    Why is this workflow so powerful?

    • Development takes place in the environment where the code runs (not your laptop).
    • Deep integrations with the cloud development environment are included.
    • A command-line tool is often the most efficient way to interact with a cloud service like computer vision or object storage.
    • Python itself is the ideal language to glue together solutions in the cloud. Cloud providers build their services in high-performance languages with better performance characteristics than Python; Python can build on top of these services by orchestrating the API calls.

    Using Cloud-Based Development Environments #

    Just as most deployment environments run Linux, most of them now also live in the cloud. The three largest cloud providers are AWS, Azure, and GCP. To write software that deploys to cloud computing environments, it often makes sense to write, test, and build code in cloud-specific development environments. Let’s discuss two of these environments.

    AWS Cloud9 #

    The AWS Cloud9 environment is an IDE that allows a user to write, run, and debug code (including serverless Python code) in the AWS cloud. This simplifies many workflows, including security and network bandwidth. You can watch a walkthrough video here that creates a new AWS Cloud9 environment.

    [Setup CI Pipeline with AWS Cloud9 and CircleCI](https://www.youtube.com/watch?v=4SIFF1PAMbw "Setup CI Pipeline with AWS Cloud9 and CircleCI")

    Build a Computer Vision Tool with AWS Boto3 #

    How would a cloud developer use the power of command-line tools to develop a full-fledged computer vision application that detects image labels through API calls? The following diagram shows the workflow.

    • Create a cloud-based development environment

    • Build a command-line tool that tests out the concept

    • Create a lambda function that triggers this same computer vision logic upon upload of an S3 image.

    [Diagram: computer-vision-flow]

    This image of my dog will be used throughout the examples.

    [Image: dog2, the sample dog photo]

    First, a command-line tool using click accepts a bucket and a file name and passes them to the AWS Rekognition API.

    #!/usr/bin/env python
    import click
    import boto3


    @click.command()
    @click.option("--bucket", prompt="S3 Bucket", help="This is the S3 Bucket")
    @click.option(
        "--name",
        prompt="Image name",
        help="Pass in the name:  i.e. husky.png",
    )
    def labels(bucket, name):
        """This takes an S3 bucket and an image name"""

        print(f"This is the bucketname {bucket} !")
        print(f"This is the imagename {name} !")
        # Call the Rekognition API to detect labels for the S3 object
        rekognition = boto3.client("rekognition")
        response = rekognition.detect_labels(
            Image={"S3Object": {"Bucket": bucket, "Name": name}},
        )
        labels = response["Labels"]
        click.echo(click.style("Found Labels:", fg="red"))
        for label in labels:
            click.echo(click.style(f"{label}", bg="blue", fg="white"))


    if __name__ == "__main__":
        # pylint: disable=no-value-for-parameter
        labels()
    

    When this command-line tool runs, it generates the labels for the image of my dog. Notice how the colored output differentiates the components of the tool’s output.

    python detect.py --bucket computervisionmay16 --name "dog.jpg"
    

    [Screenshot: colored label output from the command-line tool]
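
    Each entry in labels is a dictionary. As a small refinement (not in the original tool), the loop in the click command could print only the Name and Confidence fields that each Rekognition label dictionary contains:

    for label in labels:
        # Each label dict carries at least a Name and a Confidence score
        click.echo(click.style(f"{label['Name']}: {label['Confidence']:.1f}%", bg="blue", fg="white"))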

    After this proof of concept validates the workflow, a good intermediate step is to move the logic into an AWS Lambda function. This Lambda function accepts a JSON payload.

    The payload is a bucket and a name.

    {
      "bucket": "computervisionmay16",
      "name": "dog.jpg"
    }
    

    Next, the Lambda function takes this payload and returns a response with the labels for the object.

    import boto3
    import json


    def lambda_handler(event, context):
        # API Gateway invocations wrap the payload in a "body" string
        if "body" in event:
            event = json.loads(event["body"])
        bucket = event["bucket"]
        name = event["name"]
        rekognition = boto3.client("rekognition")
        response = rekognition.detect_labels(
            Image={"S3Object": {"Bucket": bucket, "Name": name}},
        )

        print(response)
        return {"statusCode": 200, "body": json.dumps(response)}
    
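    Once the function is deployed, it can be invoked directly from Python with boto3. A minimal sketch, assuming a hypothetical deployed function name of detect-image-labels:

    import boto3
    import json

    lambda_client = boto3.client("lambda")
    response = lambda_client.invoke(
        FunctionName="detect-image-labels",  # hypothetical deployed name
        Payload=json.dumps({"bucket": "computervisionmay16", "name": "dog.jpg"}),
    )
    print(json.loads(response["Payload"].read()))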

    A more sophisticated Lambda function would not need a manual API call. Instead, it responds to an event. This step can be tested in the Cloud9 environment as well.

    [Screenshot: testing the Lambda function in the Cloud9 environment]

    The big takeaway is that the same logic works yet again. The label_function does the main work. The lambda_handler parses the event['Records'] payload, which is the PUT event that results from an image stored in Amazon S3.

    import boto3
    from urllib.parse import unquote_plus


    def label_function(bucket, name):
        """This takes an S3 bucket and an image name!"""
        print(f"This is the bucketname {bucket} !")
        print(f"This is the imagename {name} !")
        rekognition = boto3.client("rekognition")
        response = rekognition.detect_labels(
            Image={"S3Object": {"Bucket": bucket, "Name": name}},
        )
        labels = response["Labels"]
        print(f"I found these labels {labels}")
        return labels


    def lambda_handler(event, context):
        """This is a computer vision lambda handler"""

        print(f"This is my S3 event {event}")
        # An S3 PUT event typically carries a single record
        for record in event['Records']:
            bucket = record['s3']['bucket']['name']
            print(f"This is my bucket {bucket}")
            # Object keys are URL-encoded in the event payload
            key = unquote_plus(record['s3']['object']['key'])
            print(f"This is my key {key}")

        my_labels = label_function(bucket=bucket, name=key)
        return my_labels
    
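    The handler can also be exercised locally before the trigger is wired up. A minimal sketch, assuming lambda_handler from the code above is in scope, with a hand-built event that mimics the shape of an S3 PUT record:

    # A fabricated S3 PUT event for local testing (values are illustrative)
    fake_event = {
        "Records": [
            {
                "s3": {
                    "bucket": {"name": "computervisionmay16"},
                    "object": {"key": "dog.jpg"},
                }
            }
        ]
    }
    print(lambda_handler(fake_event, None))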

    You can see the trigger set up in the AWS Lambda designer.

    [Screenshot: the S3 trigger in the AWS Lambda designer]

    Finally, the S3 event generates a call to the AWS Lambda function. The CloudWatch logs show the label events.

    [Screenshot: label events in the CloudWatch logs]

    What are the next steps? The Lambda function could store data in DynamoDB, or pass the results to another Lambda function via AWS Step Functions.
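
    As a sketch of the DynamoDB option, assuming a hypothetical ImageLabels table with image as the partition key and label as the sort key, each detected label could be persisted like this:

    from decimal import Decimal

    import boto3


    def store_labels(name, labels, table_name="ImageLabels"):
        """Sketch: persist detected labels to a hypothetical DynamoDB table"""
        table = boto3.resource("dynamodb").Table(table_name)
        for label in labels:
            table.put_item(
                Item={
                    "image": name,
                    "label": label["Name"],
                    # DynamoDB numbers must be Decimal, not float
                    "confidence": Decimal(str(label["Confidence"])),
                }
            )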