Sustainability

The Sustainability pillar focuses on environmental impacts, especially energy consumption and efficiency, since they are important levers for architects to inform direct action to reduce resource usage.

Sustainability is a new pillar, introduced in December 2021, and it asks us as developers, engineers, architects, and operations teams to think about the resources we are using. For example, do you need to run a server that offers uploads 24x7 when uploads only happen during business hours? If you don't need it, automate its shutdown and start-up. You'll also find that this has a pleasant effect on the cost optimization pillar due to the pay-for-what-you-use model of AWS.

1 - Static Web Hosting

How to host simple websites

OpEx Sec Rel Perf Cost Sus

Static web hosting is one of my favourite parts of S3. You can simply upload a static site to a bucket and serve the pages straight away. It's worth noting S3 doesn't support any server-side processing, so you need a static website. There are lots of options for generating one; my favourite is goHugo, but any static site generator will get you started.

Serving a static website from a traditional server is wasteful. It sits idle most of the time, and you may even run XX more servers ready for peak load, wasting even more energy. S3 web hosting is different: the web servers only lift your content into memory to serve it when a request comes in. At times when there is no traffic, your data storage is all that is using energy, which greatly reduces your usage and is my reasoning for making this a sustainability section. It's also incredibly cheap to host your site in this way.

By default, you’ll be serving your website from its bucket name and region name. For example:
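http://www.squarecows.com.s3-website.Region.amazonaws.com (where Region is the AWS Region the bucket was created in)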

It’s also worth noting that the endpoint is http and not https!

Adding a DNS CNAME

If you have a registered domain, you can add a DNS CNAME entry to point to the Amazon S3 website endpoint. For example, if you registered the domain www.squarecows.com, you could create a bucket called www.squarecows.com, and add a DNS CNAME record that points to www.squarecows.com.s3-website.Region.amazonaws.com. All requests to http://www.squarecows.com are routed to www.squarecows.com.s3-website.Region.amazonaws.com.

Whilst this is a nicer way to host your site with a friendly URL, it still lacks HTTPS, and these days that will affect your search engine rankings.

The Apex zone issue!

Whilst Amazon Route53 allows you to point the apex record (in this case squarecows.com) at a bucket via an alias record, many other providers will not allow you to do this because a CNAME at the apex is against the RFC; instead you are required to point at an IP address. There's no real fix for this, and I would recommend moving your DNS hosting into Route53 to avoid potential issues.

Enabling Hosting

I've included the Terraform to help you create an S3 website; however, you can also do this from the AWS CLI. When you create the website endpoint you need to specify your index document (normally an object named index.html) and your error document (normally an object named error.html).

aws s3 website s3://my-bucket/ --index-document index.html --error-document error.html

For the Terraform check out FHGJDHFSFSGDSHJGHFJ
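
If you'd rather script the website configuration with boto3 (as we do for Lambda later in this chapter), here's a minimal sketch, assuming a hypothetical bucket called my-bucket that you already own:

import boto3

s3 = boto3.client('s3')

# Hypothetical bucket name - replace with your own.
s3.put_bucket_website(
    Bucket='my-bucket',
    WebsiteConfiguration={
        'IndexDocument': {'Suffix': 'index.html'},
        'ErrorDocument': {'Key': 'error.html'},
    },
)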

You are also required to make sure the bucket is publicly readable, so you need to grant public access and remove the block public access settings from your bucket configuration. If you have objects that are uploaded by people other than the bucket owner, you will also have to grant read access to everybody. Here are the steps to disable the public access block, which is on by default (if you prefer to do this from code, see the sketch after these steps):

  • Open the Amazon S3 console at https://console.aws.amazon.com/s3/.
  • Choose the name of the bucket that you have configured as a static website.
  • Choose Permissions.
  • Under Block public access (bucket settings), choose Edit.
  • Clear Block all public access, and choose Save changes.

The public block policies disabled
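
If you prefer to do this from code, here's a minimal boto3 sketch that removes the public access block, again assuming a hypothetical bucket called my-bucket:

import boto3

s3 = boto3.client('s3')

# Remove the default block on public access so a public bucket policy can take effect.
s3.delete_public_access_block(Bucket='my-bucket')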

Now you've cleared these block policies you must do the bit I always forget to do: add a bucket policy to allow all the objects to be readable by everyone! This basically grants everyone the s3:GetObject permission on all objects in the bucket. A boto3 version is sketched after these steps if you're scripting it.

  • Under Buckets, choose the name of your bucket.
  • Choose Permissions.
  • Under Bucket Policy, choose Edit.
  • To grant public read access for your website, copy the following bucket policy, and paste it in the Bucket policy editor.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::Bucket-Name/*"
            ]
        }
    ]
}
  • Update the Resource to your bucket name. In the preceding example bucket policy, Bucket-Name is a placeholder for the bucket name. To use this bucket policy with your own bucket, you must update this name to match your bucket name.
  • Choose Save changes. A message appears indicating that the bucket policy has been successfully added.
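
If you're scripting the setup, the same policy can be applied with boto3; a minimal sketch, once more assuming a hypothetical bucket called my-bucket:

import json
import boto3

s3 = boto3.client('s3')
bucket_name = 'my-bucket'  # hypothetical bucket name

# The same public-read policy as above, with the bucket name substituted in.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": [f"arn:aws:s3:::{bucket_name}/*"],
        }
    ],
}

s3.put_bucket_policy(Bucket=bucket_name, Policy=json.dumps(policy))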

2 - Execute code from S3

Execute code using S3 triggers

OpEx Sec Rel Perf Cost Sus

One of the biggest announcements to ever come out of re:Invent (the AWS annual event in Las Vegas) was the launch of Lambda. Lambda is effectively Functions as a Service: you write code, give it to AWS, and they handle the runtime and scaling of the function. Lambda supports multiple languages; we will be using Python in this example. Lambda also supports multiple triggers. Triggers are what tell Lambda to run the code you've written. You could use SNS or EventBridge as mentioned in previous chapters; however, Lambda natively supports a direct trigger from S3 on completion of the upload of an object. This, in my mind, is the jewel that makes S3 so awesome.

Create a Bucket

First, let’s create a new bucket to test our trigger. From the command line run:

aws s3 mb s3://<MY_BUCKET_NAME>

AWS Console showing s3 bucket

If you want to check from the CLI, run:

aws s3 ls

Create the Lambda Function

We are going to use a blueprint from AWS for the demo to get going quickly, but you can always adapt this to your needs. Blueprints not only contain the code but also include the scaffolding for setting up the trigger from the S3 bucket to the function. We are going to use a Python blueprint from the create new function screen in the Lambda console.

Create a new function

Now select Use a blueprint and select the s3-get-object-python blueprint. This is a Python 3.7 example, but the latest Python runtime you can create yourself currently goes up to 3.9. Now click Configure:

Configure the blueprint

  • Under Basic information, do the following:
  • For Function name, enter myExampleFunction
  • For Execution role, choose Create a new role from AWS policy templates.
  • For Role name, enter myExampleRole
  • Under S3 trigger, choose the bucket that you created with the AWS CLI.
    • The AWS console will automatically allow the function to access the S3 resource and, most importantly, allow S3 to trigger the function (a rough boto3 equivalent of this wiring is sketched after the screenshot below).
  • Choose Create function.

Your screen should look like the following:

Configuring the options for the function
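
For the curious, the scaffolding the console creates is roughly equivalent to the following boto3 sketch: it grants S3 permission to invoke the function, then adds a notification configuration to the bucket. The bucket name, account ID and ARN below are hypothetical placeholders:

import boto3

lambda_client = boto3.client('lambda')
s3 = boto3.client('s3')

bucket_name = '<MY_BUCKET_NAME>'
function_arn = 'arn:aws:lambda:eu-west-1:123456789012:function:myExampleFunction'  # placeholder ARN

# Allow the S3 service to invoke the function for events from our bucket...
lambda_client.add_permission(
    FunctionName='myExampleFunction',
    StatementId='s3-trigger',
    Action='lambda:InvokeFunction',
    Principal='s3.amazonaws.com',
    SourceArn=f'arn:aws:s3:::{bucket_name}',
)

# ...and tell the bucket to send object-created events to the function.
s3.put_bucket_notification_configuration(
    Bucket=bucket_name,
    NotificationConfiguration={
        'LambdaFunctionConfigurations': [
            {'LambdaFunctionArn': function_arn, 'Events': ['s3:ObjectCreated:*']}
        ]
    },
)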

Review the code

When the Lambda is triggered it receives an event. Our function inspects the event and extracts the bucket name and the key (the key is the file name). Using the bucket name and key, the function then uses boto3 to get the object and prints the object's content type in the log. Your code should look like the following in your AWS console:

import json
import urllib.parse
import boto3

print('Loading function')

s3 = boto3.client('s3')


def lambda_handler(event, context):
    #print("Received event: " + json.dumps(event, indent=2))

    # Get the object from the event and show its content type
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        print("CONTENT TYPE: " + response['ContentType'])
        return response['ContentType']
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e
              

Test with the S3 trigger

Invoke your function when you upload a file to the Amazon S3 source bucket.

To test the Lambda function using the S3 trigger

  • On the Buckets page of the Amazon S3 console, choose the name of the source bucket that you created earlier.
  • On the Upload page, upload a few .jpg or .png image files to the bucket (or upload them from code, as sketched after these steps).
  • Open the Functions page of the Lambda console.
  • Choose the name of your function (myExampleFunction).
  • To verify that the function ran once for each file that you uploaded, choose the Monitor tab. This page shows graphs for the metrics that Lambda sends to CloudWatch. The count in the Invocations graph should match the number of files that you uploaded to the Amazon S3 bucket.

Metrics of the function

  • You can also check out the logs in the CloudWatch console or in the Lambda console, and check out the log stream for your function name.
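
If you'd rather fire the trigger from code than from the console upload page, a tiny boto3 sketch does the job, assuming a local image called test.jpg and your bucket name from earlier:

import boto3

s3 = boto3.client('s3')

# Uploading an object fires the ObjectCreated event and invokes the function.
s3.upload_file('test.jpg', '<MY_BUCKET_NAME>', 'test.jpg')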

Clean up your resources

To clean up all the resources you can follow the steps below (or the boto3 sketch at the end of this section). This will ensure your account is nice and tidy and avoids the really, really tiny storage fee for the objects you uploaded to trigger the function. Because the function is serverless, unless you upload more files to that S3 bucket you won't be charged anything for it! This also means you are not wasting energy running a server waiting for something to happen.

To delete the Lambda function

  • Open the Functions page of the Lambda console.
  • Select the function that you created.
  • Choose Actions, then choose Delete.
  • Choose Delete.

To delete the IAM policy

  • Open the Policies page of the AWS Identity and Access Management (IAM) console.
  • Select the policy that Lambda created for you. The policy name begins with AWSLambdaS3ExecutionRole-.
  • Choose Policy actions, Delete.
  • Choose Delete.

To delete the execution role

  • Open the Roles page of the IAM console.
  • Select the execution role that you created.
  • Choose Delete role.
  • Choose Yes, delete.

To delete the S3 bucket

  • Open the Amazon S3 console.
  • Select the bucket you created.
  • Choose Delete.
  • Enter the name of the bucket in the text box.
  • Choose Confirm.
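
If you'd rather script the clean-up, a rough boto3 sketch of the same steps follows; the policy ARN is a placeholder (copy the real one from the IAM console) and the bucket has to be emptied before it can be deleted:

import boto3

lambda_client = boto3.client('lambda')
iam = boto3.client('iam')
s3 = boto3.resource('s3')

# Delete the function first.
lambda_client.delete_function(FunctionName='myExampleFunction')

# Detach and delete the policy the console created, then delete the role.
policy_arn = 'arn:aws:iam::123456789012:policy/AWSLambdaS3ExecutionRole-...'  # placeholder ARN
iam.detach_role_policy(RoleName='myExampleRole', PolicyArn=policy_arn)
iam.delete_policy(PolicyArn=policy_arn)
iam.delete_role(RoleName='myExampleRole')

# Empty the bucket, then delete it.
bucket = s3.Bucket('<MY_BUCKET_NAME>')
bucket.objects.all().delete()
bucket.delete()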

Choose ARM

As a side note, the function we created above used an x86 (Intel) processor, which is the default for all functions on AWS. However, AWS have developed their own silicon chip called the Graviton processor, and in Lambda you can currently use v2 of this chip. It's an ARM-based processor and, like its mobile phone cousin chips, it sips electricity. Because of this you'll use far less energy, and it's cheaper too!
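
If you create your functions from code rather than the console, the architecture is just a parameter. Here's a minimal, hypothetical boto3 sketch (the function name, role ARN and zip file are placeholders):

import boto3

lambda_client = boto3.client('lambda')

# Create the function on Graviton2 (arm64) instead of the default x86_64.
with open('function.zip', 'rb') as f:
    lambda_client.create_function(
        FunctionName='myArmExampleFunction',
        Runtime='python3.9',
        Handler='lambda_function.lambda_handler',
        Role='arn:aws:iam::123456789012:role/myExampleRole',  # placeholder role ARN
        Architectures=['arm64'],
        Code={'ZipFile': f.read()},
    )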