
Authenticate applications through facial recognition with Amazon Cognito and Amazon Rekognition

With the increased use of different applications, social networks, financial platforms, email and cloud storage solutions, managing different passwords and credentials can become a burden. In many cases, sharing one password across all these applications and platforms is simply not possible, because they may impose different security standards, such as passwords composed of only numeric characters, password renewal policies, and security questions.

But what if you could let users authenticate themselves to your application in a more convenient, simpler and, above all, more secure way? In this post, I will show how to leverage Amazon Cognito user pools to customize your authentication flows and allow users to log in to your applications with facial recognition powered by Amazon Rekognition, demonstrated through a sample application.

Solution Overview

We will build a mobile or web application that allows users to sign in using an email address and requires them to upload a document containing their photo. We will use the AWS Amplify Framework to integrate our front-end application with Amazon S3 and store this image in a secure and encrypted bucket. Our solution will trigger a Lambda function for each new image uploaded to this bucket so that we can index the images in Amazon Rekognition and save the metadata in a DynamoDB table for later queries.

For authentication, this solution uses Amazon Cognito User Pools combined with Lambda functions to customize the authentication flows together with the Amazon Rekognition CompareFaces API to identify the confidence level between user photos provided during Sign Up and Sign In. Here is the architecture of the solution:

Here’s a step-by-step description of the data flow shown in the architecture diagram above:

  1. User signs up into the Cognito User Pool.
  2. User uploads – during Sign Up – a document image containing his/her photo and name (e.g. a passport) to an S3 Bucket.
  3. A Lambda function is triggered containing the uploaded image as payload.
  4. The function first indexes the image in a specific Amazon Rekognition Collection to store these user documents.
  5. The same function then persists the indexed image metadata in a DynamoDB table, together with the email registered in the Amazon Cognito User Pool, for later queries.
  6. User enters an email in the custom Sign In page, which makes a request to Cognito User Pool.
  7. Amazon Cognito User Pool triggers the “Define Auth Challenge” trigger that determines which custom challenges are to be created at this moment.
  8. The User Pool then invokes the “Create Auth Challenge” trigger. This trigger queries the DynamoDB table for the record associated with the given email to retrieve the ID of its photo indexed in the Amazon Rekognition Collection.
  9. The User Pool invokes the “Verify Auth Challenge” trigger. This verifies whether the challenge was successfully completed: if it finds an image, it compares it with the photo taken during Sign In and measures the confidence level between the two images.
  10. The User Pool, once again, invokes the “Define Auth Challenge” trigger, which verifies whether the challenge was answered. If it is able to verify the user-supplied answer, no further challenges are created. The trigger response back to the User Pool includes an “issueTokens: true” attribute, and the User Pool finally issues the user a JSON Web Token (JWT) (see step 6).

Serverless Application and the different Lambdas invoked

The following solution is available as a serverless application. You can deploy it directly from the AWS Serverless Application Repository. Core parts of this implementation are:

  • Users are required to use a valid email address as their user name.
  • The solution includes a Cognito App Client configured to “Only allow custom authentication”. Because Amazon Cognito requires a password for user sign up, we create a random password for these users; we don’t want them to Sign In using these passwords later.
  • We use two Amazon S3 Buckets: one to store document images uploaded during Sign Up and one to store user photos taken when Signing In for face comparisons.
  • We use two different Lambda runtimes (Python and Node.js) to demonstrate how AWS Serverless Application Model (SAM) handles multiple runtimes in the same project and development environment from the developer’s perspective.

The following Lambda functions are triggered to index the images in Amazon Rekognition and to customize the Amazon Cognito User Pools custom authentication challenges:

  1. Create Rekognition Collection (Python 3.6) – This Lambda function gets triggered only once, at the beginning of deployment, to create a Custom Collection in Amazon Rekognition to index documents for user Sign Ups.
  2. Index Images (Python 3.6) – This Lambda function gets triggered for each new document uploaded to Amazon S3 during Sign Up, indexes the uploaded document in the Amazon Rekognition Collection (mentioned in the previous step), and then persists its metadata into DynamoDB.
  3. Define Auth Challenge (Node.js 8.10) – This Lambda function tracks the custom authentication flow, which is comparable to a decider function in a state machine. It determines which challenges are presented, in what order, to the user. At the end, it reports back to the user pool if the user succeeded or failed authentication. The Lambda function is invoked at the start of the custom authentication flow and also after each completion of the “Verify Auth Challenge Response” trigger.
  4. Create Auth Challenge (Node.js 8.10) – This Lambda function gets invoked, based on the instruction of the “Define Auth Challenge” trigger, to create a unique challenge for the user. We will use this function to query DynamoDB for existing user records and to validate their metadata.
  5. Verify Auth Challenge Response (Node.js 8.10) – This Lambda function gets invoked by the user pool when the user provides the answer to the challenge. Its only job is to determine whether that answer is correct. In this case, it compares the images provided during Sign Up and Sign In using the Amazon Rekognition CompareFaces API and considers API responses with a confidence level equal to or greater than 90% a valid challenge response (a sketch of such a call follows this list).
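
For reference, a direct CompareFaces call from the AWS SDK for JavaScript would look roughly like the following minimal sketch; the bucket and key names and the facesMatch helper are placeholders for illustration, and the actual sample functions are stepped through below:

import * as AWS from 'aws-sdk';

const rekognition = new AWS.Rekognition();

// Minimal sketch: compare the document photo stored at Sign Up with the
// photo taken at Sign In. Bucket and key names are placeholders.
async function facesMatch(signUpBucket: string, signUpKey: string,
                          signInBucket: string, signInKey: string): Promise<boolean> {
  const result = await rekognition.compareFaces({
    SourceImage: { S3Object: { Bucket: signUpBucket, Name: signUpKey } },
    TargetImage: { S3Object: { Bucket: signInBucket, Name: signInKey } },
    SimilarityThreshold: 90, // only report matches at or above 90% similarity
  }).promise();

  // Any returned match already satisfies the 90% threshold
  return (result.FaceMatches || []).length > 0;
}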

In the sections below, let’s step through the code for the different Lambda functions we described above.

1. Create an Amazon Rekognition Collection

As described above, this function creates a Collection in Amazon Rekognition that will later receive user photos uploaded during Sign Up.

import boto3
import os

def handler(event, context):

    maxResults=1
    collectionId=os.environ['COLLECTION_NAME']
    
    client=boto3.client('rekognition')

    #Create a collection
    print('Creating collection:' + collectionId)
    response=client.create_collection(CollectionId=collectionId)
    print('Collection ARN: ' + response['CollectionArn'])
    print('Status code: ' + str(response['StatusCode']))
    print('Done...')
    return response

2. Index Images into Amazon Rekognition

This function receives the images uploaded by users during Sign Up, indexes them in the Amazon Rekognition Collection created by the Lambda function described above, and persists their metadata in an Amazon DynamoDB table.

from __future__ import print_function
import boto3
from decimal import Decimal
import json
import urllib
import os

dynamodb = boto3.client('dynamodb')
s3 = boto3.client('s3')
rekognition = boto3.client('rekognition')

# --------------- Helper Functions ------------------

def index_faces(bucket, key):

    response = rekognition.index_faces(
        Image={"S3Object":
            {"Bucket": bucket,
            "Name": key}},
            CollectionId=os.environ['COLLECTION_NAME'])
    return response
    
def update_index(tableName,faceId, fullName):
    response = dynamodb.put_item(
        TableName=tableName,
        Item={
            'RekognitionId': {'S': faceId},
            'FullName': {'S': fullName}
            }
        ) 
    
# --------------- Main handler ------------------

def handler(event, context):

    # Get the object from the event
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(
        event['Records'][0]['s3']['object']['key'])

    try:

        # Calls Amazon Rekognition IndexFaces API to detect faces in S3 object 
        # to index faces into specified collection
        
        response = index_faces(bucket, key)
        
        # Commit faceId and full name object metadata to DynamoDB
        
        if response['ResponseMetadata']['HTTPStatusCode'] == 200:
            faceId = response['FaceRecords'][0]['Face']['FaceId']
            ret = s3.head_object(Bucket=bucket,Key=key)
            email = ret['Metadata']['email']
            update_index(os.environ['COLLECTION_NAME'],faceId, email) 
        return response
    except Exception as e:
        print("Error processing object {} from bucket {}. ".format(key, bucket))
        raise e

3. Define Auth Challenge Function

This is the decider function that manages the authentication flow. In the session array that’s provided to this Lambda function (event.request.session), the entire state of the authentication flow is present. If it’s empty, it means the custom authentication flow just started. If it has items, the custom authentication flow is underway, i.e. a challenge was presented to the user, the user provided an answer, and it was verified to be right or wrong. In either case, the decider function has to decide what to do next:

exports.handler = async (event, context) => {

    console.log("Define Auth Challenge: " + JSON.stringify(event));

    if (event.request.session &&
        event.request.session.length >= 3 &&
        event.request.session.slice(-1)[0].challengeResult === false) {
        // The user provided a wrong answer 3 times; fail auth
        event.response.issueTokens = false;
        event.response.failAuthentication = true;
    } else if (event.request.session &&
        event.request.session.length &&
        event.request.session.slice(-1)[0].challengeResult === true) {
        // The user provided the right answer; succeed auth
        event.response.issueTokens = true;
        event.response.failAuthentication = false;
    } else {
        // The user did not provide a correct answer yet; present challenge
        event.response.issueTokens = false;
        event.response.failAuthentication = false;
        event.response.challengeName = 'CUSTOM_CHALLENGE';
    }

    return event;
}

4. Create Auth Challenge Function

This function queries DynamoDB for a record containing the given e-mail address to retrieve the ID of its image indexed in the Amazon Rekognition Collection, and defines as the challenge that the user must provide a photo of the same person.

const aws = require('aws-sdk');
const dynamodb = new aws.DynamoDB.DocumentClient();

exports.handler = async (event, context) => {

    console.log("Create auth challenge: " + JSON.stringify(event));

    if (event.request.challengeName == 'CUSTOM_CHALLENGE') {
        event.response.publicChallengeParameters = {};

        let answer = '';
        // Querying for Rekognition ids for the e-mail provided
        const params = {
            TableName: process.env.COLLECTION_NAME,
            IndexName: "FullName-index",
            ProjectionExpression: "RekognitionId",
            KeyConditionExpression: "FullName = :userId",
            ExpressionAttributeValues: {
                ":userId": event.request.userAttributes.email
            }
        }
        
        try {
            const data = await dynamodb.query(params).promise();
            data.Items.forEach(function (item) {
                
                answer = item.RekognitionId;

                event.response.publicChallengeParameters.captchaUrl = answer;
                event.response.privateChallengeParameters = {};
                event.response.privateChallengeParameters.answer = answer;
                event.response.challengeMetadata = 'REKOGNITION_CHALLENGE';
                
                console.log("Create Challenge Output: " + JSON.stringify(event));
                return event;
            });
        } catch (err) {
            console.error("Unable to query. Error:", JSON.stringify(err, null, 2));
            throw err;
        }
    }
    return event;
}

5. Verify Auth Challenge Response Function

This function verifies with Amazon Rekognition whether it can find an indexed image that matches the image uploaded during Sign In with a confidence level of 90% or higher, and whether that image belongs to the user identified by the given e-mail address.

var aws = require('aws-sdk');
var rekognition = new aws.Rekognition();

exports.handler = async (event, context) => {

    console.log("Verify Auth Challenge: " + JSON.stringify(event));
    let userPhoto = '';
    event.response.answerCorrect = false;

    // Searching existing faces indexed on Rekognition using the provided photo on s3

    const objectName = event.request.challengeAnswer;
    const params = {
        "CollectionId": process.env.COLLECTION_NAME,
        "Image": {
            "S3Object": {
                "Bucket": process.env.BUCKET_SIGN_UP,
                "Name": objectName
            }
        },
        "MaxFaces": 1,
        "FaceMatchThreshold": 90
    };
    try {
        const data = await rekognition.searchFacesByImage(params).promise();

        // Evaluates if Rekognition was able to find a match with the required 
        // confidence threshold

        if (data.FaceMatches[0]) {
            console.log('Face Id: ' + data.FaceMatches[0].Face.FaceId);
            console.log('Similarity: ' + data.FaceMatches[0].Similarity);
            userPhoto = data.FaceMatches[0].Face.FaceId;
            if (userPhoto) {
                if (event.request.privateChallengeParameters.answer == userPhoto) {
                    event.response.answerCorrect = true;
                }
            }
        }
    } catch (err) {
        console.error("Unable to query. Error:", JSON.stringify(err, null, 2));
        throw err;
    }
    return event;
}

The Front End Application

Now that we’ve stepped through all the Lambdas, let’s create a custom Sign In page, in order to orchestrate and test our scenario. You can use AWS Amplify Framework to integrate your Sign In page to Amazon Cognito and the photo uploads to Amazon S3.

The AWS Amplify Framework allows you to implement your application using your favorite framework (React, Angular, Vue, HTML/JavaScript, etc.). You can customize the snippets below to your requirements; they demonstrate how to import and initialize the AWS Amplify Framework in React:

import Amplify from 'aws-amplify';

Amplify.configure({
  Auth: {
    region: 'your region',
    userPoolId: 'your userPoolId',
    userPoolWebClientId: 'your clientId',
  },
  Storage: { 
    region: 'your region', 
    bucket: 'your sign up bucket'
  }
});

Signing Up

For users to be able to sign themselves up, as mentioned above, we “generate” a random password on their behalf, since Amazon Cognito requires one for user sign up. However, because we configure our Cognito User Pool Client to only allow the custom authentication flow, authentication never happens with a user name and password.

import { Auth } from 'aws-amplify';

signUp = async event => {
  const params = {
    username: this.state.email,
    password: getRandomString(30),
    attributes: {
      name: this.state.fullName
    }
  };
  await Auth.signUp(params);
};

function getRandomString(bytes) {
  const randomValues = new Uint8Array(bytes);
  window.crypto.getRandomValues(randomValues);
  return Array.from(randomValues).map(intToHex).join('');
}

function intToHex(nr) {
  return nr.toString(16).padStart(2, '0');
}

Signing in

This starts the custom authentication flow for the user.

import { Auth } from "aws-amplify";

signIn = async () => {
    try {
        const user = await Auth.signIn(this.state.email);
        this.setState({ user });
    } catch (e) {
        console.log('Oops...');
    }
};

Answering the Custom Challenge

In this step, we open the camera through the browser to take a user photo and then upload it to Amazon S3, so we can start the face comparison.

import Webcam from "react-webcam";

// Instantiate and set webcam to open and take a screenshot
// when user is presented with a custom challenge

/* Webcam implementation goes here */


// Converts the webcam screenshot (a data URL) into a File object that can
// be uploaded to S3 and used as the answer for the custom challenge
dataURLtoFile = (dataurl, filename) => {
  var arr = dataurl.split(','), mime = arr[0].match(/:(.*?);/)[1],
      bstr = atob(arr[1]), n = bstr.length, u8arr = new Uint8Array(n);
  while(n--){
      u8arr[n] = bstr.charCodeAt(n);
  }
  return new File([u8arr], filename, {type:mime});
};

sendChallengeAnswer = async () => {

    // Capture image from user camera and send it to S3
    const imageSrc = this.webcam.getScreenshot();
    const attachment = await s3UploadPub(dataURLtoFile(imageSrc, "id.png"));
    
    // Send the answer to the User Pool
    const answer = `public/${attachment}`;
    const user = await Auth.sendCustomChallengeAnswer(this.state.user, answer);
    this.setState({ user });
    
    try {
        // This will throw an error if the user is not yet authenticated:
        await Auth.currentSession();
    } catch {
        console.log('Apparently the user did not enter the right code');
    }
    
};

Conclusion

In this blog post, we implemented an authentication mechanism based on facial recognition, using the custom authentication flows provided by Amazon Cognito combined with Amazon Rekognition. Depending on your organization’s and workload’s security criteria and requirements, this scenario can work from both a security and a user experience point of view. Additionally, we can strengthen the security factor by chaining multiple auth challenges based not only on the user photo, but also on liveness detection, the document numbers used for signing up, and other additional MFA factors.

Since this is an entirely Serverless-based solution, you can customize it as your requirements arise using AWS Lambda functions. You can read more on custom authentication in our developer guide.

Resources

  • All the resources from the implementation mentioned above are available at GitHub. You can clone, change, deploy and run it yourself.
  • You can deploy this solution directly from the AWS Serverless Application Repository.

About the author

Enrico is a Solutions Architect at Amazon Web Services. He works in the Enterprise segment, helping customers from different businesses in their cloud journeys. With more than 10 years of experience in solutions architecture, engineering, and DevOps, Enrico has worked directly with many customers designing, implementing and deploying enterprise solutions.


from AWS Developer Blog https://aws.amazon.com/blogs/developer/authenticate-applications-through-facial-recognition-with-amazon-cognito-and-amazon-rekognition/

The AWS CLI and AWS SDK for Python will require Python 2.7+ or 3.4+ as their Python runtime

On January 10, 2020, in order to continue supporting our customers with tools that are secure and maintainable, AWS will release version 1.17 of the AWS CLI and versions 1.13 (Botocore) and 1.10 (Boto3) of the AWS SDK for Python. These versions will require a Python 2.7+ or Python 3.4+ runtime.

Per PSF (Python Software Foundation), Python 2.6.9 was “the final security-only source-only maintenance release of the Python 2.6 series”. With its release on October 29, 2013, PSF states that “all official support for Python 2.6 ended and was no longer being maintained for any purpose”. Per PSF, as of September 29, 2017, Python 3.3.x also reached end-of-life status.

Until this year, many Python projects and packages in the industry continued to support Python 2.6 and Python 3.3 as runtimes. However, these project and package owners have now stopped supporting Python 2.6 and Python 3.3. Additionally, the Python Windows installers for Python 2.6/3.3 have not updated their bundled OpenSSL since Python 2.6/3.3 reached end of life and cannot support TLSv1.2+, which many AWS APIs require to access their services.

I’m currently using Python 2.6 or Python 3.3 as my runtime for AWS CLI or AWS SDK for Python. What should I do?

We recommend moving to a newer version of the Python runtime, either 2.7+ or 3.4+. These can be found at https://www.python.org/downloads.

If you are using the AWS CLI with Python 2.6 or 3.3 and are not ready to upgrade to a newer Python version, then you will need to take one of the below actions depending upon your installation method.

MSI Installer
If you install the AWS CLI using the Windows MSI Installer, you are not impacted by this deprecation and no changes are required.

Pip
If you install the AWS CLI or the AWS SDK for Python using pip, ensure that your pip invocation or requirements.txt file installs "awscli<1.17", such as:

$ pip install --upgrade --user "awscli<1.17"

Bundled Installer
If you install the AWS CLI using the bundled installer, you must ensure that you download a copy of the bundled installer that supports the Python 2.6+ or 3.3+ runtime. You can do this by downloading the file from "https://s3.amazonaws.com/aws-cli/awscli-bundle-{VERSION}.zip", replacing "{VERSION}" with the desired version of the CLI. For example, to download version 1.16.188 use:

$ curl https://s3.amazonaws.com/aws-cli/awscli-bundle-1.16.188.zip -o awscli-bundle.zip

Then continue following the installation instructions found in https://docs.aws.amazon.com/cli/latest/userguide/install-bundle.html, starting with step 2.

For additional help or questions go to the CLI user guide.

from AWS Developer Blog https://aws.amazon.com/blogs/developer/deprecation-of-python-2-6-and-python-3-3-in-botocore-boto3-and-the-aws-cli/

The AWS SDK for Java will no longer support Java 6

The AWS SDK for Java currently maintains two major versions: 1.11.x and 2.x. Customers on Java 8 or newer may use either 2.x or 1.11.x, and customers on Java 6 or newer may use 1.11.x.

Free updates to the Java 6 virtual machine (JVM) were stopped by Oracle in April 2013. Users that don’t pay for extended JVM support need to upgrade their JVM to continue to receive any updates, including security updates. As of December 2018, Oracle no longer provides extended support for Java 6. Additionally, Jackson, a popular library for JSON serialization, is used by the AWS SDK for Java, and in July 2016 the portions of the Jackson library used by the AWS SDK stopped supporting Java 6. Therefore, as of November 15, 2019, new versions of the AWS SDK for Java 1.11.x will be released without support for Java 6 and will instead require Java 7 or newer. After this date, customers on Java 6 who upgrade their version of the AWS SDK for Java will receive “Java version mismatch” errors at runtime.

I’m currently using Java 6 and the AWS SDK for Java 1.11.x. What should I do?

We recommend moving to a newer Java runtime that still supports free updates. Here are some popular choices:

  1. Amazon Corretto 8 or 11
  2. Red Hat OpenJDK 8 or 11
  3. OpenJDK 11
  4. AdoptOpenJDK 8 or 11

If you are not ready to update to a newer Java version, then you can pin your AWS SDK for Java version to one that supports Java 6, which will continue to work. However, you will no longer receive new service updates, bug fixes or security fixes.

Why will Java 6 no longer be supported?

As noted previously, free updates to the Java 6 virtual machine (JVM) were stopped by Oracle in April 2013. Free users need to upgrade their JVM to continue to receive security updates. As of December 2018, Oracle no longer provides extended support for Java 6.

The AWS SDK for Java uses a small number of industry-standard dependencies. These dependencies provide the SDK with a larger feature set than would be possible if the functionality provided by these dependencies were to be developed in-house. Because Java 6 is now generally considered “unsupported”, many third party libraries have stopped supporting Java 6 as a runtime.

For example, Jackson, a popular library for JSON serialization, is used by the AWS SDK for Java as well as many other libraries in the Java ecosystem. In July 2016, portions of the Jackson library that are used by the AWS SDK stopped supporting Java 6. At the time, too many AWS customers would have been broken by removal of support for Java 6, and Jackson was too ingrained in the SDK’s public APIs to be removed without breaking a different set of customers. The AWS SDK for Java team froze the version of Jackson that they used and made sure that the Jackson features used by the SDK were not affected by known security issues.

Many things have changed since 2016, including: Java 6 is now generally considered “unsupported”, very few AWS customers use Java 6 on the updated AWS SDK for Java, and customers have begun to report that the old version of Jackson in their dependency graph is an issue. To maintain our customer focus, we will be raising the minimum Java version to Java 7 for the AWS SDK for Java and upgrading to use a newer version of Jackson.

from AWS Developer Blog https://aws.amazon.com/blogs/developer/the-aws-sdk-for-java-will-no-longer-support-java-6/

Removing the vendored version of requests from Botocore

We’d like to give additional visibility to an upcoming change to Botocore, a dependency of Boto3, the AWS SDK for Python. Starting 10/21/19, we will be removing the vendored version of the requests library in Botocore. In this post, we’ll cover the key details.

In August of last year, we made significant improvements to the internals of Botocore to allow for pluggable HTTP clients. A key part of the internal refactoring was changing the HTTP client library from the requests library to urllib3. As part of this change, we also decided to unvendor our HTTP library. This allows us to support a range of versions of urllib3 instead of depending on a specific version. It also meant that we no longer used the vendored version of requests in Botocore and could remove this unused code. See the GitHub pull request for more information.

If you’re using the vendored version of requests in Botocore, you’ll see the following warning:

./botocore/vendored/requests/api.py:67: DeprecationWarning: You are using the get() function from 'botocore.vendored.requests'.
This is not a public API in botocore and will be removed in the future. Additionally, this version of requests is out of date.
We recommend you install the requests package, 'import requests' directly, and use the requests.get() function instead.
DeprecationWarning

You can migrate away from this by installing requests into your python environment and importing requests directly:

Before

from botocore.vendored import requests
response = requests.get('https://...')

After

$ pip install requests
import requests
response = requests.get('https://...')

The associated PR (https://github.com/boto/botocore/pull/1829) has a branch you can use to test this change before it’s merged into an official release.

Please let us know if you have any questions or concerns in the GitHub pull request.

from AWS Developer Blog https://aws.amazon.com/blogs/developer/removing-the-vendored-version-of-requests-from-botocore/

Working with the AWS Cloud Development Kit and AWS Construct Library

The AWS Cloud Development Kit (CDK) is a software development framework for defining your cloud infrastructure in code and provisioning it through AWS CloudFormation. The AWS CDK allows developers to define their infrastructure in familiar programming languages such as TypeScript, Python, C# or Java, taking advantage of the features those languages provide.

I work as an AWS Solutions Architect with Digital Native Businesses in the UK, working directly with many companies that tend to build their own solutions to the problems they encounter and commonly embrace Infrastructure-as-Code practices.

When I speak with these customers about the AWS CDK, the most common question I get is, “how much of AWS CloudFormation is covered by the AWS CDK?” The short answer is all of it. The long answer, as we explore in this post, is more nuanced and requires understanding the different layers of abstraction in the AWS Construct Library.

Layers in the AWS Construct Library

The AWS CDK includes the AWS Construct Library, a broad set of modules that expose APIs for defining AWS resources in CDK applications. Each module in this library contains constructs, the basic building blocks for CDK apps, that encapsulate everything CloudFormation needs to create AWS resources. There are three different levels of CDK constructs in the library: CloudFormation Resource Constructs, AWS Constructs, and Pattern Constructs. CloudFormation Resource Constructs and AWS Constructs are packaged together in the same module and named after the AWS service they represent, aws-s3 for example. Pattern Constructs are packaged in their own module and have the patterns suffix, like aws-ecs-patterns.

CloudFormation Resource Constructs are the lowest-level constructs. They mirror the AWS CloudFormation Resource Types and are updated with each release of the CDK. This means that you can use the CDK to define any resource that is available to AWS CloudFormation and can expect them to be up-to-date. When you use CloudFormation Resources, you must explicitly configure all of the resource’s properties, which requires you to completely understand the details of the underlying resource model. You can quickly identify this construct layer by looking for the ‘Cfn’ prefix. If it starts with those three letters, then it is a CloudFormation Resource Construct and maps directly to the resource type found in the CloudFormation reference documentation.

AWS Constructs also represent AWS services and leverage CloudFormation Resource Constructs under-the-hood, but they provide a higher-level, intent-based API. They are designed specifically for the AWS CDK and handle much of the configuration details and boilerplate logic required by the CloudFormation Resources. AWS Constructs offer proven default values and provide convenient methods that make it simpler to work with the resource, reducing the need to know all the details about the CloudFormation resources they represent. In some cases, these constructs are contributed by the open source community and reviewed by the AWS CDK team for inclusion in the library. A good example of this is the Amazon Virtual Private Cloud (VPC) construct which I cover in more detail below.

Finally, the library includes even higher-level constructs, which are called Pattern Constructs. These constructs generally represent reference architectures or design patterns that are intended to help you complete common tasks in AWS. For example, the aws-ecs-patterns.LoadBalancedFargateService construct represents an architecture that includes an AWS Fargate container cluster that sits behind an Elastic Load Balancer (ELB). The aws-apigateway.LambdaRestApi construct represents an Amazon API Gateway API that’s backed by an AWS Lambda function.
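
For example, a minimal sketch of the aws-apigateway.LambdaRestApi pattern might look like the following (module and method names assume CDK 1.x, and the inline handler code is only a placeholder):

import cdk = require('@aws-cdk/core');
import lambda = require('@aws-cdk/aws-lambda');
import apigateway = require('@aws-cdk/aws-apigateway');

export class PatternStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Placeholder Lambda function backing the API
    const handler = new lambda.Function(this, 'Handler', {
      runtime: lambda.Runtime.NODEJS_10_X,
      handler: 'index.handler',
      code: lambda.Code.fromInline(
        'exports.handler = async () => ({ statusCode: 200, body: "ok" });'),
    });

    // The pattern construct wires up the REST API, the proxy integration and
    // the permissions API Gateway needs to invoke the function.
    new apigateway.LambdaRestApi(this, 'Api', { handler });
  }
}

A single declaration of the pattern construct expands into the same API Gateway and permission resources you would otherwise have to define by hand.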

My expectation is for you to use the highly abstracted AWS Constructs and Patterns Constructs whenever possible because of the convenience and time savings they provide. However, the CDK is new and AWS service coverage at these upper layers is not yet complete. What do you do when high-level service coverage is absent for your CDK use cases? In the remainder of this post I will teach you how to use the CloudFormation Resource layer when AWS Constructs are not available. I will also show you how to use “overrides” for situations where a high-level AWS Construct is available, but a specific underlying CloudFormation property you want to configure is not directly exposed in the API.

Prerequisites

You will need an active AWS account if you want to use the examples detailed in this post. You’ll be using only a few items that have an hourly billing figure – specifically NAT Gateways. Please check the pricing pages for this feature to understand any costs that may be incurred.

A basic understanding of a Terminal/CLI environment is recommended to run through everything here, but even without it, following along to learn the concepts should be fine.

First, follow the Getting Started guide to set up your computer or AWS Cloud9 environment to use the AWS CDK. It will also guide you on initializing new templates which you’ll be doing a few times in this post.

You will also need a text editor. If you’re working within AWS Cloud9, one is provided within the console. I used Visual Studio Code, which has TypeScript support built into the default install, to write this post. If using Visual Studio Code, I recommend the EditorConfig for VS Code extension, as it will handle different file formats and their space/tab requirements automatically for you.

Building a VPC with the CloudFormation Resource Construct layer

You will start by building a very basic VPC using CFN Resource constructs only.

Walkthrough

  1. Open up a Terminal in the environment you configured in Getting Started
  2. You will call this Stack vpc, so make a folder named `vpc` and change to this folder in your terminal
  3. Then initialize a new AWS CDK project using the following command:

cdk init app --language=typescript

4. Now, install the aws-ec2 library which contains both the CloudFormation Resource layer and AWS layer constructs for VPCs.

npm install @aws-cdk/aws-ec2

5. In your favorite text editor, open up the vpc-stack.ts file in the lib folder

6. The structure is pretty straightforward: you have imports at the top – these are similar to imports in Python or Java, where you can include different libraries and features – and the AWS CDK template provides a convenient place to start defining your stack, highlighted by a comment.

7. Import the ec2 library by adding the import statement on the second line:

import ec2 = require('@aws-cdk/aws-ec2');

8. Now define your VPC using the following code where it says ‘The code that defines your stack goes here’:

new ec2.CfnVPC(this, "MyVPC", {
  cidrBlock: "10.0.0.0/16",
});

9. Your code should now look like this:
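
That is, roughly the following, assuming the default VpcStack class that cdk init generated in lib/vpc-stack.ts:

import cdk = require('@aws-cdk/core');
import ec2 = require('@aws-cdk/aws-ec2');

export class VpcStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // The code that defines your stack goes here
    new ec2.CfnVPC(this, "MyVPC", {
      cidrBlock: "10.0.0.0/16",
    });
  }
}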

10. Next you need to compile your TypeScript code to JavaScript for the CDK tool to then deploy it. You’ll first run the build command to do this, and then go ahead and deploy your stack. So, using your Terminal window again:

npm run build

> vpc@0.1.0 build /Users/leepac/Code/_blog/vpc
> tsc

cdk deploy

11. Head to the AWS Console’s CloudFormation Section, and select VpcStack and then Resources. If the stack doesn’t appear, you might have the wrong region selected, and you can use the Region drop down to select the right one.

12. Click on the VPC resource Physical ID link and you get taken to the VPC Dashboard section of the Console. You can then filter the VPCs listed in the console by selecting the same Physical ID in the filter by VPC box:

13. In this filtered view, take a look at your VPC and click on Subnets in the left hand pane to see your subnets. You’ll find it empty – why?

Because you used a CloudFormation Resource Construct, the VPC resource type (CfnVPC) only deploys an empty VPC rather than the resources needed to start deploying things like Amazon EC2 instances. You could start adding Subnets and the like to your VPC using the ‘Cfn*’ constructs available, but you would need to work out things like the CIDRs and references yourself.

Next you will move up to the higher-level VPC construct and see what it does.

Moving to the higher-level AWS Construct layer

The Amazon EC2 Module of the AWS CDK provides a way to make a usable VPC with the same amount of code you wrote just to deploy a VPC object with no subnets. The CDK documentation covers all the various options – by default, it creates a well-architected VPC with both private and public subnets in up to three Availability Zones within a region, with NAT Gateways in each AZ.
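
For example, a minimal sketch of tuning those defaults might look like this (maxAzs and natGateways are properties of ec2.VpcProps in CDK 1.x; the values are only illustrative):

new ec2.Vpc(this, "CustomVPC", {
  cidr: "10.0.0.0/16",
  maxAzs: 2,       // spread subnets across two Availability Zones
  natGateways: 1,  // share a single NAT Gateway to reduce cost
});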

Walkthrough

1. Change the ‘CfnVPC’ to just `Vpc` and `cidrBlock` to `cidr`:
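
The declaration from step 8 then becomes the following (same CIDR, now using the higher-level construct):

new ec2.Vpc(this, "MyVPC", {
  cidr: "10.0.0.0/16",
});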

2. Go back to the Terminal and run the build, and then look at what this will do by using the CDK’s `diff` command:

npm run build

> vpc@0.1.0 build /Users/leepac/Code/_blog/vpc
> tsc

cdk diff

3. That’s a lot more stuff! The great thing about the ‘diff’ command is that you can quickly see what will happen when the stack is deployed. Go ahead and deploy using the `cdk deploy` command – this will take up to 15 minutes, as the NAT Gateways need to be fully provisioned.

4. Head over to the AWS Console’s CloudFormation section again and select the VpcStack and Resources – you’ll see there’s a lot more now!

5. Feel free to explore the VPC section of the console again by clicking on the VPC Physical ID link and having a browse. You now have a VPC that can be used for deploying resources.

As you can see, it’s worth looking at higher-level AWS Constructs! However, as an engineer, I always find high level abstractions are just that, abstractions. It can feel like I’m giving up the flexibility I get with lower level resources.

In the last section, you’ll find out how the AWS CDK gives you the flexibility to override the high-level abstractions when necessary.

Overriding parts of AWS Constructs

The AWS CDK provides a way to break out the AWS CloudFormation Constructs that make up AWS Construct resources to quickly extend functionality. This is best illustrated in a code example. In the lib folder of your CDK project, create a new file called s3-stack.ts, then copy or type the following code:

import cdk = require('@aws-cdk/core');
import s3 = require('@aws-cdk/aws-s3');

export class S3Stack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // Create a logging bucket
    const loggingBucket = new s3.Bucket(this, "LoggingBucket", {
      bucketName: "leerandomexamplelogging",
    });

    // Create my bucket to be logged
    const higherLevelBucket = new s3.Bucket(this, "MyBucket", {
      bucketName: "leerandomexamplebucket",
    });

    // Extract the CfnBucket from the L2 Bucket above
    const bucketResource = higherLevelBucket.node.findChild('Resource') as s3.CfnBucket;

    // Override logging configuration to point to my logging bucket
    bucketResource.addPropertyOverride('LoggingConfiguration', {
      "DestinationBucketName": loggingBucket.bucketName,
    });
  }
}

Because the AWS CDK builds a virtual tree of resources, you can take advantage of the findChild() method to traverse the tree of your higherLevelBucket to get the CloudFormation resource (in this case a CfnBucket). You can then use the addPropertyOverride method to set the specific property you wish to use. In this case, you add a LoggingConfiguration that points to your loggingBucket. The AWS CDK will take care of referencing for you – use the cdk synth subcommand to look at this in CloudFormation YAML format:

In the output you can see lines called aws:cdk:path: and this tells you quickly where to find the Cfn* type. This is how you find out that the CfnBucket is part of MyBucket and called Resource, giving you the parameter to pass into findChild().

Conclusions

Today you learned about the differences between AWS CloudFormation Constructs and higher-level AWS Constructs. You also learned that AWS CloudFormation Constructs are automatically generated and updated from the CloudFormation reference, while AWS Constructs are more curated, with opinionated patterns.

You have also learned how to tell which type a Construct is by its prefix, with AWS CloudFormation Constructs being prefixed with ‘Cfn’. You then used one of these constructs, CfnVPC, to deploy a VPC and discovered that, because of this exact mapping, you would need to use multiple constructs to build a usable VPC.

You then looked at the higher-level ‘Vpc’ AWS Construct and how it builds out a full VPC which contained both private and public subnets with NAT Gateways, a common deployment amongst customers looking to deploy Amazon EC2 applications.


from AWS Developer Blog https://aws.amazon.com/blogs/developer/working-with-the-aws-cloud-development-kit-and-aws-construct-library/

Automated Performance Regression Detection in the AWS SDK for Java 2.0

We are happy to share that we’ve added automated performance regression tests to the AWS SDK for Java 2.0. With this benchmark harness, every change to the SDK will be tested for performance before release, to avoid potential performance regressions. We understand that performance is critical to our customers, and we’ve prioritized improving various performance aspects of the AWS SDK. In the past, we relied on pull request reviews to catch code changes that looked like they would cause performance issues. With this approach, although rarely, a simple line of code might get overlooked and end up causing performance issues. Also, at times, performance regressions were caused by newer versions of SDK dependencies, and there was no easy way to monitor and quantify their performance impact on the SDK. With the benchmark tests, we are now able to detect performance regressions before changes are merged into master. The benchmark harness code is open source, resides in the same repository as the AWS SDK for Java 2.0, and is implemented using the Java Microbenchmark Harness (JMH).

How to Run Benchmarks

To run the benchmarks, you first need to build them using mvn clean install -P quick -pl :sdk-benchmarks -am. Then, trigger the benchmarks using one of the following options:

Option 1:  Use the executable JAR

cd test/sdk-benchmarks
# Run a specific benchmark
java -jar target/benchmarks.jar ApacheHttpClientBenchmark
 
# Run all benchmarks: 3 warm up iterations, 3 benchmark iterations, 1 fork
java -jar target/benchmarks.jar -wi 3 -i 3 -f 1

Option 2: Use maven command to invoke BenchmarkRunner main method to run all benchmarks

mvn install -pl :bom-internal
cd test/sdk-benchmarks
mvn exec:exec

Option 3: Run the main method within each Benchmark class from your IDE

You can also run the main method within each Benchmark class from your IDE. If you are using Eclipse, you might need to set up build configurations for JMH annotations (check out the JMH page to learn how). Note that, per JMH recommendations, using Maven or the executable JAR (options #1 and #2 above) is preferred over running from within an IDE; IDE setup is a bit complex and could yield less reliable results.

How the Benchmark Harness Works

When the benchmark tool gets triggered, it first runs a set of predefined scenarios with different HTTP clients sending requests to local mock servers. It then measures the throughput of the current revision and compares the results with the existing baseline results computed from the previously released version. The performance tests fail if the throughput of the new change decreases by a certain threshold. Running the benchmark tests for every pull request allows us to block those with problematic changes.

As an added bonus, the benchmark tool makes it easier to monitor SDK performance over time, and when a performance improvement is made, we can inform customers of the quantified performance gains so that they can benefit immediately from upgrading the SDK. Finally, the baseline data generated from the benchmark harness provides useful information, such as which HTTP client has the best performance or how much throughput gain can be achieved by tuning SDK configurations. For example, switching to the OpenSSL provider for NettyAsyncHttpClient can achieve 10% higher throughput according to our benchmarks. With automated performance checks, we expect to limit unanticipated performance degradation in new version releases. If you’re contributing, we also encourage you to run the benchmark harness locally to check whether your changes have a performance impact.

from AWS Developer Blog https://aws.amazon.com/blogs/developer/automated-performance-regression-detection-in-the-aws-sdk-for-java-2-0/

Testing infrastructure with the AWS Cloud Development Kit (CDK)

The AWS Cloud Development Kit (CDK) allows you to describe your application’s infrastructure using a general-purpose programming language, such as TypeScript, JavaScript or Python. This opens up familiar avenues for working with your infrastructure, such as using your favorite IDE, getting the benefit of autocomplete, creating abstractions in a familiar way, distributing them using your ecosystem’s standard package manager, and of course: writing tests for your infrastructure like you would write tests for your application.

In this blog post you will learn how to write tests for your infrastructure code in TypeScript using Jest. The code for JavaScript will be the same (sans the types), while the code for Python would follow the same testing patterns. Unfortunately, there are no ready-made Python libraries for you to use yet.

Approach

The pattern for writing tests for infrastructure is very similar to how you would write them for application code: you define a test case as you would normally do in the test framework of your choice. Inside that test case you instantiate constructs as you would do in your CDK app, and then you make assertions about the AWS CloudFormation template that the code you wrote would generate.

The one thing that’s different from normal tests are the assertions that you write on your code. The TypeScript CDK ships with an assertion library (@aws-cdk/assert) that makes it easy to make assertions on your infrastructure. In fact, all of the constructs in the AWS Construct Library that ship with the CDK are tested in this way, so we can make sure they do—and keep on doing—what they are supposed to do. Our assertions library is currently only available to TypeScript and JavaScript users, but will be made available to users of other languages eventually.

Broadly, there are a couple of classes of tests you will be writing:

  • Snapshot tests (also known as “golden master” tests). Using Jest, these are very convenient to write. They assert that the CloudFormation template the code generates is the same as it was when the test was written. If anything changes, the test framework will show you the changes in a diff. If the changes were accidental, you’ll go and update the code until the test passes again, and if the changes were intentional, you’ll have the option to accept the new template as the new “golden master”.
    • In the CDK itself, we also use snapshot tests as “integration tests”. Rather than individual unit tests that only look at the CloudFormation template output, we write a larger application using CDK constructs, deploy it and verify that it works as intended. We then make a snapshot of the CloudFormation template, that will force us to re-deploy and re-test the deployment if the generated template starts to deviate from the snapshot.
  • Fine-grained assertions about the template. Snapshot tests are convenient and fast to write, and provide a baseline level of security that your code changes did not change the generated template. The trouble starts when you purposely introduce changes. Let’s say you have a snapshot test to verify output for feature A, and you now add a feature B to your construct. This changes the generated template, and your snapshot test will break, even though feature A still works as intended. The snapshot can’t tell which part of the template is relevant to feature A and which part is relevant to feature B. To combat this, you can also write more fine-grained assertions, such as “this resource has this property” (and I don’t care about any of the others).
  • Validation tests. One of the advantages of general-purpose programming languages is that we can add additional validation checks and error out early, saving the construct user some trial-and-error time. You would test those by using the construct in an invalid way and asserting that an error is raised.

An example: a dead letter queue

Let’s say you want to write a DeadLetterQueue construct. A dead letter queue is used to hold another queue’s messages if they fail delivery too many times. It’s generally bad news if messages end up in the dead letter queue, because it indicates something is wrong with the queue processor. To that end, your DeadLetterQueue will come with an alarm that fires if there are any items in the queue. It is up to the user of the construct to attach any actions to the alarm firing, such as notifying an SNS topic.

Start by creating an empty construct library project using the CDK CLI and install some of the construct libraries we’ll need:

$ cdk init --language=typescript lib
$ npm install @aws-cdk/aws-sqs @aws-cdk/aws-cloudwatch

The CDK code might look like this (put this in a file called lib/dead-letter-queue.ts):

import cloudwatch = require('@aws-cdk/aws-cloudwatch');
import sqs = require('@aws-cdk/aws-sqs');
import { Construct, Duration } from '@aws-cdk/core';

export class DeadLetterQueue extends sqs.Queue {
  public readonly messagesInQueueAlarm: cloudwatch.IAlarm;

  constructor(scope: Construct, id: string) {
    super(scope, id);

    // Add the alarm
    this.messagesInQueueAlarm = new cloudwatch.Alarm(this, 'Alarm', {
      alarmDescription: 'There are messages in the Dead Letter Queue',
      evaluationPeriods: 1,
      threshold: 1,
      metric: this.metricApproximateNumberOfMessagesVisible(),
    });
  }
}

Writing a test

You’re going to write a test for this construct. First, start off by installing Jest and the CDK assertion library:

$ npm install --save-dev jest @types/jest @aws-cdk/assert

You also have to edit package.json file in your project to tell NPM to run Jest, and tell Jest what kind of files to collect:

{
  ...
  "scripts": {
    ...
    "test": "jest"
  },
  "devDependencies": {
    ...
    "@types/jest": "^24.0.18",
    "jest": "^24.9.0"
  },
  "jest": {
    "moduleFileExtensions": ["js"]
  }
}

You can now write a test. A good place to start is checking that the queue’s retention period is 2 weeks. The simplest kind of test you can write is a snapshot test, so start with that. Put the following in a file named test/dead-letter-queue.test.ts:

import { SynthUtils } from '@aws-cdk/assert';
import { Stack } from '@aws-cdk/core';

import dlq = require('../lib/dead-letter-queue');

test('dlq creates an alarm', () => {
  const stack = new Stack();
  new dlq.DeadLetterQueue(stack, 'DLQ');
  expect(SynthUtils.toCloudFormation(stack)).toMatchSnapshot();
});

You can now compile and run the test:

$ npm run build
$ npm test

Jest will run your test and tell you that it has recorded a snapshot from your test.

PASS  test/dead-letter-queue.test.js
 ✓ dlq creates an alarm (55ms)
 › 1 snapshot written.
Snapshot Summary
› 1 snapshot written

The snapshots are stored in a directory called __snapshots__. If you look at the snapshot, you’ll see it just contains a copy of the CloudFormation template that our stack would generate:

exports[`dlq creates an alarm 1`] = `
Object {
  "Resources": Object {
    "DLQ581697C4": Object {
      "Type": "AWS::SQS::Queue",
    },
    "DLQAlarm008FBE3A": Object {
     "Properties": Object {
        "AlarmDescription": "There are messages in the Dead Letter Queue",
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
...

Congratulations! You’ve written and run your first test. Don’t forget to commit the snapshots directory to version control so that the snapshot gets stored and versioned with your code.

Using the snapshot

To make sure the test is working, you’re going to break it to make sure the breakage is detected. To do this, in your dead-letter-queue.ts file, change the cloudwatch.Alarm period to 1 minute (instead of the default of 5 minutes), by adding a period argument:

this.messagesInQueueAlarm = new cloudwatch.Alarm(this, 'Alarm', {
  // ...
  period: Duration.minutes(1),
});

If you now build and run the test again, Jest will tell you that the template changed:

$ npm run build && npm test

FAIL test/dead-letter-queue.test.js
✕ dlq creates an alarm (58ms)

● dlq creates an alarm

expect(received).toMatchSnapshot()

Snapshot name: `dlq creates an alarm 1`

- Snapshot
+ Received

@@ -19,11 +19,11 @@
               },
             ],
             "EvaluationPeriods": 1,
             "MetricName": "ApproximateNumberOfMessagesVisible",
             "Namespace": "AWS/SQS",
     -       "Period": 300,
     +       "Period": 60,
             "Statistic": "Maximum",
             "Threshold": 1,
           },
           "Type": "AWS::CloudWatch::Alarm",
         },

 › 1 snapshot failed.
Snapshot Summary
 › 1 snapshot failed from 1 test suite. Inspect your code changes or run `npm test -- -u` to update them.

Jest is telling you that the change you just made changed the emitted Period attribute from 300 to 60. You now have the choice of undoing your code change if this result was accidental, or committing to the new snapshot if you intended to make this change. To commit to the new snapshot, run:

npm test -- -u

Jest will tell you that it updated the snapshot. You’ve now locked in the new alarm period:

PASS  test/dead-letter-queue.test.js
 ✓ dlq creates an alarm (51ms)

 › 1 snapshot updated.
Snapshot Summary
 › 1 snapshot updated

Dealing with change

Let’s return to the DeadLetterQueue construct. Messages go to the dead letter queue when something is wrong with the primary queue processor, and you are notified via an alarm. After you fix the problem with the queue processor, you’ll usually want to redrive the messages from the dead letter queue, back to the primary queue, to have them processed as usual.

Messages only exist in a queue for a limited time though. To give yourself the greatest chance of recovering the messages from the dead letter queue, set the lifetime of messages in the dead letter queue (called the retention period) to the maximum time of 2 weeks. You make the following changes to your DeadLetterQueue construct:

export class DeadLetterQueue extends sqs.Queue {
  constructor(parent: Construct, id: string) {
    super(parent, id, {
      // Maximum retention period
      retentionPeriod: Duration.days(14)
    });
    // ...
  }
}

Now run the tests again:

$ npm run build && npm test
FAIL test/dead-letter-queue.test.js
✕ dlq creates an alarm (79ms)

    ● dlq creates an alarm

    expect(received).toMatchSnapshot()

    Snapshot name: `dlq creates an alarm 1`

    - Snapshot
    + Received

    @@ -1,8 +1,11 @@
      Object {
        "Resources": Object 
          "DLQ581697C4": Object {
    +       "Properties": Object {
    +         "MessageRetentionPeriod": 1209600,
    +       },
            "Type": "AWS::SQS::Queue",
         },
         "DLQAlarm008FBE3A": Object {
           "Properties": Object {
             "AlarmDescription": "There are messages in the Dead Letter Queue",

  › 1 snapshot failed.
Snapshot Summary
  › 1 snapshot failed from 1 test suite. Inspect your code changes or run `npm test -- -u` to update them.

The snapshot test broke again, because you added a retention period property. Even though the test was only intended to make sure that the DeadLetterQueue construct created an alarm, it was inadvertently also testing that the queue was created with default options.

Writing fine-grained assertions on resources

Snapshot tests are convenient to write and have their place for detecting accidental change. We use them in the CDK for our integration tests when validating larger bits of functionality all together. If a change causes an integration test’s template to deviate from its snapshot, we use that as a trigger to tell us we need to do extra validation, for example actually deploying the template through AWS CloudFormation and verifying our infrastructure still works.

In the CDK’s extensive suite of unit tests, we don’t want to revisit all the tests any time we make a change. To avoid this, we use the custom assertions in the @aws-cdk/assert/jest module to write fine-grained tests that verify only part of the construct’s behavior at a time, i.e. only the part we’re interested in for that particular test. For example, the test called “dlq creates an alarm” should assert that an alarm gets created with the appropriate metric, and it should not make any assertions on the properties of the queue that gets created as part of that test.

To write this test, take a look at the AWS::CloudWatch::Alarm resource specification in CloudFormation and decide which properties and values you want the assertion library to guarantee. In this case, you're interested in the Namespace, MetricName, and Dimensions properties. You can use the expect(stack).toHaveResource(...) assertion to make sure those have the values you want. To get access to that assertion, first import @aws-cdk/assert/jest, which extends the assertions available when you type expect(…). Putting this all together, your test should look like this:

import '@aws-cdk/assert/jest';

// ...
test('dlq creates an alarm', () => {
  const stack = new Stack();

  new dlq.DeadLetterQueue(stack, 'DLQ');

  expect(stack).toHaveResource('AWS::CloudWatch::Alarm', {
    MetricName: "ApproximateNumberOfMessagesVisible",
    Namespace: "AWS/SQS",
    Dimensions: [
      {
        Name: "QueueName",
        Value: { "Fn::GetAtt": [ "DLQ581697C4", "QueueName" ] }
      }
    ],
  });
});

This test asserts that an Alarm is created on the ApproximateNumberOfMessagesVisible metric of the dead letter queue (by means of the { Fn::GetAtt } intrinsic). If you run Jest now, it will warn you about an existing snapshot that your test no longer uses, so get rid of it by running npm test -- -u.

You can now add a second test for the retention period:

test('dlq has maximum retention period', () => {
  const stack = new Stack();

  new dlq.DeadLetterQueue(stack, 'DLQ');

  expect(stack).toHaveResource('AWS::SQS::Queue', {
    MessageRetentionPeriod: 1209600
  });
});

Run the tests to make sure everything passes:

$ npm run build && npm test
 
PASS  test/dead-letter-queue.test.js
  ✓ dlq creates an alarm (48ms)
  ✓ dlq has maximum retention period (15ms)

Test Suites: 1 passed, 1 total
Tests:       2 passed, 2 total

It does!

Validating construct configuration

Maybe you want to make the retention period configurable, while validating that the user-provided value falls into an acceptable range. You’d create a Props interface for the construct and add a check on the allowed values that your construct will accept:

export interface DeadLetterQueueProps {
    /**
     * The number of days messages will live in the dead letter queue
     *
     * Cannot exceed 14 days.
     *
     * @default 14
     */
    retentionDays?: number;
}

export class DeadLetterQueue extends sqs.Queue {
  public readonly messagesInQueueAlarm: cloudwatch.IAlarm;

  constructor(scope: Construct, id: string, props: DeadLetterQueueProps = {}) {
    if (props.retentionDays !== undefined && props.retentionDays > 14) {
      throw new Error('retentionDays may not exceed 14 days');
    }

    super(scope, id, {
        // Given retention period or maximum
        retentionPeriod: Duration.days(props.retentionDays || 14)
    });
    // ...
  }
}

To test that your new feature actually does what you expect, you’ll write two tests:

  • One that checks a configured value ends up in the template; and
  • One which supplies an incorrect value to the construct and checks that you get the error you’re expecting.

test('retention period can be configured', () => {
  const stack = new Stack();

  new dlq.DeadLetterQueue(stack, 'DLQ', {
    retentionDays: 7
  });

  expect(stack).toHaveResource('AWS::SQS::Queue', {
    MessageRetentionPeriod: 604800
  });
});

test('configurable retention period cannot exceed 14 days', () => {
  const stack = new Stack();

  expect(() => {
    new dlq.DeadLetterQueue(stack, 'DLQ', {
      retentionDays: 15
    });
  }).toThrowError(/retentionDays may not exceed 14 days/);
});

Run the tests to confirm:

$ npm run build && npm test

PASS  test/dead-letter-queue.test.js
  ✓ dlq creates an alarm (62ms)
  ✓ dlq has maximum retention period (14ms)
  ✓ retention period can be configured (18ms)
  ✓ configurable retention period cannot exceed 14 days (1ms)

Test Suites: 1 passed, 1 total
Tests:       4 passed, 4 total

You’ve confirmed that your feature works, and that you’re correctly validating the user’s input.

As a bonus, because your previous tests still pass, you also know that you didn’t change any of the behavior when the user doesn’t specify any arguments, which is great news!

Conclusion

You’ve written a reusable construct and covered its features with resource assertion and validation tests. Whether you’re planning to write tests for your own infrastructure application, for your own reusable constructs, or to contribute to the CDK on GitHub, I hope this blog post has given you some mental tools for thinking about testing your infrastructure code.

Finally, two values I’d like to instill in you when you are writing tests:

  • Treat test code like you would treat application code. Test code will live in your codebase just as long as regular code and is equally subject to change. Don’t copy/paste setup lines or common assertions all over the place; take some extra time to factor out commonalities into helper functions (see the sketch after this list). Your future self will thank you.
  • Don’t assert too much in one test. Preferably, a test should test one and only one behavior. If you accidentally break that behavior, you would prefer exactly one test to fail, and the test name will tell you exactly what you broke. There’s nothing worse than changing something trivial and having dozens of tests fail and need to be updated because they were accidentally asserting some behavior other than what the test was for. This does mean that—regardless of how convenient they are—you should be using snapshot tests sparingly, as all snapshot tests are going to fail if literally anything about the construct behavior changes, and you’re going to have to go back and scrutinize all failures to make sure nothing accidentally slipped by.
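To make the first point concrete, here is a minimal, hypothetical helper for the tests in this post (the construct’s import path is an assumption and may differ in your project):

// test/helpers.ts: a sketch of shared setup for the DeadLetterQueue tests
import { Stack } from '@aws-cdk/core';
import * as dlq from '../lib/dead-letter-queue'; // assumed location of the construct

// Creates a fresh stack containing a DeadLetterQueue, so every test starts
// from the same two lines of setup.
export function stackWithDeadLetterQueue(props?: dlq.DeadLetterQueueProps): Stack {
  const stack = new Stack();
  new dlq.DeadLetterQueue(stack, 'DLQ', props);
  return stack;
}

A test then shrinks to its single assertion:

test('dlq has maximum retention period', () => {
  const stack = stackWithDeadLetterQueue();

  expect(stack).toHaveResource('AWS::SQS::Queue', {
    MessageRetentionPeriod: 1209600
  });
});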

Happy testing!

from AWS Developer Blog https://aws.amazon.com/blogs/developer/testing-infrastructure-with-the-aws-cloud-development-kit-cdk/

AWS Tech Talk: Infrastructure is Code with the AWS CDK

AWS Tech Talk: Infrastructure is Code with the AWS CDK

If you missed the Infrastructure is Code with the AWS Cloud Development Kit (AWS CDK) Tech Talk last week, you can now watch it online on the AWS Online Tech Talks channel. Elad and I had a ton of fun building this little URL shortener sample app to demo the AWS CDK for Python. If you aren’t a Python developer, don’t worry! The Python code we use is easy to understand and translates directly to other languages. Plus, you can learn a lot about the AWS CDK and get a tour of the AWS Construct Library.

Specifically, in this tech talk, you can see us:

  • Build a URL shortener service using AWS Constructs for AWS Lambda, Amazon API Gateway and Amazon DynamoDB.
  • Demonstrate the concept of shared infrastructure through a base CDK stack class that includes APIs for accessing shared resources such as a domain name and a VPC.
  • Use AWS Fargate to create a custom construct for a traffic generator.
  • Use a 3rd party construct library which automatically defines a monitoring dashboard and alarms for supported resources.

If you’re hungry for more after watching the Tech Talk, check out these resources to learn more about the AWS CDK:

  • https://cdkworkshop.com — This fun, online workshop takes an hour or two to complete and is available in both TypeScript and Python. You learn how to work with the AWS CDK by building an application using AWS Lambda, Amazon DynamoDB, and Amazon API Gateway.
  • https://www.github.com/aws-samples/aws-cdk-examples — After you know the basics of the AWS CDK, you’re ready to jump into the AWS CDK Examples repo! Learn more about the Constructs available in the AWS Construct Library. There are examples available for TypeScript, Python, C#, and Java. You can find the URL shortener sample app from our Tech Talk here, too.

I hope you enjoy learning about the AWS CDK. Let us know what other types of examples, apps, or construct libraries you want to see us build in demos or sample code!

Happy constructing!

Jason and Elad

 

from AWS Developer Blog https://aws.amazon.com/blogs/developer/aws-tech-talk-infrastructure-is-code-with-the-aws-cdk/

Setting up an Android application with AWS SDK for C++

Setting up an Android application with AWS SDK for C++

The AWS SDK for C++ can build and run on many different platforms, including Android. In this post, I walk you through building and running a sample application on an Android device.

Overview

I cover the following topics:

  • Building and installing the SDK for C++ and creating a desktop application that implements the basic functionality.
  • Setting up the environment for Android development.
  • Building and installing the SDK for C++ for Android and transplanting the desktop application to the Android platform with the cross-compiled SDK.

Prerequisites

To get started, you need the following resources:

  • An AWS account
  • A development environment with Git, CMake, and a C++ compiler to build and install the AWS SDK for C++ (cloned from GitHub)

To set up the application on the desktop

Follow these steps to set up the application on your desktop:

  • Create an Amazon Cognito identity pool
  • Build and test the desktop application

Create an Amazon Cognito identity pool

For this demo, I use unauthenticated identities, which typically belong to guest users. To learn about unauthenticated and authenticated identities and choose the one that fits your business, check out Using Identity Pools.

In the Amazon Cognito console, choose Manage identity pools, Create new identity pool. Enter an identity pool name, like “My Android App with CPP SDK”, and choose Enable access to unauthenticated identities.

Next, choose Create Pool, View Details. Two Role Summary sections should display, one for authenticated and the other for unauthenticated identities.

For the unauthenticated identities, choose View Policy Document, Edit. Under Action, add the following line:

s3:ListAllMyBuckets

After you have completed the preceding steps, the policy should read as follows:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "mobileanalytics:PutEvents",
                "cognito-sync:*"
            ],
            "Resource": "*"
        }
    ]
}

To finish the creation, choose Allow.

On the next page, in the Get AWS Credentials section, copy the identity pool ID and keep it somewhere to use later. Or, you can find it after you choose Edit identity pool in the Amazon Cognito console. The identity pool ID has the following format:

<region>:<uuid>

Build and test the desktop application

Before building an Android application, you can build a regular application with the SDK for C++ in your desktop environment for testing purposes. Later, you modify the source code and make the CMake script switch its build target to Android.

Here’s how to build and install the SDK for C++ statically:

cd <workspace>
git clone https://github.com/aws/aws-sdk-cpp.git
mkdir build_sdk_desktop
cd build_sdk_desktop
cmake ../aws-sdk-cpp \
    -DBUILD_ONLY="identity-management;s3" \
    -DBUILD_SHARED_LIBS=OFF \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_INSTALL_PREFIX="<workspace>/install_sdk_desktop"
cmake --build .
cmake --build . --target install

If you install the SDK for C++ successfully, you can find libaws-cpp-sdk-*.a (or aws-cpp-sdk-*.lib for Windows) under <workspace>/install_sdk_desktop/lib/ (or <workspace>/install_sdk_desktop/lib64).

Next, build the application and link it to the library that you built. Create a folder <workspace>/app_list_all_buckets and place two files under this directory:

  • main.cpp (source file)
  • CMakeLists.txt (CMake file)

// main.cpp
#include <iostream>
#include <aws/core/Aws.h>
#include <aws/core/utils/Outcome.h>
#include <aws/core/utils/logging/AWSLogging.h>
#include <aws/core/client/ClientConfiguration.h>
#include <aws/core/auth/AWSCredentialsProvider.h>
#include <aws/identity-management/auth/CognitoCachingCredentialsProvider.h>
#include <aws/s3/S3Client.h>

using namespace Aws::Auth;
using namespace Aws::CognitoIdentity;
using namespace Aws::CognitoIdentity::Model;

static const char ALLOCATION_TAG[] = "ListAllBuckets";
static const char ACCOUNT_ID[] = "your-account-id";
static const char IDENTITY_POOL_ID[] = "your-cognito-identity-id";

int main()
{
    Aws::SDKOptions options;
    options.loggingOptions.logLevel = Aws::Utils::Logging::LogLevel::Debug;
    Aws::InitAPI(options);

    Aws::Client::ClientConfiguration config;
    auto cognitoIdentityClient = Aws::MakeShared<CognitoIdentityClient>(ALLOCATION_TAG, Aws::MakeShared<AnonymousAWSCredentialsProvider>(ALLOCATION_TAG), config);
    auto cognitoCredentialsProvider = Aws::MakeShared<CognitoCachingAnonymousCredentialsProvider>(ALLOCATION_TAG, ACCOUNT_ID, IDENTITY_POOL_ID, cognitoIdentityClient);

    Aws::S3::S3Client s3Client(cognitoCredentialsProvider, config);
    auto listBucketsOutcome = s3Client.ListBuckets();
    Aws::StringStream ss;
    if (listBucketsOutcome.IsSuccess())
    {
        ss << "Buckets:" << std::endl;
        for (auto const& bucket : listBucketsOutcome.GetResult().GetBuckets())
        {
            ss << "  " << bucket.GetName() << std::endl;
        }
    }
    else
    {
        ss << "Failed to list buckets." << std::endl;
    }
    std::cout << ss.str() << std::endl;
    Aws::ShutdownAPI(options);
    return 0;
}

# CMakeLists.txt
cmake_minimum_required(VERSION 3.3)
set(CMAKE_CXX_STANDARD 11)

project(list_all_buckets LANGUAGES CXX)
find_package(AWSSDK REQUIRED COMPONENTS s3 identity-management)
add_executable(${PROJECT_NAME} "main.cpp")
target_link_libraries(${PROJECT_NAME} ${AWSSDK_LINK_LIBRARIES})

Build and test this desktop application with the following commands:

cd <workspace>
mkdir build_app_desktop
cd build_app_desktop
cmake ../app_list_all_buckets \
    -DBUILD_SHARED_LIBS=ON \
    -DCMAKE_PREFIX_PATH="<workspace>/install_sdk_desktop"
cmake --build .
./list_all_buckets # or ./Debug/list_all_buckets.exe for Windows

The output should read as follows:

Buckets:
  <bucket_1>
  <bucket_2>
  ...

Now you have a desktop application. You’ve accomplished this without touching anything related to Android. The next section covers Android instructions.

To set up the application on Android with AWS SDK for C++

Follow these steps to set up the application on Android with the SDK for C++:

  • Set up Android Studio
  • Cross-compile the SDK for C++ and the library
  • Build and run the application in Android Studio

Set up Android Studio

First, download and install Android Studio. For more detailed instructions, see the Android Studio Install documentation.

Next, open Android Studio and create a new project. On the Choose your project screen, as shown in the following screenshot, choose Native C++, Next.

Complete all fields. In the following example, you build the SDK for C++ with Android API level 19, so the Minimum API Level is “API 19: Android 4.4 (KitKat)”.

Choose C++ 11 for C++ Standard and choose Finish for the setup phase.

The first time that you open Android Studio, you might see “missing NDK and CMake” errors during automatic installation. Ignore these warnings for the moment; you will install the Android NDK and CMake manually. Alternatively, you can accept the license to install the NDK and CMake from within Android Studio, which should suppress the warnings.

After you choose Finish, Android Studio generates a sample application. This application displays a short message on the screen of your device. For more details, see Create a new project with C++.

Starting from this sample, take the following steps to build your application:

  • First, cross-compile the SDK for C++ for Android.
  • Modify the source code and CMake script to build list_all_buckets as a shared object library (liblist_all_buckets.so) rather than an executable. This library exposes a function, listAllBuckets(), that returns the list of buckets.
  • Specify the path to the library in the module’s build.gradle file so that the Android application can find it.
  • Load the library in MainActivity with System.loadLibrary("list_all_buckets") so that the Android application can use the listAllBuckets() function.
  • Call the listAllBuckets() function from the onCreate() method of MainActivity.

More details for each step will be given in the following sections.

Cross-compile the SDK for C++ and the library

Use the Android NDK to cross-compile the SDK for C++. This example uses version r19c. To find out whether Android Studio has already downloaded the NDK by default, check the following locations:

  • Linux: ~/Android/Sdk/ndk-bundle
  • MacOS: ~/Library/Android/sdk/ndk-bundle
  • Windows: C:\Users\<username>\AppData\Local\Android\Sdk\ndk\<version>

Alternatively, download the Android NDK directly.

To cross-compile SDK for C++, run the following code:

cd <workspace>
mkdir build_sdk_android
cd build_sdk_android
cmake ../aws-sdk-cpp -DNDK_DIR="<path-to-android-ndk>" \
    -DBUILD_SHARED_LIBS=OFF \
    -DCMAKE_BUILD_TYPE=Release \
    -DCUSTOM_MEMORY_MANAGEMENT=ON \
    -DTARGET_ARCH=ANDROID \
    -DANDROID_NATIVE_API_LEVEL=19 \
    -DBUILD_ONLY="identity-management;s3" \
    -DCMAKE_INSTALL_PREFIX="<workspace>/install_sdk_android"
cmake --build . --target CURL # This step is only required on Windows.
cmake --build .
cmake --build . --target install

On Windows, you might see the error message: “CMAKE_SYSTEM_NAME is ‘Android’ but ‘NVIDIA Nsight Tegra Visual Studio Edition’ is not installed.” In that case, install Ninja and change the generator from Visual Studio to Ninja by passing -GNinja as another parameter to your CMake command.

To build list_all_buckets as an Android-targeted shared object library, you must change the source code and CMake script. More specifically, you must alter the source code as follows:

Replace the main() function with Java_com_example_mynativecppapplication_MainActivity_listAllBuckets(). In the Android application, the Java code calls this function through JNI (Java Native Interface). Your function name may differ, based on your package name and activity name. For this demo, the package name is com.example.mynativecppapplication, the activity name is MainActivity, and the function called from the Java code is listAllBuckets().

Enable LogcatLogSystem, so that you can debug your Android application and see the output in the logcat console.

Your Android devices or emulators may be missing CA certificates, so you should push them to your device and specify their path in the client configuration. In this example, use CA certificates extracted from Mozilla in PEM format.

Download the certificate bundle.

Push this file to your Android devices:

# Change directory to the location of adb
cd <path-to-android-sdk>/platform-tools
# Replace "com.example.mynativecppapplication" with your package name
./adb shell mkdir -p /sdcard/Android/data/com.example.mynativecppapplication/certs
# push the PEM file to your devices
./adb push cacert.pem /sdcard/Android/data/com.example.mynativecppapplication/certs

Specify the path in the client configuration:

config.caFile = "/sdcard/Android/data/com.example.mynativecppapplication/certs/cacert.pem";

The complete source code looks like the following:

// main.cpp
#if __ANDROID__
#include <android/log.h>
#include <jni.h>
#include <aws/core/platform/Android.h>
#include <aws/core/utils/logging/android/LogcatLogSystem.h>
#endif
#include <iostream>
#include <aws/core/Aws.h>
#include <aws/core/utils/Outcome.h>
#include <aws/core/utils/logging/AWSLogging.h>
#include <aws/core/client/ClientConfiguration.h>
#include <aws/core/auth/AWSCredentialsProvider.h>
#include <aws/identity-management/auth/CognitoCachingCredentialsProvider.h>
#include <aws/s3/S3Client.h>

using namespace Aws::Auth;
using namespace Aws::CognitoIdentity;
using namespace Aws::CognitoIdentity::Model;

static const char ALLOCATION_TAG[] = "ListAllBuckets";
static const char ACCOUNT_ID[] = "your-account-id";
static const char IDENTITY_POOL_ID[] = "your-cognito-identity-id";

#ifdef __ANDROID__
extern "C" JNIEXPORT jstring JNICALL
Java_com_example_mynativecppapplication_MainActivity_listAllBuckets(JNIEnv* env, jobject classRef)
#else
int main()
#endif
{
    Aws::SDKOptions options;
#ifdef __ANDROID__
    AWS_UNREFERENCED_PARAM(classRef);
    Aws::Utils::Logging::InitializeAWSLogging(Aws::MakeShared<Aws::Utils::Logging::LogcatLogSystem>(ALLOCATION_TAG, Aws::Utils::Logging::LogLevel::Debug));
#else
    options.loggingOptions.logLevel = Aws::Utils::Logging::LogLevel::Debug;
#endif
    Aws::InitAPI(options);

    Aws::Client::ClientConfiguration config;
#ifdef __ANDROID__
    config.caFile = "/sdcard/Android/data/com.example.mynativecppapplication/certs/cacert.pem";
#endif
    auto cognitoIdentityClient = Aws::MakeShared<CognitoIdentityClient>(ALLOCATION_TAG, Aws::MakeShared<AnonymousAWSCredentialsProvider>(ALLOCATION_TAG), config);
    auto cognitoCredentialsProvider = Aws::MakeShared<CognitoCachingAnonymousCredentialsProvider>(ALLOCATION_TAG, ACCOUNT_ID, IDENTITY_POOL_ID, cognitoIdentityClient);

    Aws::S3::S3Client s3Client(cognitoCredentialsProvider, config);
    auto listBucketsOutcome = s3Client.ListBuckets();
    Aws::StringStream ss;
    if (listBucketsOutcome.IsSuccess())
    {
        ss << "Buckets:" << std::endl;
        for (auto const& bucket : listBucketsOutcome.GetResult().GetBuckets())
        {
            ss << "  " << bucket.GetName() << std::endl;
        }
    }
    else
    {
        ss << "Failed to list buckets." << std::endl;
    }

#if __ANDROID__
    std::string allBuckets(ss.str().c_str());
    Aws::ShutdownAPI(options);
    return env->NewStringUTF(allBuckets.c_str());
#else
    std::cout << ss.str() << std::endl;
    Aws::ShutdownAPI(options);
    return 0;
#endif
}

Next, make the following changes to the CMake script:

  • Set the default values for the parameters used for the Android build, including:
    • The default Android API Level is 19
    • The default Android ABI is armeabi-v7a
    • Use libc++ as the standard library by default
    • Use android.toolchain.cmake supplied by Android NDK by default
  • Build list_all_buckets as a library rather than an executable
  • Link to the external libraries built in the previous step: zlib, ssl, crypto, and curl

# CMakeLists.txt
cmake_minimum_required(VERSION 3.3)
set(CMAKE_CXX_STANDARD 11)

if(TARGET_ARCH STREQUAL "ANDROID")
    if(NOT NDK_DIR)
        set(NDK_DIR $ENV{ANDROID_NDK})
    endif()
    if(NOT IS_DIRECTORY "${NDK_DIR}")
        message(FATAL_ERROR "Could not find Android NDK (${NDK_DIR}); either set the ANDROID_NDK environment variable or pass the path in via -DNDK_DIR=..." )
    endif()

    if(NOT CMAKE_TOOLCHAIN_FILE)
        set(CMAKE_TOOLCHAIN_FILE "${NDK_DIR}/build/cmake/android.toolchain.cmake")
    endif()

    if(NOT ANDROID_ABI)
        set(ANDROID_ABI "armeabi-v7a")
        message(STATUS "Android ABI: none specified, defaulting to ${ANDROID_ABI}")
    else()
        message(STATUS "Android ABI: ${ANDROID_ABI}")
    endif()

    if(BUILD_SHARED_LIBS)
        set(ANDROID_STL "c++_shared")
    else()
        set(ANDROID_STL "c++_static")
    endif()

    if(NOT ANDROID_NATIVE_API_LEVEL)
        set(ANDROID_NATIVE_API_LEVEL "android-19")
        message(STATUS "Android API Level: none specified, defaulting to ${ANDROID_NATIVE_API_LEVEL}")
    else()
        message(STATUS "Android API Level: ${ANDROID_NATIVE_API_LEVEL}")
    endif()

    list(APPEND CMAKE_FIND_ROOT_PATH ${CMAKE_PREFIX_PATH})
endif()

project(list_all_buckets LANGUAGES CXX)
find_package(AWSSDK REQUIRED COMPONENTS s3 identity-management)
if(TARGET_ARCH STREQUAL "ANDROID")
    set(SUFFIX so)
    add_library(zlib STATIC IMPORTED)
    set_target_properties(zlib PROPERTIES IMPORTED_LOCATION ${EXTERNAL_DEPS}/zlib/lib/libz.a)
    add_library(ssl STATIC IMPORTED)
    set_target_properties(ssl PROPERTIES IMPORTED_LOCATION ${EXTERNAL_DEPS}/openssl/lib/libssl.a)
    add_library(crypto STATIC IMPORTED)
    set_target_properties(crypto PROPERTIES IMPORTED_LOCATION ${EXTERNAL_DEPS}/openssl/lib/libcrypto.a)
    add_library(curl STATIC IMPORTED)
    set_target_properties(curl PROPERTIES IMPORTED_LOCATION ${EXTERNAL_DEPS}/curl/lib/libcurl.a)
    add_library(${PROJECT_NAME} "main.cpp")
else()
    add_executable(${PROJECT_NAME} "main.cpp")
endif()
target_link_libraries(${PROJECT_NAME} ${AWSSDK_LINK_LIBRARIES})

Finally, build this library with the following command:

cd <workspace>
mkdir build_app_android
cd build_app_android
cmake ../app_list_all_buckets \
    -DNDK_DIR="<path-to-android-ndk>" \
    -DBUILD_SHARED_LIBS=ON \
    -DTARGET_ARCH=ANDROID \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_PREFIX_PATH="<workspace>/install_sdk_android" \
    -DEXTERNAL_DEPS="<workspace>/build_sdk_android/external"
cmake --build .

This build results in the shared library liblist_all_buckets.so under <workspace>/build_app_android/. It’s time to switch to Android Studio.

Build and run the application in Android Studio

First, the application must find the library that you built (liblist_all_buckets.so) and the shared C++ standard library (libc++_shared.so). The default search path for JNI libraries is app/src/main/jniLibs/<android-abi>. Create the directory <your-android-application-root>/app/src/main/jniLibs/armeabi-v7a/ and copy the following files into it:

<workspace>/build_app_android/liblist_all_buckets.so

  • For Linux: <android-ndk>/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/lib/arm-linux-androideabi/libc++_shared.so
  • For MacOS: <android-ndk>/toolchains/llvm/prebuilt/darwin-x86_64/sysroot/usr/lib/arm-linux-androideabi/libc++_shared.so
  • For Windows: <android-ndk>/toolchains/llvm/prebuilt/windows-x86_64/sysroot/usr/lib/arm-linux-androideabi/libc++_shared.so
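On a Linux host, for example, the copy could look like the following (the placeholders match the paths above; adjust them to your own NDK version and project location):

# Create the JNI libraries directory for the armeabi-v7a ABI
mkdir -p <your-android-application-root>/app/src/main/jniLibs/armeabi-v7a
# Copy the library you built, plus the shared C++ standard library
cp <workspace>/build_app_android/liblist_all_buckets.so \
   <your-android-application-root>/app/src/main/jniLibs/armeabi-v7a/
cp <android-ndk>/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/lib/arm-linux-androideabi/libc++_shared.so \
   <your-android-application-root>/app/src/main/jniLibs/armeabi-v7a/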

Next, open the build.gradle file for your module and remove the externalNativeBuild{} block, because you are using prebuilt libraries, instead of building the source with the Android application.

Then, edit MainActivity.java, which is under app/src/main/java/<package-name>/. Replace every occurrence of native-lib with list_all_buckets, and replace every stringFromJNI() with listAllBuckets(). The whole Java file looks like the following code example:

// MainActivity.java
package com.example.mynativecppapplication;

import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.widget.TextView;

public class MainActivity extends AppCompatActivity {

    // Used to load the 'list_all_buckets' library on application startup.
    static {
        System.loadLibrary("list_all_buckets");
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Example of a call to a native method
        TextView tv = findViewById(R.id.sample_text);
        tv.setText(listAllBuckets());
    }

    /**
    * A native method that is implemented by the 'list_all_buckets' native library,
    * which is packaged with this application.
    */
    public native String listAllBuckets();
}

Finally, don’t forget to grant internet access permission to your application by adding the following lines in the AndroidManifest.xml, located at app/src/main/:

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.example.mynativecppapplication">
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
    ...
</manifest>

To run the application on Android emulator, make sure that the CPU/ABI is armeabi-v7a for the system image. That’s what you specified when you cross-compiled the SDK and list_all_buckets library for the Android platform.

Run this application by choosing the Run icon or choosing Run, Run [app]. You should see that the application lists all buckets, as shown in the following screenshot.

Summary

With Android Studio and its included tools, you can cross-compile the AWS SDK for C++ and build a sample Android application to get temporary Amazon Cognito credentials and list all S3 buckets. Starting from this simple application, AWS hopes to see more exciting integrations with the SDK for C++.

As always, AWS welcomes your feedback and comments. Feel free to open an issue on GitHub if you have questions, or submit a pull request to contribute.

from AWS Developer Blog https://aws.amazon.com/blogs/developer/setting-up-an-android-application-with-aws-sdk-for-c/

Configuring boto to validate HTTPS certificates

Configuring boto to validate HTTPS certificates

We strongly recommend upgrading from boto to boto3, the latest major version of the AWS SDK for Python. The previous major version, boto, does not default to validating HTTPS certificates for Amazon S3 when you are:

  1. Using a Python version less than 2.7.9 or
  2. Using Python 2.7.9 or greater and are connecting to S3 through a proxy

If you are unable to upgrade to boto3, you should configure boto to always validate HTTPS certificates. Be sure to test these changes. You can force HTTPS certificate validation by either:

  1. Setting https_validate_certificates to True in your boto config file (see the sample configuration after this list). For more information on how to use the boto config file, please refer to its documentation, or
  2. Setting validate_certs to True when instantiating an S3Connection:
    >>> from boto.s3.connection import S3Connection
    >>> conn = S3Connection(validate_certs=True)
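
For the first option, a minimal entry in the boto config file (commonly ~/.boto) looks like the following:

[Boto]
https_validate_certificates = True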

To get the best experience, we always recommend remaining up-to-date with the latest version of the AWS SDKs and runtimes.

from AWS Developer Blog https://aws.amazon.com/blogs/developer/configure-boto-to-validate-https-certificates/