Tag: SDK

Deploy a VueJS app with the Amplify Console using AWS CloudFormation

This article was written by Simon Thulbourn, Solutions Architect, AWS.

Developers and Operations people love automation. It gives them the power to introduce repeatability into their applications. The provisioning of infrastructure components is no different. Being able to create and manage resources through the use of AWS CloudFormation is a powerful way to run and rerun the same code to create resources in AWS across accounts.

Today, the Amplify Console launched support for AWS CloudFormation resources to give developers the ability to have reliable and repeatable access to the Amplify Console service. The Amplify Console offers three new resources:

  • AWS::Amplify::App
  • AWS::Amplify::Branch
  • AWS::Amplify::Domain

For newcomers, AWS Amplify Console provides a Git-based workflow for building, deploying, and hosting web applications, whether they're built with Angular, ReactJS, VueJS, or something else. These web applications can then consume APIs based on GraphQL or serverless technologies, enabling fullstack serverless applications on AWS.

Working with CloudFormation

As an example, you’ll deploy the Todo example app from VueJS using Amplify Console and the new CloudFormation resources.

You’ll start by forking the Vue repository on GitHub to your account. You have to fork the repository since Amplify Console will want to add a webhook and clone the repository for future builds.

You’ll also create a new personal access token on GitHub, since you’ll need to embed one in the CloudFormation template. You can read more about creating personal access tokens on GitHub’s website. The token needs the “repo” OAuth scope.

Note: Personal access tokens should be treated as a secret.

You can deploy the Todo application using the CloudFormation template at the end of this blog post. This CloudFormation template will create an Amplify Console App, Branch & Domain with a TLS certificate and an IAM role. To deploy the CloudFormation template, you can either use the AWS Console or the AWS CLI. In this example, we’re using the AWS CLI:

aws cloudformation deploy \
  --template-file ./template.yaml \
  --capabilities CAPABILITY_IAM \
  --parameter-overrides \
      OauthToken=<GITHUB PERSONAL ACCESS TOKEN> \
      Repository=https://github.com/sthulb/vue \
      Domain=example.com \
  --stack-name TodoApp

After deploying the CloudFormation template, you need to go into the Amplify Console and trigger a build: CloudFormation can provision the Amplify resources, but it can't trigger a build, since it creates resources without performing actions against them.
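If you prefer to stay on the command line, you can also kick off that first build with the AWS CLI; a minimal sketch, assuming you substitute the app ID that Amplify Console assigned to your app:

aws amplify start-job \
  --app-id <APP ID> \
  --branch-name master \
  --job-type RELEASE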

Diving deeper into the template

The CloudFormation template needs to be updated with your forked GitHub URL, the OAuth token created above, and a custom domain you own. The AmplifyApp resource is your project definition: it is a collection of all the branches (AmplifyBranch resources) in your repository. The BuildSpec describes the settings used to build and deploy the branches in your app. In this example, we are deploying an example Todo app, which consists of four files. The Todo app expects a vue.min.js file to be available at https://a1b2c3.amplifyapp.com/dist/vue.min.js. As part of the buildspec, we made sure vue.min.js was in the deployment artifact, but not at that location. We used the CustomRules property to rewrite the URL, so that requests for https://a1b2c3.amplifyapp.com/dist/vue.min.js are served the file at https://a1b2c3.amplifyapp.com/vue.min.js.

The AmplifyDomain resource allows you to connect your domain (https://yourdomain.com) or a subdomain (https://foo.yourdomain.com) so end users can start visiting your site.

Template

AWSTemplateFormatVersion: 2010-09-09

Parameters:
  Repository:
    Type: String
    Description: GitHub Repository URL

  OauthToken:
    Type: String
    Description: GitHub personal access token
    NoEcho: true

  Domain:
    Type: String
    Description: Domain name to host application

Resources:
  AmplifyRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - amplify.amazonaws.com
            Action:
              - sts:AssumeRole
      Policies:
        - PolicyName: Amplify
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action: "amplify:*"
                Resource: "*"

  AmplifyApp:
    Type: "AWS::Amplify::App"
    Properties:
      Name: TodoApp
      Repository: !Ref Repository
      Description: VueJS Todo example app
      OauthToken: !Ref OauthToken
      EnableBranchAutoBuild: true
      BuildSpec: |-
        version: 0.1
        frontend:
          phases:
            build:
              commands:
                - cp dist/vue.min.js examples/todomvc/
          artifacts:
            baseDirectory: examples/todomvc/
            files:
              - '*'
      CustomRules:
        - Source: /dist/vue.min.js
          Target: /vue.min.js
          Status: '200'
      Tags:
        - Key: Name
          Value: Todo
      IAMServiceRole: !GetAtt AmplifyRole.Arn

  AmplifyBranch:
    Type: AWS::Amplify::Branch
    Properties:
      BranchName: master
      AppId: !GetAtt AmplifyApp.AppId
      Description: Master Branch
      EnableAutoBuild: true
      Tags:
        - Key: Name
          Value: todo-master
        - Key: Branch
          Value: master

  AmplifyDomain:
    Type: AWS::Amplify::Domain
    Properties:
      DomainName: !Ref Domain
      AppId: !GetAtt AmplifyApp.AppId
      SubDomainSettings:
        - Prefix: master
          BranchName: !GetAtt AmplifyBranch.BranchName

Outputs:
  DefaultDomain:
    Value: !GetAtt AmplifyApp.DefaultDomain

  MasterBranchUrl:
    Value: !Join [ ".", [ !GetAtt AmplifyBranch.BranchName, !GetAtt AmplifyDomain.DomainName ]]

Conclusion

To start using Amplify Console’s CloudFormation resources, visit the CloudFormation documentation page.

Acknowledgements

All of the code in the VueJS repository is licensed under the MIT license and property of Evan You and contributors.


from AWS Mobile Blog

Referencing the AWS SDK for .NET Standard 2.0 from Unity, Xamarin, or UWP

In March 2019, AWS announced support for .NET Standard 2.0 in the AWS SDK for .NET. AWS also announced plans to remove the Portable Class Library (PCL) assemblies from NuGet packages in favor of the .NET Standard 2.0 binaries.

If you’re starting a new project targeting a platform supported by .NET Standard 2.0, especially recent versions of Unity, Xamarin and UWP, you may want to use the .NET Standard 2.0 assemblies for the AWS SDK instead of the PCL assemblies.

Currently, it’s challenging to consume the .NET Standard 2.0 assemblies from NuGet packages directly in your Unity, Xamarin, or UWP applications. Unfortunately, the new csproj file format and NuGet don’t let you select assemblies for a specific target framework (in this case, .NET Standard 2.0). This limitation can cause problems because NuGet always restores the assemblies for the target framework of the project being built (in this case, one of the legacy PCL assemblies).

Considering this limitation, our guidance is for your application to directly reference the AWS SDK assemblies (DLL files) instead of the NuGet packages.

  1. Go to the NuGet page for the specific package (for example, AWSSDK.Core) and choose Download Package.
  2. Rename the downloaded .nupkg file with a .zip extension.
  3. Open it to extract the assemblies for a specific target framework (for example /lib/netstandard2.0/AWSSDK.Core.dll).

When using Unity (2018.1 or newer), choose .NET 4.x Equivalent as Scripting Runtime Version and copy the AWS SDK for .NET assemblies into the Asset folder.

Because this process can be time-consuming and error-prone, you should use a script to perform the download and extraction, especially if your project references multiple AWS services. The following PowerShell script downloads and extracts all the latest SDK .dll files into the current folder:

<#
.Synopsis
    Downloads all assemblies of the AWS SDK for .NET for a specific target framework.
.DESCRIPTION
    Downloads all assemblies of the AWS SDK for .NET for a specific target framework.
    This script allows specifying a version of the SDK to download or a target framework.

.NOTES
    This script downloads all files to the current folder (the folder returned by Get-Location).
    This script depends on GitHub to retrieve the list of assemblies to download and on NuGet
    to retrieve the relative packages.

.EXAMPLE
   ./DownloadSDK.ps1

   Downloads the latest AWS SDK for .NET assemblies for .NET Standard 2.0.

.EXAMPLE
    ./DownloadSDK.ps1 -TargetFramework net35

    Downloads the latest AWS SDK for .NET assemblies for .NET Framework 3.5.
    
.EXAMPLE
    ./DownloadSDK.ps1 -SDKVersion 3.3.0.0

    Downloads the AWS SDK for .NET version 3.3.0.0 assemblies for .NET Standard 2.0.

.PARAMETER TargetFramework
    The name of the target framework for which to download the AWS SDK for .NET assemblies. It must be a valid Target Framework Moniker, as described in https://docs.microsoft.com/en-us/dotnet/standard/frameworks.

.PARAMETER SDKVersion
    The AWS SDK for .NET version to download. This must be in the full four-number format (e.g., "3.3.0.0") and it must correspond to a tag on the https://github.com/aws/aws-sdk-net/ repository.
#>

Param (
    [Parameter()]
    [ValidateNotNullOrEmpty()]
    [string]$TargetFramework = 'netstandard2.0',
    [Parameter()]
    [ValidateNotNullOrEmpty()]
    [string]$SDKVersion = 'master'
)

function DownloadPackageAndExtractDll
{
    Param (
        [Parameter(Mandatory = $true)]
        [string] $name,
        [Parameter(Mandatory = $true)]
        [string] $version
    )

    Write-Progress -Activity "Downloading $name"

    $packageUri = "https://www.nuget.org/api/v2/package/$name/$version"
    $filePath = [System.IO.Path]::GetTempFileName()
    $WebClient.DownloadFile($packageUri, $filePath)

    #Invoke-WebRequest $packageUri -OutFile $filePath
    try {
        $zipArchive = [System.IO.Compression.ZipFile]::OpenRead($filePath)
        $entry = $zipArchive.GetEntry("lib/$TargetFramework/$name.dll")
        if ($null -ne $entry)
        {
            $entryStream = $entry.Open()
            $dllPath = Get-Location | Join-Path -ChildPath "./$name.dll"
            $dllFileStream = [System.IO.File]::OpenWrite($dllPath)
            $entryStream.CopyTo($dllFileStream)
            $dllFileStream.Close();
        }
    }
    finally {
        if ($null -ne $dllFileStream)
        {
            $dllFileStream.Dispose()
        }
        if ($null -ne $entryStream)
        {
            $entryStream.Dispose()
        }
        if ($null -ne $zipArchive)
        {
            $zipArchive.Dispose()
        }
        Remove-Item $filePath
    }
}

try {
    $WebClient = New-Object System.Net.WebClient
    Add-Type -AssemblyName System.IO.Compression.FileSystem

    $sdkVersionsUri = "https://raw.githubusercontent.com/aws/aws-sdk-net/$SDKVersion/generator/ServiceModels/_sdk-versions.json"
    $versions = Invoke-WebRequest $sdkVersionsUri | ConvertFrom-Json
    DownloadPackageAndExtractDll "AWSSDK.Core" $versions.CoreVersion
    foreach ($service in $versions.ServiceVersions.psobject.Properties)
    {
        DownloadPackageAndExtractDll "AWSSDK.$($service.Name)" $service.Value.Version
    }    
}
finally {
    if ($null -ne $WebClient)
    {
        $WebClient.Dispose()
    } 
}

At this time, not all features specific to the PCL and Unity SDK libraries have been ported over to .NET Standard 2.0. To suggest features, changes, or leave other feedback to make PCL and Unity development easier, open an issue on our aws-sdk-net-issues GitHub repo.

This workaround will only be needed until PCL assemblies are removed from the NuGet packages. At that time, restoring the NuGet packages from an iOS, Android or UWP project (either a Xamarin or Unity project) should result in the .NET Standard 2.0 assemblies being referenced and included in your build outputs.

from AWS Developer Blog https://aws.amazon.com/blogs/developer/referencing-the-aws-sdk-for-net-standard-2-0-from-unity-xamarin-or-uwp/

Amplify Framework Adds Support for AWS Lambda Functions and Amazon DynamoDB Custom Indexes in GraphQL Schemas

Written by Kurt Kemple, Sr. Developer Advocate at AWS, Nikhil Dabhade, Sr. Product Manager at AWS, & Me!

The Amplify Framework is an open source project for building cloud-enabled mobile and web applications. Today, we’re happy to announce new features for the Function and API categories in the Amplify CLI.

It’s now possible to add an AWS Lambda function as a data source for your AWS AppSync API using the GraphQL transformer that is included in the Amplify CLI. You can also grant permissions for interacting with AWS resources from the Lambda function. This updates the associated IAM execution role policies without needing you to perform manual IAM policy updates.

The GraphQL transformer also includes a new @key directive that simplifies the syntax for creating custom indexes and performing advanced query operations with Amazon DynamoDB. This streamlines the process of configuring complex key structures to fit various access patterns when using DynamoDB as a data source.

Adding a Lambda function as a data source for your AWS AppSync API

The new @function directive in the GraphQL transform library provides an easy mechanism to call a Lambda function from a field in your AppSync API. To connect a Lambda data source, add the @function directive to a field in your annotated GraphQL schema that’s managed by the Amplify CLI. You can also create and deploy the Lambda functions by using the Amplify CLI.
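For example, attaching a function to a mutation field looks like the following sketch (the echoMessage field and echofunction function name are made up for illustration):

type Mutation {
  echoMessage(msg: String): String @function(name: "echofunction-${env}")
}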

Let’s look at how you can use this feature.

What are we building?

In this blog post, we create a React JavaScript application that uses a Lambda function as a data source for a GraphQL API. The Lambda function writes to storage, which in this case is Amazon DynamoDB. In addition, we illustrate how you can easily grant create/read/update/delete permissions for interacting with AWS resources such as DynamoDB from a Lambda function.

Setting up the project

Pre-requisites

Download, install and configure the Amplify CLI.

$ npm install -g @aws-amplify/cli 
$ amplify configure

Next, create your project if you don’t already have one. We’re creating a React application here, but you can choose to create a project with any other Amplify-supported framework such as Angular, Vue or Ionic.

$ npx create-react-app my-project

Then change into your project directory, initialize Amplify, and install the Amplify library:

$ cd my-project
$ amplify init
$ npm i aws-amplify

The ‘amplify init’ command initializes the project, sets up deployment resources in the cloud, and makes your project ready for Amplify.

Adding storage to your project

Next, we will set up the backend to add storage using Amazon DynamoDB for your React JavaScript application.

$ amplify add storage
? Please select from one of the below mentioned services NoSQL Database
Welcome to the NoSQL DynamoDB database wizard
This wizard asks you a series of questions to help determine how to set up your NoSQL database table.

? Please provide a friendly name for your resource that will be used to label this category in the project: teststorage
? Please provide table name: teststorage
You can now add columns to the table.
? What would you like to name this column: id
? Please choose the data type: number
? Would you like to add another column? Yes
? What would you like to name this column: email
? Please choose the data type: string
? Would you like to add another column? Yes
? What would you like to name this column: createdAt
? Please choose the data type: string
? Would you like to add another column? No
Before you create the database, you must specify how items in your table are uniquely organized. You do this by specifying a primary key. The primary key uniquely identifies each item in the table so that no two items can have the same key.
This can be an individual column, or a combination that includes a primary key and a sort key.
To learn more about primary keys, see: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html#HowItWorks.CoreComponents.PrimaryKey
? Please choose partition key for the table: id
? Do you want to add a sort key to your table? No
You can optionally add global secondary indexes for this table. These are useful when you run queries defined in a different column than the primary key.
To learn more about indexes, see: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html#HowItWorks.CoreComponents.SecondaryIndexes
? Do you want to add global secondary indexes to your table? No
Successfully added resource teststorage locally

Adding a function to your project

Next, we will add a Lambda function by using the Amplify CLI. We will also grant the Lambda function permissions to interact with the DynamoDB table that we created in the previous step.

$ amplify add function
Using service: Lambda, provided by: awscloudformation
? Provide a friendly name for your resource to be used as a label for this category in the project: addEntry
? Provide the AWS Lambda function name: addEntry
? Choose the function template that you want to use: Hello world function
? Do you want to access other resources created in this project from your Lambda function? Yes
? Select the category storage
? Select the resources for storage category teststorage
? Select the operations you want to permit for teststorage create, read, update, delete

You can access the following resource attributes as environment variables from your Lambda function
var environment = process.env.ENV
var region = process.env.REGION
var storageTeststorageName = process.env.STORAGE_TESTSTORAGE_NAME
var storageTeststorageArn = process.env.STORAGE_TESTSTORAGE_ARN

? Do you want to edit the local lambda function now? Yes 

This will open the Hello world function template file ‘index.js’ in the editor you selected during the ‘amplify init’ step.

Auto populating environment variables for your Lambda function

The Amplify CLI adds the environment variables that represent the AWS resources the Lambda function interacts with, as comments at the top of your index.js file for ease of reference. In this case, the AWS resource is DynamoDB. We want the Lambda function to add an entry to the DynamoDB table with the parameters we pass to the GraphQL API. Replace the contents of the Lambda function with the following code, which uses the environment variables representing the DynamoDB table and region:

/* Amplify Params - DO NOT EDIT
You can access the following resource attributes as environment variables from your Lambda function
var environment = process.env.ENV
var region = process.env.REGION
var storageTeststorageName = process.env.STORAGE_TESTSTORAGE_NAME
var storageTeststorageArn = process.env.STORAGE_TESTSTORAGE_ARN

Amplify Params - DO NOT EDIT */

var AWS = require('aws-sdk');
var region = process.env.REGION
var storageTeststorageName = process.env.STORAGE_TESTSTORAGE_NAME
AWS.config.update({region: region});
var ddb = new AWS.DynamoDB({apiVersion: '2012-08-10'});
var ddb_table_name = storageTeststorageName
var ddb_primary_key = 'id';

function write(params, context) {
  ddb.putItem(params, function(err, data) {
    if (err) {
      console.log("Error", err);
    } else {
      console.log("Success", data);
    }
  });
}

exports.handler = function (event, context) { //eslint-disable-line
  var params = {
    TableName: ddb_table_name,
    // Item must be a DynamoDB attribute map; marshall converts the
    // GraphQL arguments into that shape.
    Item: AWS.DynamoDB.Converter.marshall(event.arguments)
  };

  console.log('len: ' + Object.keys(event).length)
  if (Object.keys(event).length > 0) {
    write(params, context);
  }
};

After you replace the function code, jump back to the command line and press Enter to continue.

Next, run the ‘amplify push’ command to deploy your changes to the AWS cloud.

$ amplify push

Adding and updating the Lambda execution IAM role for Amplify managed resources

When you run the ‘amplify push’ command, the IAM execution role policies associated with the permissions you granted earlier are updated automatically to allow the Lambda function to interact with DynamoDB.

Setting up the API

After completing the function setup, the next step is to add a GraphQL API to your project:

$ amplify add api
? Please select from one of the below mentioned services GraphQL
? Provide API name: myproject
? Choose an authorization type for the API API key
? Do you have an annotated GraphQL schema? No
? Do you want a guided schema creation? Yes
? What best describes your project: Single object with fields (e.g., “Todo” with ID, name, description)
? Do you want to edit the schema now? Yes

This will open the schema.graphql file in the editor you selected during the ‘amplify init’ step.

Replace the annotated schema template located in your <project-root>/amplify/backend/api/<api-name>/schema.graphql file with the following code:

type Customer @model {
  id: ID!
  name: String!
  createdAt: String
}

type Mutation {
  addEntry(id: Int, email: String, createdAt: String): String @function(name: "addEntry-${env}")
}

Check if the updates to your schema are compiled successfully by running the following command:

$ amplify api gql-compile

Now that your API is configured, run the amplify push command to deploy your changes to create the corresponding AWS backend resources.

When you’re prompted about code generation for your API, choose Yes. You can accept all default options. This generates queries, mutations, subscriptions, and boilerplate code for the Amplify libraries to consume. For more information, see Codegen in the Amplify CLI docs.

Accessing the function from your project

Now that your function and API are configured, you can access them through the API class, which is part of the Amplify JavaScript Library.

Open App.js and add the following import and call to Amplify API as shown below:

import awsconfig from './aws-exports';
import { API, graphqlOperation } from "aws-amplify";
import { addEntry }  from './graphql/mutations';
API.configure(awsconfig);

const entry = { id: "1", email: "hello@example.com", createdAt: "2019-5-29" };
const data = await API.graphql(graphqlOperation(addEntry, entry));
console.log(data);
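Note that await is only valid inside an async function, so in your component you would wrap the call; a minimal sketch (the callAddEntry helper name and email value are illustrative):

async function callAddEntry() {
  const entry = { id: "1", email: "hello@example.com", createdAt: "2019-5-29" };
  try {
    // Runs the addEntry mutation generated by the Amplify CLI.
    const data = await API.graphql(graphqlOperation(addEntry, entry));
    console.log(data);
  } catch (err) {
    console.error("addEntry failed", err);
  }
}

callAddEntry();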

Running the app

Now that you have your application code complete, run the application and verify that the API call outputs “Success”.

Setting Amazon DynamoDB custom indexes in your GraphQL schemas

When building an application on top of DynamoDB, it helps to first think about access patterns. The new @key directive, which is a part of the GraphQL transformer in the Amplify CLI, makes it simple to configure complex key structures in DynamoDB that can fit your access patterns.

Let’s say you are using DynamoDB as a backend for your GraphQL API. The initial GraphQL schema to represent the @model types Customer and Item is shown below:

type Customer @model {
  email: String!
  username: String!
}

type Item @model {
    orderId: ID!
    status: Status!
    createdAt: AWSDateTime!
    name: String!
}

enum Status {
    DELIVERED
    IN_TRANSIT
    PENDING
    UNKNOWN
}

Access Patterns

For example, let’s say this application needs to facilitate the following access patterns:

  • Get customers by email – email is the primary key.
  • Get Items by status and by createdAt – orderId is the primary key.

Let’s walk through how you would accomplish these use cases and call the APIs for these queries in your React JavaScript application.

Assumption: You completed the pre-requisites and created your React JavaScript application as shown earlier in this post.

Create an API

First, we will create a GraphQL API using the ‘amplify add api’ command:

$ amplify add api
? Please select from one of the below mentioned services GraphQL
? Provide API name: myproject
? Choose an authorization type for the API API key
? Do you have an annotated GraphQL schema? No
? Do you want a guided schema creation? Yes
? What best describes your project: Single object with fields (e.g., “Todo” with ID, name, description)
? Do you want to edit the schema now? Yes
? Press enter to continue

This will open the schema.graphql file under <myproject>/amplify/backend/api/myproject/schema.graphql

Modifying the schema.graphql file

Let’s dive into the details of the new @key directive.

Query by primary key

Add the following Customer @model type to your schema.graphql

type Customer @model @key(fields: ["email"]) {
    email: String!
    username: String
}

For the Customer @model type, a @key without a name specifies the key for the DynamoDB table’s primary index. Here, the hash key for the table’s primary index is email. You can only provide one @key without a name per @model type.
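Codegen then typically produces a getCustomer query keyed on email; a minimal sketch of calling it (the email value is a placeholder):

import { getCustomer } from './graphql/queries';

const customer = await API.graphql(
  graphqlOperation(getCustomer, { email: "hello@example.com" })
);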

Query by composite keys (one or more fields are sort key)

type Item @model
    @key(fields: ["orderId", "status", "createdAt"])
    @key(name: "ByStatusAndCreatedAt", fields: ["status", "createdAt"], queryField: "itemsByStatusAndCreatedAt")
{
    orderId: ID!
    status: Status!
    createdAt: AWSDateTime!
    name: String!
}

enum Status {
    DELIVERED
    IN_TRANSIT
    PENDING
    UNKNOWN
}

Let’s break down the above Item @model type.

DynamoDB lets you query by at most two attributes. We added three fields to our first key directive, @key(fields: ["orderId", "status", "createdAt"]). The first field, orderId, will be the hash key as expected, but the sort key will be a new composite key named status#createdAt that is made up of the status and createdAt fields. This enables us to run queries using more than two attributes at a time.

Run the ‘amplify push’ command to deploy your changes to the AWS cloud. Because the schema includes the @key directives, the push creates the DynamoDB tables for Customer and Item with the specified primary indexes and sort keys, and generates resolvers that inject composite key values during queries and mutations.

$ amplify push
Current Environment: dev
? Do you want to generate code for your newly created GraphQL API Yes
? Choose the code generation language target javascript
? Enter the file name pattern of graphql queries, mutations and subscriptions src/graphql/**/*.js
? Do you want to generate/update all possible GraphQL operations - queries, mutations and subscriptions Yes
? Enter maximum statement depth [increase from default if your schema is deeply nested] 2

The file <myproject>/src/graphql/queries.js will contain the auto-generated queries for our intended access patterns: “Get customers by email” and “Get Items by status and by createdAt”.

Accessing the API from your application

Now that your API is configured, you can access it through the API class, which is part of the Amplify JavaScript Library. We will call the query for “Get Items by status and by createdAt”.

Open App.js and add the following import and call to Amplify API as shown below:

import awsconfig from './aws-exports';
import { API, graphqlOperation } from "aws-amplify";
import { itemsByStatusAndCreatedAt }  from './graphql/queries';
API.configure(awsconfig);

const entry = {status:'PENDING', createdAt: {beginsWith:"2019"}};
const data = await API.graphql(graphqlOperation(itemsByStatusAndCreatedAt, entry))
console.log(data)

To learn more, refer to the documentation here.

Feedback

We hope you like these new features! As always, let us know how we’re doing, and submit any requests in the Amplify Framework GitHub Repository. You can read more about AWS Amplify on the AWS Amplify website.


from AWS Mobile Blog

Using multiple authorization types with AWS AppSync GraphQL APIs

Written by Ionut Trestian, Min Bi, Vasuki Balasubramaniam, Karthikeyan, Manuel Iglesias, BG Yathi Raj, and Nader Dabit

Today, AWS announced that AWS AppSync now supports configuring more than one authorization type for GraphQL APIs. You can now configure a single GraphQL API to deliver private and public data. Private data requires authenticated access using authorization mechanisms such as IAM, Amazon Cognito User Pools, and OIDC. Public data does not require authenticated access and is delivered through authorization mechanisms such as API Keys.

You can also configure a single GraphQL API to deliver private data using more than one authorization type. For example, you can configure your GraphQL API to authorize some schema fields using OIDC, while authorizing other schema fields through Amazon Cognito User Pools or IAM.

AWS AppSync is a managed GraphQL service that simplifies application development. It allows you to create a flexible API to securely access, manipulate, and combine data from one or more data sources.

With today’s launch, you can configure additional authorization types while retaining the authorization settings of your existing GraphQL APIs. To ensure that there are no behavioral changes in your existing GraphQL APIs, your current authorization settings are set as the default. You can add additional authorization types using the AWS AppSync console, AWS CLI, or AWS CloudFormation templates.

To add more authorization types using the AWS AppSync console, launch the console, choose your GraphQL API, then choose Settings and scroll to the Authorization settings. The snapshot below shows a GraphQL API configured to use API Key as the default authorization type. It also has two Amazon Cognito user pools and AWS IAM as additional authorization types.

  • To add more authorization types using the AWS CLI, see the create-graphql-api section of the AWS CLI Command Reference.
  • To add more authorization types through AWS CloudFormation, see AWS::AppSync::GraphQLApi in the AWS CloudFormation User Guide; a short template sketch follows below.
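As a rough CloudFormation sketch, the additional authorization types sit alongside the default on the AWS::AppSync::GraphQLApi resource (the API name and user pool ID below are placeholders):

MyGraphQLApi:
  Type: AWS::AppSync::GraphQLApi
  Properties:
    Name: MultiAuthAPI
    AuthenticationType: API_KEY
    AdditionalAuthenticationProviders:
      - AuthenticationType: AWS_IAM
      - AuthenticationType: AMAZON_COGNITO_USER_POOLS
        UserPoolConfig:
          AwsRegion: us-west-2
          UserPoolId: us-west-2_example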

After configuring the authorization types for your GraphQL API, you can use schema directives to set the authorization types for one or more fields in your GraphQL schema. AWS AppSync now supports the following schema directives for authorization:

  • @aws_api_key—A field uses API_KEY for authorization.
  • @aws_iam—A field uses AWS_IAM for authorization.
  • @aws_oidc—A field uses OPENID_CONNECT for authorization.
  • @aws_cognito_user_pools—A field uses AMAZON_COGNITO_USER_POOLS for authorization.

The following code example shows using schema directives for authorization:

schema {
    query: Query
    mutation: Mutation
}

type Query {
    getPost(id: ID): Post
    getAllPosts: [Post]
    @aws_api_key
}

type Mutation {
    addPost(
        id: ID!
        author: String!
        title: String!
        content: String!
        url: String!
    ): Post!
}

type Post @aws_api_key @aws_iam {
    id: ID!
    author: String
    title: String
    content: String
    url: String
    ups: Int!
    downs: Int!
    version: Int!
}

Assume that AWS_IAM is the default authorization type for this GraphQL schema. This means that fields without directives are protected using AWS_IAM. An example is the getPost() field in Query.

Next, look at the getAllPosts() field in Query. This field is protected using @aws_api_key, which means that you can access this field using API keys. Directives work at the field level. This means that you must give API_KEY access to the Post type as well. This can be done in two ways:

  • Mark each field in the Post type with a directive.
  • Mark the Post type itself with the @aws_api_key directive.

For this example, I chose the latter option.

Now, to restrict access to fields in the Post type, you can configure directives for individual fields, as shown below. You can add a field called restrictedContent to Post and restrict access to it by using the @aws_iam directive. With this setup, AWS_IAM authenticated requests can access restrictedContent, while requests authenticated with API keys do not have access.

type Post @aws_api_key @aws_iam {
    id: ID!
    author: String
    title: String
    content: String
    url: String
    ups: Int!
    downs: Int!
    version: Int!
    restrictedContent: String!
    @aws_iam
}

Amplify CLI

Amplify CLI version 1.6.8 supports adding AWS AppSync APIs configured with multiple authorization types. To add an API with mixed authorization mode, you can run the following command:

$ amplify add codegen --apiId <API_ID>

✖ Getting API details
✔ Getting API details
Successfully added API to your Amplify project
? Enter the file name pattern of graphql queries, mutations and subscriptions graphql/**/*.graphql
? Do you want to generate/update all possible GraphQL operations - queries, mutations and subscriptions Yes
? Enter maximum statement depth [increase from default if your schema is deeply nested] 2
? Enter the file name for the generated code API.swift
? Do you want to generate code for your newly created GraphQL API Yes
✔ Downloaded the schema
✔ Generated GraphQL operations successfully and saved at graphql
✔ Code generated successfully and saved in file API.swift

Android & iOS client support

AWS also updated the Android and iOS clients to support multiple authorization types. You can enable multiple clients by setting the useClientDatabasePrefix flag to true. The awsconfiguration.json file is generated by the AWS AppSync console, and the Amplify CLI adds an entry in the AWS AppSync section. This is used to separate the caches used for operations such as query, mutation, and subscription.

Important: If you are an existing client, the useClientDatabasePrefix flag has a default value of false. When you use multiple clients, setting useClientDatabasePrefix to true changes the location of the caches used by the client. You must also migrate any data within the caches that you want to keep.

The following code examples highlight the new values in the awsconfiguration.json and the client code configurations.

awsconfiguration.json

The friendly_name illustrated here comes from a prompt in the Amplify CLI. There are four clients in this configuration that connect to the same API, but they use different AuthMode and ClientDatabasePrefix settings.

{
  "Version": "1.0",
  "AppSync": {
    "Default": {
      "ApiUrl": "https://xyz.us-west-2.amazonaws.com/graphql",
      "Region": "us-west-2",
      "AuthMode": "API_KEY",
      "ApiKey": "da2-xyz",
      "ClientDatabasePrefix": "friendly_name_API_KEY"
    },
    "friendly_name_AWS_IAM": {
      "ApiUrl": "https://xyz.us-west-2.amazonaws.com/graphql",
      "Region": "us-west-2",
      "AuthMode": "AWS_IAM",
      "ClientDatabasePrefix": "friendly_name_AWS_IAM"
    },
    "friendly_name_AMAZON_COGNITO_USER_POOLS": {
      "ApiUrl": "https://xyz.us-west-2.amazonaws.com/graphql",
      "Region": "us-west-2",
      "AuthMode": "AMAZON_COGNITO_USER_POOLS",
      "ClientDatabasePrefix": "friendly_name_AMAZON_COGNITO_USER_POOLS"
    },
    "friendly_name_OPENID_CONNECT": {
      "ApiUrl": "https://xyz.us-west-2.amazonaws.com/graphql",
      "Region": "us-west-2",
      "AuthMode": "OPENID_CONNECT",
      "ClientDatabasePrefix": "friendly_name_OPENID_CONNECT"
    }
  }
}


Android—Java

The useClientDatabasePrefix is added on the client builder, which signals to the builder that the ClientDatabasePrefix value should be used from the AWSConfiguration object (awsconfiguration.json).

AWSAppSyncClient client = AWSAppSyncClient.builder()
   .context(getApplicationContext())
   .awsConfiguration(new AWSConfiguration(getApplicationContext()))
   .useClientDatabasePrefix(true)
   .build();

iOS—Swift

The useClientDatabasePrefix is added to the AWSAppSyncCacheConfiguration, which reads the ClientDatabasePrefix value from the AWSAppSyncServiceConfig object (awsconfiguration.json).

let serviceConfig = try AWSAppSyncServiceConfig()
let cacheConfig = try AWSAppSyncCacheConfiguration(useClientDatabasePrefix: true,
                                                   appSyncServiceConfig: serviceConfig)
let clientConfig = try AWSAppSyncClientConfiguration(appSyncServiceConfig: serviceConfig,
                                                     cacheConfiguration: cacheConfig)

let client = try AWSAppSyncClient(appSyncConfig: clientConfig)

Public/private use case example

Here’s an example of how the newly introduced capabilities can be used in a client application.

Android—Java

The following code example creates a client factory to retrieve the client based on the need to operate in public (API_KEY) or private (AWS_IAM) authorization mode.

// AppSyncClientMode.java
public enum AppSyncClientMode {
    PUBLIC,
    PRIVATE
}

// ClientFactory.java
import java.util.HashMap;
import java.util.Map;

public class ClientFactory {

    private static final Map<AppSyncClientMode, AWSAppSyncClient> CLIENTS = new HashMap<>();

    public static AWSAppSyncClient getAppSyncClient(AppSyncClientMode choice) {
        return CLIENTS.get(choice);
    }

    public static void initClients(final Context context) {
        AWSConfiguration awsConfigPublic = new AWSConfiguration(context);
        CLIENTS.put(AppSyncClientMode.PUBLIC, AWSAppSyncClient.builder()
                .context(context)
                .awsConfiguration(awsConfigPublic)
                .useClientDatabasePrefix(true)
                .build());
        AWSConfiguration awsConfigPrivate = new AWSConfiguration(context);
        awsConfigPrivate.setConfiguration("friendly_name_AWS_IAM");
        CLIENTS.put(AppSyncClientMode.PRIVATE, AWSAppSyncClient.builder()
                .context(context)
                .awsConfiguration(awsConfigPrivate)
                .useClientDatabasePrefix(true)
                .credentialsProvider(AWSMobileClient.getInstance())
                .build());
    }
}

This is what the usage would look like.

ClientFactory.getAppSyncClient(AppSyncClientMode.PRIVATE).query(fooQuery).enqueue(...);

iOS—Swift

The following code example creates a client factory to retrieve the client based on the need to operate in public (API_KEY) or private (AWS_IAM) authorization mode.

public enum AppSyncClientMode {
    case `public`
    case `private`
}

public class ClientFactory {
    static var clients: [AppSyncClientMode:AWSAppSyncClient] = [:]

    class func getAppSyncClient(mode: AppSyncClientMode) -> AWSAppSyncClient? {
        return clients[mode];
    }

    class func initClients() throws {
        let serviceConfigAPIKey = try AWSAppSyncServiceConfig()
        let cacheConfigAPIKey = try AWSAppSyncCacheConfiguration(useClientDatabasePrefix: true, appSyncServiceConfig: serviceConfigAPIKey)
        let clientConfigAPIKey = try AWSAppSyncClientConfiguration(appSyncServiceConfig: serviceConfigAPIKey, cacheConfiguration: cacheConfigAPIKey)
        clients[AppSyncClientMode.public] = try AWSAppSyncClient(appSyncConfig: clientConfigAPIKey)

        let serviceConfigIAM = try AWSAppSyncServiceConfig(forKey: "friendly_name_AWS_IAM")
        let cacheConfigIAM = try AWSAppSyncCacheConfiguration(useClientDatabasePrefix: true, appSyncServiceConfig: serviceConfigIAM)
        let clientConfigIAM = try AWSAppSyncClientConfiguration(appSyncServiceConfig: serviceConfigIAM,cacheConfiguration: cacheConfigIAM)
        clients[AppSyncClientMode.private] = try AWSAppSyncClient(appSyncConfig: clientConfigIAM)
    }
}
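This is what the usage would look like (FooQuery stands in for one of your generated operations):

ClientFactory.getAppSyncClient(mode: .private)?.fetch(query: FooQuery()) { result, error in
    // handle result or error
}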

Conclusion

In this post, we showed how you can use the new multiple authorization type setting in AWS AppSync to allow separate public and private data authorization in your GraphQL API. While your current authorization settings are the default on existing GraphQL APIs, you can add additional authorization types using the AWS AppSync console, AWS CLI, or AWS CloudFormation templates.

from AWS Mobile Blog

Getting more visibility into GraphQL performance with AWS AppSync logs

Written by Shankar Raju, SDE at AWS & Nader Dabit, Sr. Developer Advocate at AWS.

Today, we are happy to announce that AWS AppSync now enables you to better understand the performance of your GraphQL requests and usage characteristics of your schema fields. You can easily identify resolvers with large latencies that may be the root cause of a performance issue. You can also identify the most and least frequently used fields in your schema and assess the impact of removing GraphQL fields. Offering support for these capabilities has been one of the top feature requests by our customers.

AWS AppSync is a managed GraphQL service that simplifies application development by letting you create a flexible API to securely access, manipulate, and combine data from one or more data sources. AWS AppSync now emits log events in a fully structured JSON format. This enables seamless integration with log analytics services such as Amazon CloudWatch Logs Insights and Amazon Elasticsearch Service (Amazon ES), and other log analytics solutions.

We have also added new fields to log events to increase your visibility into the performance and health of your GraphQL operations:

  • To let you search and analyze across log types, individual GraphQL requests, and multiple GraphQL APIs, we added new log fields (logType, requestId, and graphQLAPIId) to every log event that AWS AppSync emits.
  • To quickly identify errors and performance bottlenecks, we added new log fields to the existing request-level logs. These log fields contain information about the HTTP response status code (HTTPStatusResponseCode) and latency of a GraphQL request (latency).
  • To uniquely identify and run queries against any field in your GraphQL schema, we added new log fields to the existing field-level logs. These log fields contain information about the parent (parentType) and name (fieldName) of a GraphQL field.
  • To gain visibility into the time taken to resolve a GraphQL field, we also included the resolver ARN (resolverARN) in the tracing information of GraphQL fields in the field-level logs.
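Putting these fields together, a request-level log event has roughly the following shape (the values are illustrative only):

{
  "logType": "RequestSummary",
  "requestId": "1a2b3c4d-5678-90ab-cdef-EXAMPLE11111",
  "graphQLAPIId": "abcdefghijklmnop",
  "HTTPStatusResponseCode": 200,
  "latency": 45000000
}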

In this post, we show how you can get more visibility into the performance and health of your GraphQL operations using CloudWatch Logs Insights and Amazon ES. As a prerequisite, you must first enable field-level logging for your GraphQL API so that AWS AppSync can emit logs to CloudWatch Logs.

Analyzing your logs with CloudWatch Logs Insights

You can analyze your AWS AppSync logs with CloudWatch Logs Insights to identify performance bottlenecks and the root cause of operational issues. For example, you can find the resolvers with the maximum latency, the most (or least) frequently invoked resolvers, and the resolvers with the most errors.

There is no setup required to get started with CloudWatch Logs Insights. This is because AWS AppSync automatically emits logs into CloudWatch Logs when you enable field-level logging on your GraphQL API.

The following are examples of queries that you can run to get actionable insights into the performance and health of your GraphQL operations. For your convenience, we added these examples as sample queries in the CloudWatch Logs Insights console.

In the CloudWatch console, choose Logs, Insights, select the AWS AppSync log group for your GraphQL API, and then choose Sample queries, AWS AppSync queries.

Find top 10 GraphQL requests with maximum latency

fields requestId, latency
| filter logType = "RequestSummary"
| sort latency desc
| limit 10

Find top 10 resolvers with maximum latency

fields resolverArn, duration
| filter logType = "Tracing"
| sort duration desc
| limit 10

Find the most frequently invoked resolvers

fields ispresent(resolverArn) as isRes
| stats count() as invocationCount by resolverArn
| filter isRes and logType = "Tracing"
| sort invocationCount desc
| limit 10

Find resolvers with most errors in mapping templates

fields ispresent(resolverArn) as isRes
| stats count() as errorCount by resolverArn, logType
| filter isRes and (logType = "RequestMapping" or logType = "ResponseMapping") and fieldInError
| sort errorCount desc
| limit 10

The results of CloudWatch Logs Insights queries can be exported to CloudWatch dashboards. We added a CloudWatch dashboard template for AWS AppSync logs in the AWS Samples GitHub repository. You can import this template into CloudWatch dashboards to have continuous visibility into your GraphQL operations.

Analyzing your logs with Amazon ES

You can search, analyze, and visualize your AWS AppSync logs with Amazon ES to identify performance bottlenecks and root cause of operational issues. Not only can you identify resolvers with the maximum latency and errors, but you can also use Kibana to create dashboards with powerful visualizations. Kibana is an open source, data visualization and exploration tool available in Amazon ES.

To get started with Amazon ES:

  1. Create an Amazon ES cluster, if you don’t have one already.
  2. In the CloudWatch Logs console, select the log group for your GraphQL API.
  3. Choose Actions, Stream to Amazon Elasticsearch Service and select the Amazon ES cluster to which to stream your logs. You can also use a log filter pattern to stream a specific set of logs. The following example is the log filter pattern for streaming log events containing information about the request summary, tracing, and GraphQL execution summary for AWS AppSync logs.
{ ($.logType = "Tracing") || ($.logType = "RequestSummary") || ($.logType = "ExecutionSummary") }

You can create Kibana dashboards to help you identify performance bottlenecks and enable you to continuously monitor your GraphQL operations. For example, to debug a performance issue, start by visualizing the P90 latencies of your GraphQL requests and then drill into individual resolver latencies.

To build a Kibana dashboard containing these visualizations, use the following steps:

  1. Launch Kibana and choose Dashboard, Create new dashboard.
  2. Choose Add. For Visualization type, choose Line.
  3. For the filter pattern to search Elasticsearch indexes, use cwl*. Elasticsearch indexes logs streamed from CloudWatch Logs (including AWS AppSync logs) with a prefix of “cwl-”. To differentiate AWS AppSync logs from other CloudWatch logs sent to Amazon ES, we recommend adding an additional filter expression of graphQLAPIID.keyword=<AWS AppSync GraphQL API ID> to your search.
  4. To get GraphQL request data from AWS AppSync logs, choose Add Filter and use the filter expression logType.keyword=RequestSummary.
  5. Choose Metrics, Y-Axis. For Aggregation, choose Percentile; for Field, choose latency, and for Percents, enter a value of 90. This enables you to view GraphQL request latencies on the Y axis.
  6. Choose Buckets, X-Axis. For Aggregation, choose Date Histogram; for Field, choose @timestamp; and for Interval, choose Minute. This enables you to view GraphQL request latencies aggregated in 1-minute intervals. You can change the aggregation interval to view latencies aggregated at a coarser or finer grained time interval to match your data density.
  7. Save your widget and add it to the Kibana dashboard, as shown below:

  1. To build a widget that visualizes the P90 latency of each resolver, repeat steps 1, 2, 3, and 4 earlier. For step 4, use a filter expression of logType.keyword=Tracing to get resolver latencies from AWS AppSync Logs.
  2. Repeat step 5 using duration as the Field value and then repeat step 6.
  3. Choose Add sub-buckets, Split Series. For Sub Aggregation, use Terms and for Field, choose resolverArn.keyword. This enables you to visualize the latencies of individual resolvers.
  4. Save your widget and add it to the Kibana dashboard, as shown below:

Here’s a Kibana dashboard containing widgets for the P90 request latencies and individual resolver latencies.

Availability
The new logging capabilities are available in the following AWS Regions and you can start analyzing your logs today:

  • US East (N. Virginia)
  • US East (Ohio)
  • US West (Oregon)
  • Europe (Ireland)
  • Europe (Frankfurt)
  • Europe (London)
  • Asia Pacific (Tokyo)
  • Asia Pacific (Mumbai)
  • Asia Pacific (Seoul)
  • Asia Pacific (Sydney)
  • Asia Pacific (Singapore)

Log events emitted on May 8, 2019, or later use the new logging format. To analyze GraphQL requests before May 8, 2019, you can migrate older logs to the new format using a script available in the GitHub sample.

from AWS Mobile Blog

Getting started with the AWS Cloud Development Kit and Python

This post introduces you to the new Python bindings for the AWS Cloud Development Kit (AWS CDK).

What’s the AWS CDK, you might ask? Good question! You are probably familiar with the concept of infrastructure as code (IaC). When you think of IaC, you might think of things like AWS CloudFormation.

AWS CloudFormation allows you to define your AWS infrastructure in JSON or YAML files that can be managed within your source code repository, just like any other code. You can do pull requests and code reviews. When everything looks good, you can use these files as input into an automated process (CI/CD) that deploys your infrastructure changes.

The CDK actually builds on AWS CloudFormation and uses it as the engine for provisioning AWS resources. Rather than using a declarative language like JSON or YAML to define your infrastructure, the CDK lets you do that in your favorite imperative programming language. This includes languages such as TypeScript, Java, C#, and now Python.

About this post

  • Time to read: 19 minutes
  • Time to complete (estimated): 30 minutes
  • Cost to complete: $0 free tier (a tiny fraction of a penny if you aren’t free tier)
  • Learning level: Intermediate (200)
  • Services used: AWS CDK, AWS CloudFormation

Why would an imperative language be better than a declarative language? Well, it may not always be, but there are some real advantages: IDE integration and composition.

IDE integration

You probably have your favorite IDE for your favorite programming language. It provides all kinds of useful features that make you a more productive developer (for example, code completion, integrated documentation, or refactoring tools).

With CDK, you automatically get all of those same advantages when defining your AWS infrastructure. That’s because you’re doing it in the same language that you use for your application code.

Composition

One of the things that modern programming languages do well is composition. By that, I mean the creation of new, higher-level abstractions that hide the details of what is happening underneath and expose a much simpler API. This is one of the main things that we do as developers, creating higher levels of abstraction to simplify code.

It turns out that this is also useful when defining your infrastructure. The existing APIs to AWS services are, by design, fairly low level because they are trying to expose as much functionality as possible to a broad audience of developers. IaC tools like AWS CloudFormation expose a declarative interface, but that interface is at the same level of the API, so it’s equally complex.

In contrast, CDK allows you to compose new abstractions that hide details and simplify common use cases. Then, it packages that code up as a library in your language of choice so that others can easily take advantage.
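To make that concrete, here is a minimal sketch of composition in CDK Python: a construct that hides the details of creating several S3 buckets behind one parameter. The MultiBucket name is invented for illustration, and the import paths shown are the CDK v2 style (older releases used different module names):

from constructs import Construct
from aws_cdk import aws_s3 as s3

class MultiBucket(Construct):
    """One higher-level construct that fans out to many buckets."""

    def __init__(self, scope: Construct, id: str, *, num_buckets: int) -> None:
        super().__init__(scope, id)
        # Each bucket becomes a child of this construct in the construct tree.
        self.buckets = [
            s3.Bucket(self, f"Bucket-{i}") for i in range(num_buckets)
        ]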

One of the other neat things about the CDK is that it is designed to support multiple programming languages. The core of the system is written in TypeScript, but bindings for other languages can be added.

That brings me back to the topic of this post, the Python bindings for CDK.

Sample Python application

First, there is some installation that must happen. Rather than describe all of that here, see Getting Started with the AWS CDK.

Create the application

Now, create a sample application.

$ mkdir my_python_sample
$ cd my_python_sample
$ cdk init
Available templates:
* app: Template for a CDK Application
└─ cdk init app --language=[csharp|fsharp|java|python|typescript]
* lib: Template for a CDK Construct Library
└─ cdk init lib --language=typescript
* sample-app: Example CDK Application with some constructs
└─ cdk init sample-app --language=[python|typescript]

The first thing you do is create a directory that contains your Python CDK sample. The CDK provides a CLI tool to make it easy to perform many CDK-related operations. You can see that you are running the init command with no parameters.

The CLI is responding with information about all the things that the init command can do. There are different types of apps that you can initialize and there are a number of different programming languages available. Choose sample-app and python, of course.

$ cdk init --language python sample-app
Applying project template sample-app for python
Initializing a new git repository...
Executing python -m venv .env
Welcome to your CDK Python project!

You should explore the contents of this template. It demonstrates a CDK app with two instances of a stack (`HelloStack`) which also uses a user-defined construct (`HelloConstruct`). 

The `cdk.json` file tells the CDK Toolkit how to execute your app.

This project is set up like a standard Python project. The initialization process also creates a virtualenv within this project, stored under the .env directory.

After the init process completes, you can use the following steps to get your project set up.

'''
$ source .env/bin/activate
$ pip install -r requirements.txt
'''

At this point you can now synthesize the CloudFormation template for this code.

'''
$ cdk synth
'''

You can now begin exploring the source code, contained in the hello directory. There is also a very trivial test included that can be run like this:

'''
$ pytest
'''

To add additional dependencies, for example other CDK libraries, just add to your requirements.txt file and rerun the pip install -r requirements.txt command.

Useful commands:

cdk ls          list all stacks in the app
cdk synth       emits the synthesized CloudFormation template
cdk deploy      deploy this stack to your default AWS account/region
cdk diff        compare deployed stack with current state
cdk docs        open CDK documentation

Enjoy!

So, what just happened? Quite a bit, actually. The CDK CLI created some Python source code for your sample application. It also created other support files and infrastructure to make it easy to get started with CDK in Python. Here’s what your directory contains now:

(.env) $ tree
.
├── README.md
├── app.py
├── cdk.json
├── hello
│   ├── __init__.py
│   ├── hello_construct.py
│   └── hello_stack.py
├── requirements.txt
├── setup.py
└── tests
    ├── __init__.py
    └── unit
        ├── __init__.py
        └── test_hello_construct.py

Take a closer look at the contents of your directory:

  • README.md—The introductory README for this project.
  • app.py—The “main” for this sample application.
  • cdk.json—A configuration file for CDK that defines what executable CDK should run to generate the CDK construct tree.
  • hello—A Python module directory.
    • hello_construct.py—A custom CDK construct defined for use in your CDK application.
    • hello_stack.py—A custom CDK stack construct for use in your CDK application.
  • requirements.txt—This file is used by pip to install all of the dependencies for your application. In this case, it contains only -e . This tells pip to install the requirements specified in setup.py. It also tells pip to run python setup.py develop to install the code in the hello module so that it can be edited in place.
  • setup.py—Defines how this Python package would be constructed and what the dependencies are.
  • tests—Contains all tests.
  • unit—Contains unit tests.
    • test_hello_construct.py—A trivial test of the custom CDK construct created in the hello package. This is mainly to demonstrate how tests can be hooked up to the project.

You may have also noticed that as the init command was running, it mentioned that it had created a virtualenv for the project as well. I don’t have time to go into virtualenvs in detail for this post. They are basically a great tool in the Python world for isolating your development environments from your system Python environment and from other development environments.

All dependencies are installed within this virtual environment and have no effect on anything else on your machine. When you are done with this example, you can just delete the entire directory and everything goes away.

You don’t have to use the virtualenv created here, but I highly recommend that you do. Here’s how you would initialize your virtualenv and then install all of your dependencies.

$ source .env/bin/activate
(.env) $ pip install -r requirements.txt
...
(.env) $ pytest
============================= test session starts ==============================
platform darwin -- Python 3.7.0, pytest-4.4.0, py-1.8.0, pluggy-0.9.0
rootdir: /Users/garnaat/projects/cdkdev/my_sample
collected 1 item                                                              
tests/unit/test_hello_construct.py .                                     [100%]
=========================== 1 passed in 0.67 seconds ===========================

As you can see, you even have tests included, although they are admittedly simple at this point. They do give you a way to make sure your sample application and all of its dependencies are installed correctly.

Generate an AWS CloudFormation template

Okay, now that you know what’s here, try to generate an AWS CloudFormation template for the constructs that you are defining in your CDK app. You use the CDK Toolkit (the CLI) to do this.

$ cdk synth 
Multiple stacks selected (hello-cdk-1, hello-cdk-2), but output is directed to stdout. Either select one stack, or use --output to send templates to a directory. 
$

Hmm, that was unexpected. What does this mean? Well, as you will see in a minute, your CDK app actually defines two stacks, hello-cdk-1 and hello-cdk-2. The synth command can only print one stack at a time to stdout. It is telling you about the two stacks it found and asking you to either choose one of them or use the --output option to write the templates to a directory.

$ cdk synth hello-cdk-1
Resources:
  MyFirstQueueFF09316A:
    Type: AWS::SQS::Queue
    Properties:
      VisibilityTimeout: 300
    Metadata:
      aws:cdk:path: hello-cdk-1/MyFirstQueue/Resource
  MyFirstQueueMyFirstTopicSubscription774591B6:
    Type: AWS::SNS::Subscription
    Properties:
      Protocol: sqs
      TopicArn:
        Ref: MyFirstTopic0ED1F8A4
      Endpoint:
        Fn::GetAtt:
          - MyFirstQueueFF09316A
          - Arn
    Metadata:
      aws:cdk:path: hello-cdk-1/MyFirstQueue/MyFirstTopicSubscription/Resource
  MyFirstQueuePolicy596EEC78:
    Type: AWS::SQS::QueuePolicy
    Properties:
      PolicyDocument:
        Statement:
          - Action: sqs:SendMessage
            Condition:
              ArnEquals:
                aws:SourceArn:
                  Ref: MyFirstTopic0ED1F8A4
            Effect: Allow
            Principal:
              Service: sns.amazonaws.com
            Resource:
              Fn::GetAtt:
                - MyFirstQueueFF09316A
                - Arn
        Version: "2012-10-17"
      Queues:
        - Ref: MyFirstQueueFF09316A
    Metadata:
      aws:cdk:path: hello-cdk-1/MyFirstQueue/Policy/Resource
  MyFirstTopic0ED1F8A4:
    Type: AWS::SNS::Topic
    Properties:
      DisplayName: My First Topic
    Metadata:
      aws:cdk:path: hello-cdk-1/MyFirstTopic/Resource
  MyHelloConstructBucket0DAEC57E1:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Metadata:
      aws:cdk:path: hello-cdk-1/MyHelloConstruct/Bucket-0/Resource
  MyHelloConstructBucket18D9883BE:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Metadata:
      aws:cdk:path: hello-cdk-1/MyHelloConstruct/Bucket-1/Resource
  MyHelloConstructBucket2C1DA3656:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Metadata:
      aws:cdk:path: hello-cdk-1/MyHelloConstruct/Bucket-2/Resource
  MyHelloConstructBucket398A5DE67:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Metadata:
      aws:cdk:path: hello-cdk-1/MyHelloConstruct/Bucket-3/Resource
  MyUserDC45028B:
    Type: AWS::IAM::User
    Metadata:
      aws:cdk:path: hello-cdk-1/MyUser/Resource
  MyUserDefaultPolicy7B897426:
    Type: AWS::IAM::Policy
    Properties:
      PolicyDocument:
        Statement:
          - Action:
              - s3:GetObject*
              - s3:GetBucket*
              - s3:List*
            Effect: Allow
            Resource:
              - Fn::GetAtt:
                  - MyHelloConstructBucket0DAEC57E1
                  - Arn
              - Fn::Join:
                  - ""
                  - - Fn::GetAtt:
                        - MyHelloConstructBucket0DAEC57E1
                        - Arn
                    - /*
          - Action:
              - s3:GetObject*
              - s3:GetBucket*
              - s3:List*
            Effect: Allow
            Resource:
              - Fn::GetAtt:
                  - MyHelloConstructBucket18D9883BE
                  - Arn
              - Fn::Join:
                  - ""
                  - - Fn::GetAtt:
                        - MyHelloConstructBucket18D9883BE
                        - Arn
                    - /*
          - Action:
              - s3:GetObject*
              - s3:GetBucket*
              - s3:List*
            Effect: Allow
            Resource:
              - Fn::GetAtt:
                  - MyHelloConstructBucket2C1DA3656
                  - Arn
              - Fn::Join:
                  - ""
                  - - Fn::GetAtt:
                        - MyHelloConstructBucket2C1DA3656
                        - Arn
                    - /*
          - Action:
              - s3:GetObject*
              - s3:GetBucket*
              - s3:List*
            Effect: Allow
            Resource:
              - Fn::GetAtt:
                  - MyHelloConstructBucket398A5DE67
                  - Arn
              - Fn::Join:
                  - ""
                  - - Fn::GetAtt:
                        - MyHelloConstructBucket398A5DE67
                        - Arn
                    - /*
        Version: "2012-10-17"
      PolicyName: MyUserDefaultPolicy7B897426
      Users:
        - Ref: MyUserDC45028B
    Metadata:
      aws:cdk:path: hello-cdk-1/MyUser/DefaultPolicy/Resource
  CDKMetadata:
    Type: AWS::CDK::Metadata
    Properties:
      Modules: aws-cdk=0.27.0,@aws-cdk/assets=0.27.0,@aws-cdk/aws-autoscaling-api=0.27.0,@aws-cdk/aws-cloudwatch=0.27.0,@aws-cdk/aws-codepipeline-api=0.27.0,@aws-cdk/aws-ec2=0.27.0,@aws-cdk/aws-events=0.27.0,@aws-cdk/aws-iam=0.27.0,@aws-cdk/aws-kms=0.27.0,@aws-cdk/aws-lambda=0.27.0,@aws-cdk/aws-logs=0.27.0,@aws-cdk/aws-s3=0.27.0,@aws-cdk/aws-s3-notifications=0.27.0,@aws-cdk/aws-sns=0.27.0,@aws-cdk/aws-sqs=0.27.0,@aws-cdk/aws-stepfunctions=0.27.0,@aws-cdk/cdk=0.27.0,@aws-cdk/cx-api=0.27.0,@aws-cdk/region-info=0.27.0,jsii-runtime=Python/3.7.0

That’s a lot of YAML. 147 lines to be exact. If you take some time to study this, you can probably understand all of the AWS resources that are being created. You could probably even understand why they are being created. Rather than go through that in detail right now, focus on the Python code that makes up your CDK app. It’s a lot shorter and a lot easier to understand.

First, look at your “main,” app.py.

#!/usr/bin/env python3

from aws_cdk import cdk
from hello.hello_stack import MyStack

app = cdk.App()

MyStack(app, "hello-cdk-1", env={'region': 'us-east-2'})
MyStack(app, "hello-cdk-2", env={'region': 'us-west-2'})

app.run()

Well, that’s short and sweet. You are creating an App, adding two instances of some class called MyStack to the app, and then calling the run method of the App object.
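
Because MyStack is just a Python class, adding another copy of the stack in a third Region would be a one-line change. This is a hypothetical example; the stack name and Region are illustrative.

# Hypothetical: a third copy of the same stack in another Region.
MyStack(app, "hello-cdk-3", env={'region': 'eu-west-1'})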

Now find out what’s going on in the MyStack class.

from aws_cdk import (
    aws_iam as iam,
    aws_sqs as sqs,
    aws_sns as sns,
    cdk
)

from hello_construct import HelloConstruct

class MyStack(cdk.Stack):
    def __init__(self, app: cdk.App, id: str, **kwargs) -> None:
        super().__init__(app, id, **kwargs)

        queue = sqs.Queue(
            self, "MyFirstQueue",
            visibility_timeout_sec=300,
        )

        topic = sns.Topic(
            self, "MyFirstTopic",
            display_name="My First Topic"
        )

        topic.subscribe_queue(queue)

        hello = HelloConstruct(self, "MyHelloConstruct", num_buckets=4)
        user = iam.User(self, "MyUser")
        hello.grant_read(user)

This is a bit more interesting. This code is importing some CDK packages and then using those to create a few AWS resources.

First, you create an SQS queue called MyFirstQueue and set its visibility timeout to 300 seconds. Then you create an SNS topic called MyFirstTopic.

The next line of code is interesting. You subscribe the queue to the topic, and it all happens in one simple, easy-to-understand line of code.

If you have ever done this with the SDKs or with the CLI, you know that there are several steps to this process. You have to create an IAM policy that grants the topic permission to send messages to the queue, you have to create a topic subscription, and so on. You can see the details in the AWS CloudFormation template generated earlier.

All of that gets simplified into a single, readable line of code. That’s an example of what CDK constructs can do to hide complexity in your infrastructure.
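
For contrast, here is a rough sketch of the manual wiring that subscribe_queue replaces if you were to do the same thing yourself with boto3. The ARNs, queue URL, and account ID are placeholders, and error handling is omitted; it shows the moving parts rather than the exact resources CloudFormation creates.

import json

import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# Placeholders: in real code you would create or look up these resources first.
topic_arn = "arn:aws:sns:us-east-2:111122223333:MyFirstTopic"
queue_arn = "arn:aws:sqs:us-east-2:111122223333:MyFirstQueue"
queue_url = "https://sqs.us-east-2.amazonaws.com/111122223333/MyFirstQueue"

# Step 1: allow the topic to send messages to the queue.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "sns.amazonaws.com"},
        "Action": "sqs:SendMessage",
        "Resource": queue_arn,
        "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
    }],
}
sqs.set_queue_attributes(QueueUrl=queue_url, Attributes={"Policy": json.dumps(policy)})

# Step 2: create the subscription itself.
sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)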

The final thing happening here is that you are creating an instance of a HelloConstruct class. Look at the code behind this.


from aws_cdk import (
     aws_iam as iam,
     aws_s3 as s3,
     cdk,
)

class HelloConstruct(cdk.Construct):

    @property
    def buckets(self):
        return tuple(self._buckets)

    def __init__(self, scope: cdk.Construct, id: str, num_buckets: int) -> None:
        super().__init__(scope, id)
        self._buckets = []
        for i in range(0, num_buckets):
            self._buckets.append(s3.Bucket(self, f"Bucket-{i}"))

    def grant_read(self, principal: iam.IPrincipal):
        for b in self.buckets:
            b.grant_read(principal, "*")

This code shows an example of creating your own custom constructs in CDK that define arbitrary AWS resources under the hood while exposing a simple API.

Here, your construct accepts an integer parameter num_buckets in the constructor and then creates that number of buckets inside the scope passed in. It also exposes a grant_read method that automatically grants the IAM principal passed in read permissions to all buckets associated with your construct.
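
Because grant_read accepts an IAM principal, the construct isn’t limited to users. Inside the stack, you could grant the same read access to a role, for example. The following is a hypothetical sketch; the construct ID and service principal are illustrative, and it uses the aws_iam module already imported in hello_stack.py.

# Hypothetical: grant the construct's buckets to a role instead of a user.
role = iam.Role(
    self, "MyReadOnlyRole",
    assumed_by=iam.ServicePrincipal("ec2.amazonaws.com"),
)
hello.grant_read(role)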

Deploy the AWS CloudFormation templates

The whole point of CDK is to create AWS infrastructure, and so far you haven’t done any of that. So now use your CDK program to generate the AWS CloudFormation templates. Then, deploy those templates to your AWS account and validate that the right resources got created.

$ cdk deploy
This deployment will make potentially sensitive changes according to your current security approval level (--require-approval broadening).
Please confirm you intend to make the following modifications:

IAM Statement Changes
┌───┬───────────────┬────────┬───────────────┬───────────────┬────────────────┐
│   │ Resource      │ Effect │ Action        │ Principal     │ Condition      │
├───┼───────────────┼────────┼───────────────┼───────────────┼────────────────┤
│ + │ ${MyFirstQueu │ Allow  │ sqs:SendMessa │ Service:sns.a │ "ArnEquals": { │
│   │ e.Arn}        │        │ ge            │ mazonaws.com  │   "aws:SourceA │
│   │               │        │               │               │ rn": "${MyFirs │
│   │               │        │               │               │ tTopic}"       │
│   │               │        │               │               │ }              │
├───┼───────────────┼────────┼───────────────┼───────────────┼────────────────┤
│ + │ ${MyHelloCons │ Allow  │ s3:GetBucket* │ AWS:${MyUser} │                │
│   │ truct/Bucket- │        │ s3:GetObject* │               │                │
│   │ 0.Arn}        │        │ s3:List*      │               │                │
│   │ ${MyHelloCons │        │               │               │                │
│   │ truct/Bucket- │        │               │               │                │
│   │ 0.Arn}/*      │        │               │               │                │
├───┼───────────────┼────────┼───────────────┼───────────────┼────────────────┤
│ + │ ${MyHelloCons │ Allow  │ s3:GetBucket* │ AWS:${MyUser} │                │
│   │ truct/Bucket- │        │ s3:GetObject* │               │                │
│   │ 1.Arn}        │        │ s3:List*      │               │                │
│   │ ${MyHelloCons │        │               │               │                │
│   │ truct/Bucket- │        │               │               │                │
│   │ 1.Arn}/*      │        │               │               │                │
├───┼───────────────┼────────┼───────────────┼───────────────┼────────────────┤
│ + │ ${MyHelloCons │ Allow  │ s3:GetBucket* │ AWS:${MyUser} │                │
│   │ truct/Bucket- │        │ s3:GetObject* │               │                │
│   │ 2.Arn}        │        │ s3:List*      │               │                │
│   │ ${MyHelloCons │        │               │               │                │
│   │ truct/Bucket- │        │               │               │                │
│   │ 2.Arn}/*      │        │               │               │                │
├───┼───────────────┼────────┼───────────────┼───────────────┼────────────────┤
│ + │ ${MyHelloCons │ Allow  │ s3:GetBucket* │ AWS:${MyUser} │                │
│   │ truct/Bucket- │        │ s3:GetObject* │               │                │
│   │ 3.Arn}        │        │ s3:List*      │               │                │
│   │ ${MyHelloCons │        │               │               │                │
│   │ truct/Bucket- │        │               │               │                │
│   │ 3.Arn}/*      │        │               │               │                │
└───┴───────────────┴────────┴───────────────┴───────────────┴────────────────┘
(NOTE: There may be security-related changes not in this list. See http://bit.ly/cdk-2EhF7Np)

Do you wish to deploy these changes (y/n)?

Here, the CDK is telling you about the security-related changes that this deployment includes. It shows you the resources or ARN patterns involved, the actions being granted, and the IAM principals to which the grants apply. You can review these and press y when ready. You then see status reported about the resources being created.

hello-cdk-1: deploying...
hello-cdk-1: creating CloudFormation changeset...
0/12 | 8:41:14 AM | CREATE_IN_PROGRESS | AWS::S3::Bucket | MyHelloConstruct/Bucket-0 (MyHelloConstructBucket0DAEC57E1)
0/12 | 8:41:14 AM | CREATE_IN_PROGRESS | AWS::IAM::User | MyUser (MyUserDC45028B)
0/12 | 8:41:14 AM | CREATE_IN_PROGRESS | AWS::IAM::User | MyUser (MyUserDC45028B) Resource creation Initiated
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::CDK::Metadata | CDKMetadata
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::S3::Bucket | MyHelloConstruct/Bucket-3 (MyHelloConstructBucket398A5DE67)
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::S3::Bucket | MyHelloConstruct/Bucket-1 (MyHelloConstructBucket18D9883BE)
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::S3::Bucket | MyHelloConstruct/Bucket-0 (MyHelloConstructBucket0DAEC57E1) Resource creation Initiated
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::SQS::Queue | MyFirstQueue (MyFirstQueueFF09316A)
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::S3::Bucket | MyHelloConstruct/Bucket-2 (MyHelloConstructBucket2C1DA3656)
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::SNS::Topic | MyFirstTopic (MyFirstTopic0ED1F8A4)
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::S3::Bucket | MyHelloConstruct/Bucket-3 (MyHelloConstructBucket398A5DE67) Resource creation Initiated
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::S3::Bucket | MyHelloConstruct/Bucket-1 (MyHelloConstructBucket18D9883BE) Resource creation Initiated
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::SQS::Queue | MyFirstQueue (MyFirstQueueFF09316A) Resource creation Initiated
0/12 | 8:41:16 AM | CREATE_IN_PROGRESS | AWS::SNS::Topic | MyFirstTopic (MyFirstTopic0ED1F8A4) Resource creation Initiated
0/12 | 8:41:16 AM | CREATE_IN_PROGRESS | AWS::S3::Bucket | MyHelloConstruct/Bucket-2 (MyHelloConstructBucket2C1DA3656) Resource creation Initiated
1/12 | 8:41:16 AM | CREATE_COMPLETE | AWS::SQS::Queue | MyFirstQueue (MyFirstQueueFF09316A)
1/12 | 8:41:17 AM | CREATE_IN_PROGRESS | AWS::CDK::Metadata | CDKMetadata Resource creation Initiated
2/12 | 8:41:17 AM | CREATE_COMPLETE | AWS::CDK::Metadata | CDKMetadata
3/12 | 8:41:26 AM | CREATE_COMPLETE | AWS::SNS::Topic | MyFirstTopic (MyFirstTopic0ED1F8A4)
3/12 | 8:41:28 AM | CREATE_IN_PROGRESS | AWS::SNS::Subscription | MyFirstQueue/MyFirstTopicSubscription (MyFirstQueueMyFirstTopicSubscription774591B6)
3/12 | 8:41:29 AM | CREATE_IN_PROGRESS | AWS::SQS::QueuePolicy | MyFirstQueue/Policy (MyFirstQueuePolicy596EEC78)
3/12 | 8:41:29 AM | CREATE_IN_PROGRESS | AWS::SNS::Subscription | MyFirstQueue/MyFirstTopicSubscription (MyFirstQueueMyFirstTopicSubscription774591B6) Resource creation Initiated
4/12 | 8:41:30 AM | CREATE_COMPLETE | AWS::SNS::Subscription | MyFirstQueue/MyFirstTopicSubscription (MyFirstQueueMyFirstTopicSubscription774591B6)
4/12 | 8:41:30 AM | CREATE_IN_PROGRESS | AWS::SQS::QueuePolicy | MyFirstQueue/Policy (MyFirstQueuePolicy596EEC78) Resource creation Initiated
5/12 | 8:41:30 AM | CREATE_COMPLETE | AWS::SQS::QueuePolicy | MyFirstQueue/Policy (MyFirstQueuePolicy596EEC78)
6/12 | 8:41:35 AM | CREATE_COMPLETE | AWS::S3::Bucket | MyHelloConstruct/Bucket-0 (MyHelloConstructBucket0DAEC57E1)
7/12 | 8:41:36 AM | CREATE_COMPLETE | AWS::S3::Bucket | MyHelloConstruct/Bucket-3 (MyHelloConstructBucket398A5DE67)
8/12 | 8:41:36 AM | CREATE_COMPLETE | AWS::S3::Bucket | MyHelloConstruct/Bucket-1 (MyHelloConstructBucket18D9883BE)
9/12 | 8:41:36 AM | CREATE_COMPLETE | AWS::S3::Bucket | MyHelloConstruct/Bucket-2 (MyHelloConstructBucket2C1DA3656)
10/12 | 8:41:50 AM | CREATE_COMPLETE | AWS::IAM::User | MyUser (MyUserDC45028B)
10/12 | 8:41:53 AM | CREATE_IN_PROGRESS | AWS::IAM::Policy | MyUser/DefaultPolicy (MyUserDefaultPolicy7B897426)
10/12 | 8:41:53 AM | CREATE_IN_PROGRESS | AWS::IAM::Policy | MyUser/DefaultPolicy (MyUserDefaultPolicy7B897426) Resource creation Initiated
11/12 | 8:42:02 AM | CREATE_COMPLETE | AWS::IAM::Policy | MyUser/DefaultPolicy (MyUserDefaultPolicy7B897426)
12/12 | 8:42:03 AM | CREATE_COMPLETE | AWS::CloudFormation::Stack | hello-cdk-1

✅ hello-cdk-1

Stack ARN:
arn:aws:cloudformation:us-east-2:433781611764:stack/hello-cdk-1/87482f50-6c27-11e9-87d0-026465bb0bfc

At this point, the CLI presents you with another summary of IAM changes and asks you to confirm. This is because your CDK sample application creates two stacks in two different AWS Regions. Approve the changes for the second stack and you see similar status output.

Clean up

Now you can use the AWS Management Console to look at the resources that were created and validate that it all makes sense. After you are finished, you can easily destroy all of these resources with a single command.

$ cdk destroy
Are you sure you want to delete: hello-cdk-2, hello-cdk-1 (y/n)? y

hello-cdk-2: destroying...
   0 | 8:48:31 AM | DELETE_IN_PROGRESS   | AWS::CloudFormation::Stack | hello-cdk-2 User Initiated
   0 | 8:48:33 AM | DELETE_IN_PROGRESS   | AWS::CDK::Metadata     | CDKMetadata 
   0 | 8:48:33 AM | DELETE_IN_PROGRESS   | AWS::IAM::Policy       | MyUser/DefaultPolicy (MyUserDefaultPolicy7B897426) 
   0 | 8:48:33 AM | DELETE_IN_PROGRESS   | AWS::SNS::Subscription | MyFirstQueue/MyFirstTopicSubscription (MyFirstQueueMyFirstTopicSubscription774591B6) 
   0 | 8:48:33 AM | DELETE_IN_PROGRESS   | AWS::SQS::QueuePolicy  | MyFirstQueue/Policy (MyFirstQueuePolicy596EEC78) 
   1 | 8:48:34 AM | DELETE_COMPLETE      | AWS::SQS::QueuePolicy  | MyFirstQueue/Policy (MyFirstQueuePolicy596EEC78) 
   2 | 8:48:34 AM | DELETE_COMPLETE      | AWS::SNS::Subscription | MyFirstQueue/MyFirstTopicSubscription (MyFirstQueueMyFirstTopicSubscription774591B6) 
   3 | 8:48:34 AM | DELETE_COMPLETE      | AWS::IAM::Policy       | MyUser/DefaultPolicy (MyUserDefaultPolicy7B897426) 
   4 | 8:48:35 AM | DELETE_COMPLETE      | AWS::CDK::Metadata     | CDKMetadata 
   4 | 8:48:35 AM | DELETE_IN_PROGRESS   | AWS::IAM::User         | MyUser (MyUserDC45028B) 
   4 | 8:48:36 AM | DELETE_IN_PROGRESS   | AWS::SNS::Topic        | MyFirstTopic (MyFirstTopic0ED1F8A4)
   4 | 8:48:36 AM | DELETE_SKIPPED       | AWS::S3::Bucket        | MyHelloConstruct/Bucket-0 (MyHelloConstructBucket0DAEC57E1) 
   4 | 8:48:36 AM | DELETE_SKIPPED       | AWS::S3::Bucket        | MyHelloConstruct/Bucket-2 (MyHelloConstructBucket2C1DA3656) 
   4 | 8:48:36 AM | DELETE_SKIPPED       | AWS::S3::Bucket        | MyHelloConstruct/Bucket-1 (MyHelloConstructBucket18D9883BE) 
   4 | 8:48:36 AM | DELETE_IN_PROGRESS   | AWS::SQS::Queue        | MyFirstQueue (MyFirstQueueFF09316A) 
   4 | 8:48:36 AM | DELETE_SKIPPED       | AWS::S3::Bucket        | MyHelloConstruct/Bucket-3 (MyHelloConstructBucket398A5DE67) 
   5 | 8:48:36 AM | DELETE_COMPLETE      | AWS::SNS::Topic        | MyFirstTopic (MyFirstTopic0ED1F8A4) 
   6 | 8:48:36 AM | DELETE_COMPLETE      | AWS::IAM::User         | MyUser (MyUserDC45028B) 
 6 Currently in progress: hello-cdk-2, MyFirstQueueFF09316A

 ✅  hello-cdk-2: destroyed
hello-cdk-1: destroying...
   0 | 8:49:38 AM | DELETE_IN_PROGRESS   | AWS::CloudFormation::Stack | hello-cdk-1 User Initiated
   0 | 8:49:40 AM | DELETE_IN_PROGRESS   | AWS::CDK::Metadata     | CDKMetadata 
   0 | 8:49:40 AM | DELETE_IN_PROGRESS   | AWS::IAM::Policy       | MyUser/DefaultPolicy (MyUserDefaultPolicy7B897426) 
   0 | 8:49:40 AM | DELETE_IN_PROGRESS   | AWS::SQS::QueuePolicy  | MyFirstQueue/Policy (MyFirstQueuePolicy596EEC78) 
   0 | 8:49:40 AM | DELETE_IN_PROGRESS   | AWS::SNS::Subscription | MyFirstQueue/MyFirstTopicSubscription (MyFirstQueueMyFirstTopicSubscription774591B6) 
   1 | 8:49:41 AM | DELETE_COMPLETE      | AWS::IAM::Policy       | MyUser/DefaultPolicy (MyUserDefaultPolicy7B897426) 
   2 | 8:49:41 AM | DELETE_COMPLETE      | AWS::SQS::QueuePolicy  | MyFirstQueue/Policy (MyFirstQueuePolicy596EEC78) 
   3 | 8:49:41 AM | DELETE_COMPLETE      | AWS::SNS::Subscription | MyFirstQueue/MyFirstTopicSubscription (MyFirstQueueMyFirstTopicSubscription774591B6) 
   4 | 8:49:42 AM | DELETE_COMPLETE      | AWS::CDK::Metadata     | CDKMetadata 
   4 | 8:49:42 AM | DELETE_IN_PROGRESS   | AWS::IAM::User         | MyUser (MyUserDC45028B) 
   4 | 8:49:42 AM | DELETE_SKIPPED       | AWS::S3::Bucket        | MyHelloConstruct/Bucket-2 (MyHelloConstructBucket2C1DA3656) 
   4 | 8:49:42 AM | DELETE_SKIPPED       | AWS::S3::Bucket        | MyHelloConstruct/Bucket-3 (MyHelloConstructBucket398A5DE67) 
   4 | 8:49:42 AM | DELETE_SKIPPED       | AWS::S3::Bucket        | MyHelloConstruct/Bucket-0 (MyHelloConstructBucket0DAEC57E1) 
   4 | 8:49:42 AM | DELETE_IN_PROGRESS   | AWS::SNS::Topic        | MyFirstTopic (MyFirstTopic0ED1F8A4) 
   4 | 8:49:42 AM | DELETE_SKIPPED       | AWS::S3::Bucket        | MyHelloConstruct/Bucket-1 (MyHelloConstructBucket18D9883BE) 
   5 | 8:49:42 AM | DELETE_COMPLETE      | AWS::IAM::User         | MyUser (MyUserDC45028B) 
   5 | 8:49:42 AM | DELETE_IN_PROGRESS   | AWS::SQS::Queue        | MyFirstQueue (MyFirstQueueFF09316A) 
   6 | 8:49:43 AM | DELETE_COMPLETE      | AWS::SNS::Topic        | MyFirstTopic (MyFirstTopic0ED1F8A4) 
 6 Currently in progress: hello-cdk-1, MyFirstQueueFF09316A
   7 | 8:50:43 AM | DELETE_COMPLETE      | AWS::SQS::Queue        | MyFirstQueue (MyFirstQueueFF09316A)

 ✅  hello-cdk-1: destroyed

Notice the DELETE_SKIPPED entries for the four S3 buckets. The synthesized template marks them with DeletionPolicy: Retain, so CloudFormation intentionally leaves them behind; if you don’t want to keep them, empty and delete those buckets manually.

Conclusion

In this post, I introduced you to the AWS Cloud Development Kit. You saw how it enables you to define your AWS infrastructure in modern programming languages like TypeScript, Java, C#, and now Python. I showed you how to use the CDK CLI to initialize a new sample application in Python, and walked you through the project structure. I taught you how to use the CDK to synthesize your Python code into AWS CloudFormation templates and deploy them through AWS CloudFormation to provision AWS infrastructure. Finally, I showed you how to clean up these resources when you’re done.

Now it’s your turn. Go build something amazing with the AWS CDK for Python! To help get you started, see the following resources:

The CDK and the Python language binding are currently in developer preview, so I’d love to get feedback on what you like, and where AWS can do better. The team lives on GitHub at https://github.com/awslabs/aws-cdk where it’s easy to get directly in touch with the engineers building the CDK. Raise an issue if you discover a bug or want to make a feature request. Join the conversation on the aws-cdk Gitter channel to ask questions.


from AWS Developer Blog https://aws.amazon.com/blogs/developer/getting-started-with-the-aws-cloud-development-kit-and-python/

Use the Amplify Console with incoming webhooks to trigger deployments

Use the Amplify Console with incoming webhooks to trigger deployments

Written by Nikhil Swaminathan, Sr. Product Manager (Tech) at AWS.

The Amplify Console recently launched support for incoming webhooks. This feature enables you to use third-party applications such as Contentful and Zapier to trigger deployments in the Amplify Console without requiring a code commit.

You can use headless CMS tools such as Contentful with the Amplify Console incoming webhook feature to trigger a deployment every time content is updated—for example, when a blog author publishes a new post. Modern CMSs are headless in nature, which gives developers the freedom to develop with any technology because the content itself doesn’t have a presentation layer. Content creators get the added benefit of publishing a single instance of the content to both web and mobile devices.

In this blog post, we set up Amplify Console to deploy updates every time new content is published.

1. Create a Contentful account using the Contentful CLI, and follow the steps in the getting started guide. The CLI helps you create a Contentful account, a Contentful project (called a space) with a sample blog content model, and a starter repository that’s downloaded to your local machine.

2. After the CLI creates a Contentful space, log in to your Contentful space at the Contentful website and choose ‘Settings > API Keys’.

3. The API keys were generated when you ran the CLI. Copy the Space ID and the Content Delivery API access token. You’ll need these to trigger content deployments.

4. Push the code to a Git repo of your choice (Amplify Console supports GitHub, BitBucket, GitLab, and CodeCommit).

Log in to the Amplify Console, connect your repo, and pick a branch. On the Build Settings page, enter the CONTENTFUL_DELIVERY_TOKEN and the CONTENTFUL_SPACE_ID into the environment variables section. These tokens are used by your app during the build to authenticate with the Contentful service. Review the changes, and choose Save and deploy. Your app builds and deploys to an amplifyapp.com URL.

5. Create an incoming webhook to publish content updates. Choose App Settings > Build Settings, and then choose Create webhook. This webhook enables you to trigger a build in the Amplify Console on every POST to the HTTP endpoint. After you create the webhook, copy the URL (it looks like https://webhooks.amplify…).

6. Go back to the Contentful dashboard, and choose Settings > Webhooks. Then choose  Add Webhook. Paste the webhook URL you copied from the Amplify Console into the URL section and update the Content Type to application/json. Choose Save.


7. We’re now ready to trigger a new build through a content update! Go to the Content tab on Contentful and add a new entry with the following fields—Name: Deploying to Amplify Console and Content Type: Blog Post. Enter the other required fields, and choose Publish.

8. The Amplify Console will kick off a new build with the newest post.

You can also use the incoming webhook feature to trigger builds from post-commit Git hooks or through daily build schedulers; a minimal sketch of scripting such a trigger follows below. We hope you like this new feature – learn more about the Amplify Console at https://console.amplify.aws.
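
As a minimal sketch, here is one way you might script such a trigger in Python; the webhook URL is a truncated placeholder for the one you copied from the Amplify Console.

import urllib.request

# Placeholder: paste the full incoming webhook URL from the Amplify Console.
WEBHOOK_URL = "https://webhooks.amplify.us-east-1.amazonaws.com/..."

request = urllib.request.Request(
    WEBHOOK_URL,
    data=b"{}",
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(request) as response:
    # A 200 status means the Amplify Console accepted the build trigger.
    print(response.status, response.reason)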

from AWS Mobile Blog

Node.js 6 is approaching End-of-Life – upgrade your AWS Lambda functions to the Node.js 10 LTS

Node.js 6 is approaching End-of-Life – upgrade your AWS Lambda functions to the Node.js 10 LTS

This blog was authored by Liz Parody, Developer Relations Manager at NodeSource.


Node.js 6.x (“Boron”), which has been maintained as a long-term stable (LTS) release line since fall of 2016, is reaching its scheduled end-of-life (EOL) on April 30, 2019. After the maintenance period ends, the Node.js 6 release line will no longer receive updates, including releases that address critical bugs, security fixes, patches, or other important updates.


Recently, AWS has been reminding users to upgrade AWS Lambda functions built on the Node.js 6 runtime to a newer version. This is because language runtimes that have reached EOL are unsupported in Lambda.

Requests for feature additions to this release line aren’t accepted. Continued use of the Node.js 6 runtime after April 30, 2019 increases your exposure to various risks, including the following:

  • Security vulnerabilities – Node.js contributors are constantly working to fix security flaws of all severity levels (low, moderate, and high). In the February 2019 Security Release, all actively maintained Node.js release lines were patched, including “Boron”. After April 30, security releases will no longer be applied to Node.js 6, increasing the potential for malicious attacks.
  • Software incompatibility – Newer versions of Node.js better support current best practices and newer design patterns. For example, the popular async/await pattern to interact with promises was first introduced in the Node.js 8 (“Carbon”) release line. “Boron” users can’t take advantage of this feature. If you don’t upgrade to a newer release line, you miss out on features and improvements that enable you to write better, more performant applications.
  • Compliance issues – This risk applies most to teams in highly regulated industries such as healthcare, finance, or ecommerce. It also applies to those who deal with sensitive data such as personally identifiable information (PII). Exposing these types of data to unnecessary risk can result in severe consequences, ranging from extended legal battles to hefty fines.
  • Poor performance and reliability – The Node.js 10 (“Dubnium”) runtime is significantly faster than Node.js 6, with the capacity to perform twice as many operations per second. Lambda is an especially popular choice for applications that must deliver low latency and high performance. Upgrading to a newer version of the Node.js runtime is a relatively painless way to improve the performance of your application.
  • Higher operating costs – The performance benefits of the Node.js 10 runtime compared to Node.js 6 can directly translate to reduced operational costs. Aside from missing the day-to-day savings, running an unmaintained version of the Node.js runtime also significantly increases the likelihood of unexpected costs associated with an outage or critical issue.

Key differences between Node.js 6 and Node.js 10

Metrics provided by the Node.js Benchmarking working group highlight the performance benefits of upgrading from Node.js 6 to the most recent LTS release line, Node.js 10:

  • Operations per second are nearly two times higher in Node.js 10 versus Node.js 6.
  • Latency has decreased by 65% in Node.js 10 versus Node.js 6.
  • The footprint after load is 35% lower in Node.js 10 versus Node.js 6, resulting in improved performance in the event of a cold start.

While benchmarks don’t always reflect real-world results, the trend is clear that performance is increasing in each new Node.js release. [Data Source]

The most recent LTS release line is Node.js 10 (“Dubnium”). This release line features several enhancements and improvements over earlier versions, including the following:

  • Node.js 10 is the first release line to upgrade to OpenSSL version 1.1.0.
  • Native support for HTTP/2, first added to the Node.js 8 LTS release line, was stabilized in Node.js 10. It offers massive performance improvements over HTTP/1 (including reduced latency and minimized protocol overhead), and adds support for request prioritization and server push.
  • Node.js 10 introduces new JavaScript language capabilities, such as Function.prototype.toString() and mitigations for side-channel vulnerabilities, to help prevent information leaks.

“While there are a handful of new features, the standout changes in Node.js 10.0.0 are improvements to error handling and diagnostics that will improve the overall developer experience.” James Snell, a member of the Node.js Technical Steering Committee (TSC) [Quote source]

Upgrade using the N|Solid Lambda layer

AWS doesn’t currently offer the Node.js 10 runtime in Lambda. However, you may want to test the Node.js 10 runtime version in a development or staging environment before rolling out updates to production Lambda functions.

Before AWS adds the Node.js 10 runtime version for Lambda, NodeSource’s N|Solid runtime is available for use as a Lambda layer. It includes a 100%-compatible version for the Node.js 10 LTS release line.

If you install N|Solid as a Lambda layer, you can begin migration and testing before the Node.js 6 EOL date. You can also easily switch to the Node.js 10 runtime provided by AWS when it’s available. Choose between versions based on the Node.js 8 (“Carbon”) and 10 (“Dubnium”) LTS release lines. It takes just a few minutes to get up and running.

First, when you’re creating a function, choose Use custom runtime in function code or layer. (If you’re migrating an existing function, you can change the runtime for the function.)


Next, add a new Lambda layer, and choose Provide a layer version ARN. You can find the latest ARN for the N|Solid Lambda layer here. Enter the N|Solid runtime ARN for your AWS Region and Node.js version (Node.js 8 “Carbon” or Node.js 10 “Dubnium”). This is where you can use Node.js 10.


That’s it! Your Lambda function is now set up to use Node.js 10.

You can also update your functions to use the N|Solid Lambda layer with the AWS CLI.

To update an existing function:

aws lambda update-function-configuration --function-name <YOUR_FUNCTION_NAME> --layers arn:aws:lambda:<AWS_REGION>:800406105498:layer:nsolid-node-10:6 --runtime provided
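
If you manage your functions from Python instead of the AWS CLI, the equivalent boto3 call might look like the following sketch; the function name and Region are placeholders, as in the CLI example above.

import boto3

lambda_client = boto3.client("lambda")

# Placeholders: substitute your function name and the layer ARN for your Region.
lambda_client.update_function_configuration(
    FunctionName="<YOUR_FUNCTION_NAME>",
    Runtime="provided",
    Layers=["arn:aws:lambda:<AWS_REGION>:800406105498:layer:nsolid-node-10:6"],
)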

In addition to the Node.js 10 runtime, the Lambda layer provided by NodeSource includes N|Solid. N|Solid for AWS Lambda provides low-impact performance monitoring for Lambda functions. To take advantage of this feature, you can also sign up for a free NodeSource account. After you sign up, you just need to set your N|Solid license key as an environment variable in your Lambda function.
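
Continuing the hypothetical boto3 sketch above, setting the license key as an environment variable might look like the following; the variable name NSOLID_LICENSE_KEY is an assumption here, so check the N|Solid getting started guide for the exact name.

# Hypothetical: set the N|Solid license key on the function.
lambda_client.update_function_configuration(
    FunctionName="<YOUR_FUNCTION_NAME>",
    Environment={"Variables": {"NSOLID_LICENSE_KEY": "<YOUR_LICENSE_KEY>"}},
)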

That’s all you have to do to start monitoring your Node.js Lambda functions. After you add your license key, your Lambda function invocations should show up on the Functions tab of your N|Solid dashboard.

For more information, see our N|Solid for AWS Lambda getting started guide.

Upgrade to Node.js 10 LTS (“Dubnium”) outside of Lambda

Not only are workloads in Lambda affected; you must also consider other places where you’re running Node.js 6. Next, I review two more ways to upgrade your version of Node.js in other compute environments: using NVM, and installing manually.

Use NVM

One of the best practices for upgrading Node.js versions is using NVM. NVM, or Node Version Manager, lets you manage multiple active Node.js versions.

To install NVM on *nix systems, you can run the install script using cURL:

$ curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.34.0/install.sh | bash

or Wget:

$ wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.34.0/install.sh | bash

For Windows-based systems, you can use NVM for Windows.

After NVM is installed, you can manage your versions of Node.js with a few simple commands.

To download, compile, and install the latest release of Node.js:

$ nvm install node # "node" is an alias for the latest version

To install a specific version of Node.js:

$ nvm install 10.10.0 # or 8.5.0, 8.9.1, etc.

Upgrade manually

To upgrade Node.js without a tool like NVM, you can manually install a new version. NodeSource recommends upgrading with its Node.js Binary Distributions for Linux.

To install Node.js 10:

Using Ubuntu

$ curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash - 
$ sudo apt-get install -y nodejs

Using Amazon Linux

$ curl -sL https://rpm.nodesource.com/setup_10.x | sudo bash -
$ sudo yum install -y nodejs

Most production applications built on Node.js make use of LTS release lines. We highly recommend that you upgrade any application or Lambda function currently using the Node.js 6 runtime version to Node.js 10, the newest LTS version.

To hear more about the latest release line, check out NodeSource’s webinar, New and Exciting Features Landing in Node.js 12. This release line officially becomes the current LTS version in October 2019.

About the Author

Liz is a self-taught Software Engineer focused on JavaScript, and Developer Relations Manager at NodeSource. She organizes different community events such as JSConf Colombia, Pioneras Developers, Startup Weekend and has been a speaker at EmpireJS, MedellinJS, PionerasDev, and GDG.

She loves sharing knowledge, promoting the JavaScript and Node.js ecosystems, and participating in key tech events and conferences to enhance her knowledge and network.

Disclaimer
The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.


from AWS Developer Blog https://aws.amazon.com/blogs/developer/node-js-6-is-approaching-end-of-life-upgrade-your-aws-lambda-functions-to-the-node-js-10-lts/

V2 AWS SDK for Go adds Context to API operations

V2 AWS SDK for Go adds Context to API operations

The v2 AWS SDK for Go developer preview introduced a breaking change in release v0.8.0: it added a new parameter, context.Context, to the SDK’s Send and Paginate Next methods.

Context was added as a required parameter to the Send and Paginate Next methods to enable you to use the v2 SDK for Go in your application with cancellation and request tracing.

Using the Context pattern helps reduce the chance of code paths mistakenly dropping the Context, causing the cancellation and tracing chain to be lost. When the Context is lost, it can be difficult to track down the missing cancellation and tracing metrics within an application.

Migrating to v0.8.0

After you update your application to depend on v0.8.0 of the v2 SDK, you’ll encounter compile errors. This is because of the Context parameter that was added to the Send and Paginate Next methods.

If your application is already using the Context pattern, you can now pass the Context into Send and Paginate Next methods directly, instead of calling SetContext on the request returned by the client’s operation request method.

If you don’t need a Context with a timeout, deadline, cancellation, or an httptrace.ClientTrace within your application, you can pass context.Background() or context.TODO() instead.

Example code: before v0.8.0

The following code is an example of an application using the Amazon S3 service’s PutObject API operation with the v2 SDK before v0.8.0. The example uses the req.SetContext method to specify the Context for the PutObject operation.

// svc is an S3 service client, created elsewhere in the application.
func uploadObject(ctx context.Context, bucket, key string, obj io.ReadSeeker) error {
	req := svc.PutObjectRequest(&s3.PutObjectInput{
		Bucket: &bucket,
		Key:    &key,
		Body:   obj,
	})
	req.SetContext(ctx)

	_, err := req.Send()
	return err
}

Example code: updated to v0.8.0

To migrate the previous example code to v0.8.0 of the v2 SDK, remove the req.SetContext call and pass the Context directly to the Send method instead. This change makes the example code compatible with v0.8.0 of the v2 SDK.

// svc is the same S3 service client as in the previous example.
func uploadObject(ctx context.Context, bucket, key string, obj io.ReadSeeker) error {
	req := svc.PutObjectRequest(&s3.PutObjectInput{
		Bucket: &bucket,
		Key:    &key,
		Body:   obj,
	})

	_, err := req.Send(ctx)
	return err
}

What’s next for the v2 SDK for Go developer preview?

We’re working to improve usability and reduce pain points with the v2 SDK. Two specific areas we’re looking at are the SDK’s request lifecycle and error handling.

Improving the SDK’s request lifecycle will help reduce your application’s CPU and memory overhead when using the SDK. It also makes it easier for you to extend and modify the SDK’s core functionality.

For the SDK’s error handling, we’re investigating alternative approaches, such as typed errors for API operation exceptions. By using typed errors, your application can assert directly against the error type. This would reduce the need to do string comparisons for SDK API operation response errors.

See our issues on GitHub to share your feedback, questions, and feature requests, and to stay current with the v2 AWS SDK for Go developer preview as it moves to GA.

from AWS Developer Blog https://aws.amazon.com/blogs/developer/v2-aws-sdk-for-go-adds-context-to-api-operations/

New — Analyze and debug distributed applications interactively using AWS X-Ray Analytics

New — Analyze and debug distributed applications interactively using AWS X-Ray Analytics

Developers spend a lot of time searching through application logs, service logs, metrics, and traces to understand performance bottlenecks and to pinpoint their root causes. Correlating this information to identify its impact on end users comes with its own challenges of mining the data and performing analysis. This adds to the triaging time when using a distributed microservices architecture, where the call passes through several microservices. To address these challenges, AWS launched AWS X-Ray Analytics.

X-Ray helps you analyze and debug distributed applications, such as those built using a microservices architecture. Using X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root causes of performance issues and errors. It helps you debug and triage distributed applications wherever those applications are running, whether the architecture is serverless, containers, Amazon EC2, on-premises, or a mixture of all of these.

AWS X-Ray Analytics helps you quickly and easily understand:

  • Any latency degradation or increase in error or fault rates.
  • The latency experienced by customers in the 50th, 90th, and 95th percentiles.
  • The root cause of the issue at hand.
  • End users who are impacted, and by how much.
  • Comparisons of trends, based on different criteria. For example, you can understand if new deployments caused a regression.

In this post, I walk you through several use cases to see how you can use X-Ray Analytics to address these issues.

AWS X-Ray Analytics Walkthrough

The following is a service map of an online store application hosted on Amazon EC2 and serverless technologies like Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. Using this service map, you can easily see that there are faults in the “products” microservice in the selected time range.

Use X-Ray Analytics to explore the root cause and end-user impact. Looking at the response time distribution in the X-Ray Analytics console, you can determine that customers at the 50th percentile have latency of around 1.6 seconds, while customers at the 95th percentile have latency of more than 2.5 seconds.

This chart also helps you see the overall latency distribution of the requests in the selected group for the selected time range. You can learn more about X-Ray groups and their use cases in the Deep dive into AWS X-Ray groups and use cases post.

Now, you want to triage the increase in latency in requests that are taking more than 1.5 seconds and get to its root cause. Select those traces from the graph, as shown below. All the other views, like the Time series activity chart and the tables, update automatically based on the filter criteria. Also, a new Filtered traces trend line, indicated in blue, is added.

This Filtered trace set A trend line keeps updating as you add new criteria. For example, looking at the following tables, you can easily see that around 85% of these high-latency requests result in 500 errors, and Emma is the most impacted customer.

To focus on the traces that result in 500 errors, select that row from the table and see the filtered traces and other data points getting updated. In the Root Cause section, see the root cause of issues resulting in this increased latency. You can see that the DynamoDB wait in the “products” service has resulted in around 57% of the errors. You can also view individual traces that match the selected filters, as shown.

Selecting the Fault Root Cause using the cog icon lets you view the fault exception. It indicates that requests have exceeded the configured provisioned throughput capacity of the DynamoDB table, giving a clear indication of the root cause of the issue.

You just saw how you can use X-Ray Analytics to detect an increase in latency and understand the root cause of the issue and end-user impact.

Comparison of trends

Now, see how you can compare two trends using the compare functionality in X-Ray Analytics. You can use this functionality to compare any two filter expressions. For example, you can compare performance experience between two users, or compare and analyze whether a new deployment caused any regression.

Say that you have deployed a new Lambda function at 3:40 AM. You want to compare five minutes before and five minutes after the deployment was completed to understand whether any regression was caused, and what the impact is to end users.

Use the compare functionality provided in X-Ray Analytics. In this case, two different time ranges are represented. Filtered trace set A, starting from 3:35 AM to 3:40 AM, is shown in blue, and Filtered trace set B, starting from 3:40 AM to 3:45 AM, is shown in green.

In compare mode, the percentage deviation column that is automatically calculated clearly indicates that 500 errors decreased by 32 percentage points after the new deployment was completed. This gives a clear indication to the DevOps team that the new deployment didn’t cause any regression and was successful in reducing errors.

Identifying outlying users

Take an example in which one of the end users, “Ava,” complains about degraded performance in the application. None of the other users have reported this issue.

Use the compare feature in X-Ray Analytics to compare the response time of all users (blue trend line) with that of Ava (green trend line). Looking at the following response time distribution graph, it’s not easy to notice the difference in end-user experience based on the data.

However, as you look into the details of other attributes, like the annotations that you added during code instrumentation (annotation.ACL_CACHED) and the response time root cause, you can get actionable insights. You see that the latency is in the “api” service and is related to the time spent in the “generate_acl” module. You can correlate that to the ACL not being cached, based on the approximately 55% delta in Ava’s requests compared to those of other users.

You can also validate this by looking at the traces from the trace list and see that there is a 300-millisecond delay added by the “generate_acl” module. This shows how X-Ray Analytics helps correlate different attributes to understand the root cause of the issue.

Getting Started

To get started using X-Ray Analytics, visit the AWS Management Console for X-Ray. There is no additional charge for using this feature.

from AWS Developer Blog https://aws.amazon.com/blogs/developer/new-analyze-and-debug-distributed-applications-interactively-using-aws-x-ray-analytics/