AWS Tech Talk: Infrastructure is Code with the AWS CDK

If you missed the Infrastructure is Code with the AWS Cloud Development Kit (AWS CDK) Tech Talk last week, you can now watch it online on the AWS Online Tech Talks channel. Elad and I had a ton of fun building this little URL shortener sample app to demo the AWS CDK for Python. If you aren’t a Python developer, don’t worry! The Python code we use is easy to understand and translates directly to other languages. Plus, you can learn a lot about the AWS CDK and get a tour of the AWS Construct Library.

Specifically, in this tech talk, you can see us:

  • Build a URL shortener service using AWS Constructs for AWS Lambda, Amazon API Gateway, and Amazon DynamoDB.
  • Demonstrate the concept of shared infrastructure through a base CDK stack class that includes APIs for accessing shared resources, such as a domain name and a VPC (see the sketch after this list).
  • Use AWS Fargate to create a custom construct for a traffic generator.
  • Use a third-party construct library that automatically defines a monitoring dashboard and alarms for supported resources.
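
As a rough illustration of that shared-infrastructure pattern, a base stack class in the AWS CDK for Python might look like the following. This is a minimal sketch, not the code from the talk; the VPC tag used for the lookup is an assumption.

from aws_cdk import core
from aws_cdk import aws_ec2 as ec2

class BaseStack(core.Stack):
    """Hypothetical base class: application stacks subclass this to
    access shared resources such as a common VPC."""

    def __init__(self, scope: core.Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)
        # Assumes an existing VPC tagged shared=true in the target account.
        # (Vpc.from_lookup requires an explicit account/region env on the stack.)
        self.shared_vpc = ec2.Vpc.from_lookup(
            self, "SharedVpc", tags={"shared": "true"})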

If you’re hungry for more after watching the Tech Talk, check out these resources to learn more about the AWS CDK:

  • https://cdkworkshop.com — This fun online workshop takes an hour or two to complete and is available in both TypeScript and Python. You learn how to work with the AWS CDK by building an application using AWS Lambda, Amazon DynamoDB, and Amazon API Gateway.
  • https://www.github.com/aws-samples/aws-cdk-examples — After you know the basics of the AWS CDK, you’re ready to jump into the AWS CDK Examples repo! Learn more about the Constructs available in the AWS Construct Library. There are examples available for TypeScript, Python, C#, and Java. You can find the URL shortener sample app from our Tech Talk here, too.

I hope you enjoy learning about the AWS CDK. Let us know what other types of examples, apps, or construct libraries you want to see us build in demos or sample code!

Happy constructing!

Jason and Elad

 

from AWS Developer Blog https://aws.amazon.com/blogs/developer/aws-tech-talk-infrastructure-is-code-with-the-aws-cdk/

Setting up an Android application with AWS SDK for C++

The AWS SDK for C++ can build and run on many different platforms, including Android. In this post, I walk you through building and running a sample application on an Android device.

Overview

I cover the following topics:

  • Building and installing the SDK for C++ and creating a desktop application that implements the basic functionality.
  • Setting up the environment for Android development.
  • Building and installing the SDK for C++ for Android and porting the desktop application to the Android platform with the cross-compiled SDK.

Prerequisites

To get started, you need the following resources:

  • An AWS account
  • A development environment with Git and CMake to build and install the AWS SDK for C++

To set up the application on the desktop

Follow these steps to set up the application on your desktop:

  • Create an Amazon Cognito identity pool
  • Build and test the desktop application

Create an Amazon Cognito identity pool

For this demo, I use unauthenticated identities, which typically belong to guest users. To learn about unauthenticated and authenticated identities and choose the one that fits your business, check out Using Identity Pools.

In the Amazon Cognito console, choose Manage identity pools, Create new identity pool. Enter an identity pool name, like “My Android App with CPP SDK”, and choose Enable access to unauthenticated identities.

Next, choose Create Pool, View Details. Two Role Summary sections should appear, one for authenticated and the other for unauthenticated identities.

For the unauthenticated identities, choose View Policy Document, Edit. Under Action, add the following line:

s3:ListAllMyBuckets

After you have completed the preceding steps, the policy should read as follows:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "mobileanalytics:PutEvents",
                "cognito-sync:*"
            ],
            "Resource": "*"
        }
    ]
}

To finish the creation, choose Allow.

On the next page, in the Get AWS Credentials section, copy the identity pool ID and keep it somewhere to use later. Alternatively, you can find it later by choosing Edit identity pool in the Amazon Cognito console. The identity pool ID has the following format:

<region>:<uuid>
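
For example, a hypothetical identity pool ID might look like this:

us-east-1:1a2b3c4d-5e6f-7a8b-9c0d-1e2f3a4b5c6d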

Build and test the desktop application

Before building an Android application, you can build a regular application with the SDK for C++ in your desktop environment for testing purposes. Later, you modify the source code and CMake script to target Android.

Here’s how to build and install the SDK for C++ statically:

cd <workspace>
git clone https://github.com/aws/aws-sdk-cpp.git
mkdir build_sdk_desktop
cd build_sdk_desktop
cmake ../aws-sdk-cpp \
    -DBUILD_ONLY="identity-management;s3" \
    -DBUILD_SHARED_LIBS=OFF \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_INSTALL_PREFIX="<workspace>/install_sdk_desktop"
cmake --build .
cmake --build . --target install

If you install the SDK for C++ successfully, you can find libaws-cpp-sdk-*.a (or aws-cpp-sdk-*.lib for Windows) under <workspace>/install_sdk_desktop/lib/ (or <workspace>/install_sdk_desktop/lib64).

Next, build the application and link it to the library that you built. Create a folder <workspace>/app_list_all_buckets and place two files under this directory:

  • main.cpp (source file)
  • CMakeLists.txt (CMake file)
// main.cpp
#include <iostream>
#include <aws/core/Aws.h>
#include <aws/core/utils/Outcome.h>
#include <aws/core/utils/logging/AWSLogging.h>
#include <aws/core/client/ClientConfiguration.h>
#include <aws/core/auth/AWSCredentialsProvider.h>
#include <aws/identity-management/auth/CognitoCachingCredentialsProvider.h>
#include <aws/s3/S3Client.h>

using namespace Aws::Auth;
using namespace Aws::CognitoIdentity;
using namespace Aws::CognitoIdentity::Model;

static const char ALLOCATION_TAG[] = "ListAllBuckets";
static const char ACCOUNT_ID[] = "your-account-id";
static const char IDENTITY_POOL_ID[] = "your-cognito-identity-id";

int main()
{
    Aws::SDKOptions options;
    options.loggingOptions.logLevel = Aws::Utils::Logging::LogLevel::Debug;
    Aws::InitAPI(options);

    Aws::Client::ClientConfiguration config;
    auto cognitoIdentityClient = Aws::MakeShared<CognitoIdentityClient>(ALLOCATION_TAG, Aws::MakeShared<AnonymousAWSCredentialsProvider>(ALLOCATION_TAG), config);
    auto cognitoCredentialsProvider = Aws::MakeShared<CognitoCachingAnonymousCredentialsProvider>(ALLOCATION_TAG, ACCOUNT_ID, IDENTITY_POOL_ID, cognitoIdentityClient);

    Aws::S3::S3Client s3Client(cognitoCredentialsProvider, config);
    auto listBucketsOutcome = s3Client.ListBuckets();
    Aws::StringStream ss;
    if (listBucketsOutcome.IsSuccess())
    {
        ss << "Buckets:" << std::endl;
        for (auto const& bucket : listBucketsOutcome.GetResult().GetBuckets())
        {
            ss << "  " << bucket.GetName() << std::endl;
        }
    }
    else
    {
        ss << "Failed to list buckets." << std::endl;
    }
    std::cout << ss.str() << std::endl;
    Aws::ShutdownAPI(options);
    return 0;
}
# CMakeLists.txt
cmake_minimum_required(VERSION 3.3)
set(CMAKE_CXX_STANDARD 11)

project(list_all_buckets LANGUAGES CXX)
find_package(AWSSDK REQUIRED COMPONENTS s3 identity-management)
add_executable(${PROJECT_NAME} "main.cpp")
target_link_libraries(${PROJECT_NAME} ${AWSSDK_LINK_LIBRARIES})

Build and test this desktop application with the following commands:

cd <workspace>
mkdir build_app_desktop
cd build_app_desktop
cmake ../app_list_all_buckets \
    -DBUILD_SHARED_LIBS=ON \
    -DCMAKE_PREFIX_PATH="<workspace>/install_sdk_desktop"
cmake --build .
./list_all_buckets # or ./Debug/list_all_buckets.exe for Windows

The output should read as follows:

Buckets:
  <bucket_1>
  <bucket_2>
  ...

Now you have a desktop application, and you've accomplished this without touching anything related to Android. The next section covers the Android-specific steps.

To set up the application on Android with AWS SDK for C++

Follow these steps to set up the application on Android with the SDK for C++:

  • Set up Android Studio
  • Cross-compile the SDK for C++ and the library
  • Build and run the application in Android Studio

Set up Android Studio

First, download and install Android Studio. For more detailed instructions, see the Android Studio Install documentation.

Next, open Android Studio and create a new project. On the Choose your project screen, as shown in the following screenshot, choose Native C++, Next.

Complete all fields. In the following example, you build the SDK for C++ with Android API level 19, so the Minimum API Level is “API 19: Android 4.4 (KitKat)”.

Choose C++ 11 for C++ Standard and choose Finish for the setup phase.

The first time that you open Android Studio, you might see “missing NDK and CMake” errors during automatic installation. You can ignore these warnings for now and install the Android NDK and CMake manually, or accept the license to install them from within Android Studio, which should suppress the warnings.

After you choose Finish, you should get a sample application with Android Studio. This application displays a message on the screen of your device. For more details, see Create a new project with C++.

Starting from this sample, take the following steps to build your application:

  1. Cross-compile the SDK for C++ for Android.
  2. Modify the source code and CMake script to build list_all_buckets as a shared object library (liblist_all_buckets.so) rather than an executable. This library has one function, listAllBuckets(), which outputs all buckets.
  3. Specify the path to the library in the module’s build.gradle file so that the Android application can find it.
  4. Load this library in MainActivity with System.loadLibrary("list_all_buckets"), so that the Android application can use the listAllBuckets() function.
  5. Call the listAllBuckets() function from the onCreate() method of MainActivity.

The following sections describe each step in more detail.

Cross-compile the SDK for C++ and the library

Use the Android NDK to cross-compile the SDK for C++. This example uses version r19c. To check whether Android Studio has already downloaded the NDK, look in the following default locations:

  • Linux: ~/Android/Sdk/ndk-bundle
  • macOS: ~/Library/Android/sdk/ndk-bundle
  • Windows: C:\Users\<username>\AppData\Local\Android\Sdk\ndk\<version>

Alternatively, you can download the Android NDK directly.

To cross-compile SDK for C++, run the following code:

cd <workspace>
mkdir build_sdk_android
cd build_sdk_android
cmake ../aws-sdk-cpp -DNDK_DIR="<path-to-android-ndk>" \
    -DBUILD_SHARED_LIBS=OFF \
    -DCMAKE_BUILD_TYPE=Release \
    -DCUSTOM_MEMORY_MANAGEMENT=ON \
    -DTARGET_ARCH=ANDROID \
    -DANDROID_NATIVE_API_LEVEL=19 \
    -DBUILD_ONLY="identity-management;s3" \
    -DCMAKE_INSTALL_PREFIX="<workspace>/install_sdk_android"
cmake --build . --target CURL # This step is only required on Windows.
cmake --build .
cmake --build . --target install

On Windows, you might see the error message: “CMAKE_SYSTEM_NAME is ‘Android’ but ‘NVIDIA Nsight Tegra Visual Studio Edition’ is not installed.” In that case, install Ninja and change the generator from Visual Studio to Ninja by passing -GNinja as another parameter to your CMake command.

To build list_all_buckets as an Android-targeted shared library, you must change the source code and CMake script. More specifically, alter the source code as follows:

Replace the main() function with Java_com_example_mynativecppapplication_MainActivity_listAllBuckets(). In the Android application, the Java code calls this function through JNI (Java Native Interface). Your function name might differ, based on your package name and activity name. For this demo, the package name is com.example.mynativecppapplication, the activity name is MainActivity, and the native function called by the Java code is listAllBuckets().

Enable LogcatLogSystem, so that you can debug your Android application and see the output in the logcat console.

Your Android devices or emulators might be missing CA certificates, so you should push them to your devices and specify the path in the client configuration. This example uses CA certificates extracted from Mozilla in PEM format.

Download the certificate bundle.

Push this file to your Android devices:

# Change directory to the location of adb
cd <path-to-android-sdk>/platform-tools
# Replace "com.example.mynativecppapplication" with your package name
./adb shell mkdir -p /sdcard/Android/data/com.example.mynativecppapplication/certs
# push the PEM file to your devices
./adb push cacert.pem /sdcard/Android/data/com.example.mynativecppapplication/certs

Specify the path in the client configuration:

config.caFile = "/sdcard/Android/data/com.example.mynativecppapplication/certs/cacert.pem";

The complete source code looks like the following:

// main.cpp
#if __ANDROID__
#include <android/log.h>
#include <jni.h>
#include <aws/core/platform/Android.h>
#include <aws/core/utils/logging/android/LogcatLogSystem.h>
#endif
#include <iostream>
#include <aws/core/Aws.h>
#include <aws/core/utils/Outcome.h>
#include <aws/core/utils/logging/AWSLogging.h>
#include <aws/core/client/ClientConfiguration.h>
#include <aws/core/auth/AWSCredentialsProvider.h>
#include <aws/identity-management/auth/CognitoCachingCredentialsProvider.h>
#include <aws/s3/S3Client.h>

using namespace Aws::Auth;
using namespace Aws::CognitoIdentity;
using namespace Aws::CognitoIdentity::Model;

static const char ALLOCATION_TAG[] = "ListAllBuckets";
static const char ACCOUNT_ID[] = "your-account-id";
static const char IDENTITY_POOL_ID[] = "your-cognito-identity-id";

#ifdef __ANDROID__
extern "C" JNIEXPORT jstring JNICALL
Java_com_example_mynativecppapplication_MainActivity_listAllBuckets(JNIEnv* env, jobject classRef, jobject context)
#else
int main()
#endif
{
    Aws::SDKOptions options;
#ifdef __ANDROID__
    AWS_UNREFERENCED_PARAM(classRef);
    AWS_UNREFERENCED_PARAM(context);
    Aws::Utils::Logging::InitializeAWSLogging(Aws::MakeShared<Aws::Utils::Logging::LogcatLogSystem>(ALLOCATION_TAG, Aws::Utils::Logging::LogLevel::Debug));
#else
    options.loggingOptions.logLevel = Aws::Utils::Logging::LogLevel::Debug;
#endif
    Aws::InitAPI(options);

    Aws::Client::ClientConfiguration config;
#ifdef __ANDROID__
    config.caFile = "/sdcard/Android/data/com.example.mynativecppapplication/certs/cacert.pem";
#endif
    auto cognitoIdentityClient = Aws::MakeShared<CognitoIdentityClient>(ALLOCATION_TAG, Aws::MakeShared<AnonymousAWSCredentialsProvider>(ALLOCATION_TAG), config);
    auto cognitoCredentialsProvider = Aws::MakeShared<CognitoCachingAnonymousCredentialsProvider>(ALLOCATION_TAG, ACCOUNT_ID, IDENTITY_POOL_ID, cognitoIdentityClient);

    Aws::S3::S3Client s3Client(cognitoCredentialsProvider, config);
    auto listBucketsOutcome = s3Client.ListBuckets();
    Aws::StringStream ss;
    if (listBucketsOutcome.IsSuccess())
    {
        ss << "Buckets:" << std::endl;
        for (auto const& bucket : listBucketsOutcome.GetResult().GetBuckets())
        {
            ss << "  " << bucket.GetName() << std::endl;
        }
    }
    else
    {
        ss << "Failed to list buckets." << std::endl;
    }

#if __ANDROID__
    std::string allBuckets(ss.str().c_str());
    Aws::ShutdownAPI(options);
    return env->NewStringUTF(allBuckets.c_str());
#else
    std::cout << ss.str() << std::endl;
    Aws::ShutdownAPI(options);
    return 0;
#endif
}

Next, make the following changes to the CMake script:

  • Set the default values for the parameters used for the Android build, including:
    • The default Android API Level is 19
    • The default Android ABI is armeabi-v7a
    • Use libc++ as the standard library by default
    • Use android.toolchain.cmake supplied by Android NDK by default
  • Build list_all_buckets as a library rather than an executable
  • Link to the external libraries built in the previous step: zlib, ssl, crypto, and curl
# CMakeLists.txt
cmake_minimum_required(VERSION 3.3)
set(CMAKE_CXX_STANDARD 11)

if(TARGET_ARCH STREQUAL "ANDROID") 
    if(NOT NDK_DIR)
        set(NDK_DIR $ENV{ANDROID_NDK})
    endif()
    if(NOT IS_DIRECTORY "${NDK_DIR}")
        message(FATAL_ERROR "Could not find Android NDK (${NDK_DIR}); either set the ANDROID_NDK environment variable or pass the path in via -DNDK_DIR=..." )
    endif()
    if(NOT CMAKE_TOOLCHAIN_FILE)
        set(CMAKE_TOOLCHAIN_FILE "${NDK_DIR}/build/cmake/android.toolchain.cmake")
    endif()

    if(NOT ANDROID_ABI)
        set(ANDROID_ABI "armeabi-v7a")
        message(STATUS "Android ABI: none specified, defaulting to ${ANDROID_ABI}")
    else()
        message(STATUS "Android ABI: ${ANDROID_ABI}")
    endif()

    if(BUILD_SHARED_LIBS)
        set(ANDROID_STL "c++_shared")
    else()
        set(ANDROID_STL "c++_static")
    endif()

    if(NOT ANDROID_NATIVE_API_LEVEL)
        set(ANDROID_NATIVE_API_LEVEL "android-19")
        message(STATUS "Android API Level: none specified, defaulting to ${ANDROID_NATIVE_API_LEVEL}")
    else()
        message(STATUS "Android API Level: ${ANDROID_NATIVE_API_LEVEL}")
    endif()

    list(APPEND CMAKE_FIND_ROOT_PATH ${CMAKE_PREFIX_PATH})
endif()

project(list_all_buckets LANGUAGES CXX)
find_package(AWSSDK REQUIRED COMPONENTS s3 identity-management)
if(TARGET_ARCH STREQUAL "ANDROID")
    set(SUFFIX so)
    add_library(zlib STATIC IMPORTED)
    set_target_properties(zlib PROPERTIES IMPORTED_LOCATION ${EXTERNAL_DEPS}/zlib/lib/libz.a)
    add_library(ssl STATIC IMPORTED)
    set_target_properties(ssl PROPERTIES IMPORTED_LOCATION ${EXTERNAL_DEPS}/openssl/lib/libssl.a)
    add_library(crypto STATIC IMPORTED)
    set_target_properties(crypto PROPERTIES IMPORTED_LOCATION ${EXTERNAL_DEPS}/openssl/lib/libcrypto.a)
    add_library(curl STATIC IMPORTED)
    set_target_properties(curl PROPERTIES IMPORTED_LOCATION ${EXTERNAL_DEPS}/curl/lib/libcurl.a)
    add_library(${PROJECT_NAME} "main.cpp")
else()
    add_executable(${PROJECT_NAME} "main.cpp")
endif()
target_link_libraries(${PROJECT_NAME} ${AWSSDK_LINK_LIBRARIES})

Finally, build this library with the following commands:

cd <workspace>
mkdir build_app_android
cd build_app_android
cmake ../app_list_all_buckets \
    -DNDK_DIR="<path-to-android-ndk>" \
    -DBUILD_SHARED_LIBS=ON \
    -DTARGET_ARCH=ANDROID \
    -DCMAKE_BUILD_TYPE=Release \
    -DCMAKE_PREFIX_PATH="<workspace>/install_sdk_android" \
    -DEXTERNAL_DEPS="<workspace>/build_sdk_android/external"
cmake --build .

This build results in the shared library liblist_all_buckets.so under <workspace>/build_app_android/. It’s time to switch to Android Studio.

Build and run the application in Android Studio

First, the application must find the library (liblist_all_buckets.so) that you built and the standard library (libc++_shared.so). The default search path for JNI libraries is app/src/main/jniLibs/<android-abi>. Create the directory <your-android-application-root>/app/src/main/jniLibs/armeabi-v7a/ and copy the following files into it:

  • <workspace>/build_app_android/liblist_all_buckets.so
  • libc++_shared.so, from one of the following locations:
    • Linux: <android-ndk>/toolchains/llvm/prebuilt/linux-x86_64/sysroot/usr/lib/arm-linux-androideabi/libc++_shared.so
    • macOS: <android-ndk>/toolchains/llvm/prebuilt/darwin-x86_64/sysroot/usr/lib/arm-linux-androideabi/libc++_shared.so
    • Windows: <android-ndk>/toolchains/llvm/prebuilt/windows-x86_64/sysroot/usr/lib/arm-linux-androideabi/libc++_shared.so

Next, open the build.gradle file for your module and remove the externalNativeBuild{} block, because you are using prebuilt libraries instead of building the source with the Android application.

Then, edit MainActivity.java, which is under app/src/main/java/<package-name>/. Replace all occurrences of native-lib with list_all_buckets and all occurrences of stringFromJNI() with listAllBuckets(). The whole Java file looks like the following code example:

// MainActivity.java
package com.example.mynativecppapplication;

import android.support.v7.app.AppCompatActivity;
import android.os.Bundle;
import android.widget.TextView;

public class MainActivity extends AppCompatActivity {

    // Used to load the 'list_all_buckets' library on application startup.
    static {
        System.loadLibrary("list_all_buckets");
    }

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);

        // Example of a call to a native method
        TextView tv = findViewById(R.id.sample_text);
        tv.setText(listAllBuckets());
    }

    /**
    * A native method that is implemented by the 'list_all_buckets' native library,
    * which is packaged with this application.
    */
    public native String listAllBuckets();
}

Finally, don’t forget to grant internet access permission to your application by adding the following lines in the AndroidManifest.xml, located at app/src/main/:

<manifest xmlns:android="http://schemas.android.com/apk/res/android"
package="com.example.mynativecppapplication">
    <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
    ...
</manifest>

To run the application on an Android emulator, make sure that the CPU/ABI of the system image is armeabi-v7a. That's what you specified when you cross-compiled the SDK and the list_all_buckets library for the Android platform.

Run this application by choosing the Run icon or choosing Run, Run [app]. You should see that the application lists all buckets, as shown in the following screenshot.

Summary

With Android Studio and its included tools, you can cross-compile the AWS SDK for C++ and build a sample Android application that gets temporary Amazon Cognito credentials and lists all S3 buckets. Starting from this simple application, AWS hopes to see more exciting integrations with the SDK for C++.

As always, AWS welcomes all feedback and comments. Feel free to open an issue on GitHub if you have questions, or submit a pull request to contribute.

from AWS Developer Blog https://aws.amazon.com/blogs/developer/setting-up-an-android-application-with-aws-sdk-for-c/

Configuring boto to validate HTTPS certificates

We strongly recommend upgrading from boto to boto3, the latest major version of the AWS SDK for Python. The previous major version, boto, does not default to validating HTTPS certificates for Amazon S3 when you are:

  1. Using a Python version earlier than 2.7.9, or
  2. Using Python 2.7.9 or greater and connecting to S3 through a proxy

If you are unable to upgrade to boto3, you should configure boto to always validate HTTPS certificates. Be sure to test these changes. You can force HTTPS certificate validation by either:

  1. Setting https_validate_certificates to True in your boto config file (a sketch follows this list). For more information on how to use the boto config file, please refer to its documentation, or
  2. Setting validate_certs to True when instantiating an S3Connection:
    >>> from boto.s3.connection import S3Connection
    >>> conn = S3Connection(validate_certs=True)
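
For the first option, a minimal boto config file (commonly ~/.boto or /etc/boto.cfg; the exact location depends on your setup) could contain the following sketch:

    [Boto]
    https_validate_certificates = True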

To get the best experience, we always recommend remaining up-to-date with the latest version of the AWS SDKs and runtimes.

from AWS Developer Blog https://aws.amazon.com/blogs/developer/configure-boto-to-validate-https-certificates/

Introducing the ‘aws-rails-provisioner’ gem developer preview

AWS is happy to announce that the aws-rails-provisioner gem for Ruby is now in developer preview and available for you to try!

What is aws-rails-provisioner?

The new aws-rails-provisioner gem is a tool that helps you define and deploy your containerized Ruby on Rails applications on AWS. It currently only supports AWS Fargate.

aws-rails-provisioner is a command line tool that uses a configuration file, aws-rails-provisioner.yml, to generate AWS Cloud Development Kit (AWS CDK) stacks on your behalf. It automates provisioning AWS resources to run your containerized Ruby on Rails applications on AWS Fargate with a few commands. It can also generate a CI/CD AWS CodePipeline pipeline for your applications when you enable its CI/CD option.

Why use aws-rails-provisioner?

Moving a local Ruby on Rails application to a well-configured web application running on the cloud is a complicated task. While aws-rails-provisioner doesn’t change this into a “one-click” task, it helps ease the most monotonous and detail-oriented aspects of the job.

The aws-rails-provisioner gem shifts your primary focus to component-oriented definitions inside a concise aws-rails-provisioner.yml file. This file defines the AWS resources your application needs, such as container image environment, a database cluster engine, or Auto Scaling strategies.

The new gem handles default details—like VPC configuration, subnet placement, inbound traffic rules between databases and applications—for you. With CI/CD opt-in, aws-rails-provisioner can also generate and provision a predefined CI/CD pipeline, including a database migration phase.

For containerized Ruby on Rails applications that you already maintain on AWS, aws-rails-provisioner helps to keep the AWS infrastructure for your application as code in a maintainable way. This ease allows you to focus more on application development.

Prerequisites

Before using the preview gem, you must have the following resources:
  • A Ruby on Rails application with a Dockerfile
  • A Docker daemon set up locally
  • The AWS CDK installed (requires Node.js 8.11.x or later): npm install -g aws-cdk

Using aws-rails-provisioner

Getting started with aws-rails-provisioner is fast and easy.

Step 1: Install aws-rails-provisioner

You can download the aws-rails-provisioner preview gem from RubyGems.

To install the gem, run the following command:


gem install 'aws-rails-provisioner' -v 0.0.0.rc1

Step 2: Define your aws-rails-provisioner.yml file

The aws-rails-provisioner.yml configuration file allows you to define how and what components you want aws-rails-provisioner to provision to run your application image on Fargate.

An aws-rails-provisioner.yml file looks like the following:

version: '0'
vpc:
  max_azs: 2
services:
  rails_foo:
    source_path: ../sandbox/rails_foo
    fargate:
      desired_count: 3
      public: true
      envs:
        PORT: 80
        RAILS_LOG_TO_STDOUT: true
    db_cluster:
      engine: aurora-postgresql
      db_name: myrails_db
      instance: 2
    scaling:
      max_capacity: 2
      on_cpu:
        target_util_percent: 80
        scale_in_cool_down: 300
  rails_bar:
    ...

aws-rails-provisioner.yml file overview

The aws-rails-provisioner.yml file contains two parts: vpc and services. The vpc section defines networking settings, such as the Amazon VPC hosting your applications and databases. It can be as simple as:

vpc:
  max_azs: 3
  cidr: '10.0.0.0/21'
  enable_dns: true

Left unmodified, aws-rails-provisioner defines a default VPC with three Availability Zones containing public, private, and isolated subnets with a CIDR range of 10.0.0.0/21 and DNS enabled. If these default settings don’t meet your needs, you can configure settings yourself, such as in the following example, which defines the subnets with details:

vpc:
  subnets:
    application: # a subnet name
      cidr_mask: 24
      type: private
    ...

You can review the full range of VPC configuration options to meet your exact needs.

The services portion of aws-rails-provisioner.yml allows you to define your Rails applications, database clusters, and Auto Scaling policies. For every application, you add an entry under an identifier of your choice:

services:
  my_awesome_rails_app:
    source_path: ../path/to/awesome_app # relative path from `aws-rails-provisioner.yml`
    ...
  my_another_awesome_rails_app:
    source_path: ./path/to/another_awesome_app # relative path from `aws-rails-provisioner.yml`
    ...

When you run aws-rails-provisioner commands later, it takes the configuration values of a service (under fargate:, db_cluster:, and scaling:) to provision a Fargate service fronted by an Application Load Balancer. The database cluster and Auto Scaling policies are optional for a service.

Database cluster

The db_cluster portion of aws-rails-provisioner.yml defines database settings for your Rails application. It currently supports Aurora PostgreSQL, Aurora MySQL, and Aurora. You can specify the engine version by adding engine_version to the db_cluster configuration. You can also choose to provide a username for your databases; if you don't, aws-rails-provisioner automatically generates a username and password and stores them in AWS Secrets Manager.
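
For example, a db_cluster entry that pins the engine version might look like the following sketch (the version value is illustrative):

db_cluster:
  engine: aurora-mysql
  engine_version: '5.7.12' # illustrative version value
  db_name: myrails_db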

To enable storage encryption for the Amazon RDS database cluster, provide kms_key_arn with the AWS KMS key ARN you use for storage encryption:

my_awesome_rails_app:
  source_path: ../path/to/awesome_app
  db_cluster:
    engine: aurora-postgresql
    db_name: awesome_db
    username: myadmin
    kms_key_arn: '<your-kms-key-arn>' # placeholder ARN

You can review the full list of db_cluster: configuration options to meet your specific needs.

AWS Fargate

The fargate: portion of aws-rails-provisioner.yml defines the Fargate service and tasks that run your application image. For example:

my_awesome_rails_app:
  source_path: ../path/to/awesome_app
  fargate:
    public: true
    memory: 512
    cpu: 256
    container_port: 80
    envs:
      RAILS_ENV: ...
      RAILS_LOG_TO_STDOUT: ...
  ...

For HTTPS applications, you can provide certificate with a certificate ARN from AWS Certificate Manager. The certificate is automatically associated with the Application Load Balancer, and container_port is set to 443. You can also provide a domain name and domain zone for your application under domain_name and domain_zone. If you don't provide these elements, the application gets a default DNS address from the Application Load Balancer.
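
Putting these options together, a fargate entry for an HTTPS application might look like the following sketch; the ARN and domain values are placeholders:

fargate:
  public: true
  certificate: '<your-acm-certificate-arn>' # placeholder ARN
  domain_name: myapp.example.com
  domain_zone: example.com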

When providing environment variables for your application image, you don't have to define DATABASE_URL yourself; aws-rails-provisioner computes the value based on your db_cluster configuration. Make sure to update the config/database.yml file for your Rails application to recognize the DATABASE_URL environment variable.
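
For a typical Rails application, that means pointing config/database.yml at the environment variable, along the lines of this minimal sketch:

production:
  url: <%= ENV['DATABASE_URL'] %>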

You can review the full list of fargate: configuration options to meet your specific needs.

Scaling

You can also configure the Auto Scaling settings for your service. At this preview stage, you can configure scaling policies on_cpu, on_metric, on_custom_metric, on_memory, on_request, or on_schedule.

my_awesome_rails_app:
  source_path: ../path/to/awesome_app
  scaling:
    max_capacity: 10
    on_memory:
      target_util_percent: 80
      scale_out_cool_down: 200
    on_request:
      requests_per_target: 100000
      disable_scale_in: true
    ...

You can review the full list of scaling: configuration options to meet your specific needs.

Step 3: Build and deploy

With aws-rails-provisioner.yml defined, you can run a build command. Doing so bootstraps AWS CDK stacks in code, defining all the necessary AWS resources and connections for you.

Run the following:


aws-rails-provisioner build

This command initializes and builds a CDK project with stacks—installing all required CDK packages—leaving a deploy-ready project. By default, it generates an InitStack that defines the VPC and Amazon ECS cluster, hosting Fargate services. It also generates a FargateStack that defines a database cluster and a load-balanced, scaling Fargate service for each service entry.

When you enable --with-cicd, the aws-rails-provisioner also provides a pipeline stack containing source, build, database migration, and deploy stages for each service defined for you. You can enable CI/CD with the following command:


aws-rails-provisioner build --with-cicd

After the build completes, run the following deploy command to deploy all defined AWS resources:


aws-rails-provisioner deploy

Instead of deploying everything at the same time, you can deploy stack by stack or application by application:


# only deploys the stack that creates the VPC and ECS cluster
aws-rails-provisioner deploy --init

# deploys the fargate service and database cluster when defined
aws-rails-provisioner deploy --fargate

# deploy the CI/CD stack
aws-rails-provisioner deploy --cicd

# deploy only `my_awesome_rails_app` application
aws-rails-provisioner deploy --fargate --service my_awesome_rails_app

You can check on the status of your stacks by logging in to the AWS console and navigating to AWS CloudFormation. Deployment can take several minutes.

Completing deployment leaves your applications running on AWS Fargate, fronted by an Application Load Balancer.

Step 4: View AWS resources

To see your database cluster, log in to the Amazon RDS console.

To see an ECS cluster created, with Fargate Services and tasks running, you can also check the Amazon ECS console.

To view the application via DNS address or domain address, check the Application Load Balancing dashboard.

Any applications with databases need Rails migrations to work. The generated CI/CD stack contains a migration phase; the following CI/CD section contains additional details.

To view all the aws-rails-provisioner command line options, run:


aws-rails-provisioner -h

Step 5: Trigger the CI/CD pipeline

To trigger the pipeline that aws-rails-provisioner provisioned for you, you must commit your application source code and Dockerfile with AWS CodeBuild build specs into an AWS CodeCommit repository. The aws-rails-provisioner gem automatically creates this repository for you.

To experiment with application image build and database migration on your own, try these example build spec files from the aws-rails-provisioner GitHub repo.

Conclusion

Although the aws-rails-provisioner gem is currently in developer preview, it provides you with a powerful, time-saving tool. I would love for you to try it out and share feedback on how AWS can improve it before its final launch. As always, you can leave your thoughts and feedback on GitHub.

from AWS Developer Blog https://aws.amazon.com/blogs/developer/introducing-the-aws-rails-provisioner-gem-developer-preview/

Real-time streaming transcription with the AWS C++ SDK

Today, I’d like to walk you through how to use the AWS C++ SDK to leverage Amazon Transcribe streaming transcription. This service allows you to do speech-to-text processing in real time. Streaming transcription uses HTTP/2 technology to communicate efficiently with clients.

In this walkthrough, you build a command line application that captures audio from the computer’s microphone, sends it to Amazon Transcribe streaming, and prints out transcribed text as you speak. You use PortAudio (a third-party library) to capture and sample audio. PortAudio is a free, cross-platform library, so you should be able to build this on Windows, macOS, and Linux.

Note

Amazon Transcribe streaming transcription has a separate API from Amazon Transcribe, which also allows you to do speech-to-text, albeit not in real time.

Prerequisites

You must have the following tools installed to build the application:

  • CMake (version 3.13 or later; the build script below uses target_link_directories, which requires it)
  • A modern C++ compiler that supports C++11, a minimum of GCC 5.0, Clang 4.0, or Visual Studio 2015
  • Git
  • An HTTP/2 client
    • On *nix, you must have libcurl with HTTP/2 support installed on the system. To ensure that the version of libcurl you have supports HTTP/2, run the following command:
      • $ curl --version
      • You should see HTTP2 listed as one of the features.
    • On Windows, you must be running Windows 10.
  • An AWS account configured with the CLI

Walkthrough

The first step is to download and install PortAudio from source. If you're using Linux or macOS, you can instead use the system's package manager to install the library (for example, apt, yum, or Homebrew).

  1. Browse to http://www.portaudio.com/download.html and download the latest stable release.
  2. Unzip the archive to a PortAudio directory.
  3. If you’re running Windows, run the following commands to build and install the library.
$ cd portaudio
$ mkdir build
$ cd build
$ cmake ..
$ cmake --build . --config Release

Those commands should build both a DLL and a static library. PortAudio does not define an install target when building on Windows. In the Release directory, copy the file named portaudio_static_x64.lib and the file named portaudio.h to another temporary directory. You need both files for the subsequent steps.

  4. If you’re running on Linux or macOS, run the following commands instead.

$ cd portaudio
$ mkdir build
$ cd build
$ cmake .. -DCMAKE_BUILD_TYPE=Release
$ cmake --build .
$ cmake --build . --target install

For this demonstration, you can safely ignore any warnings you see while PortAudio builds.

The next step is to download and install the Amazon Transcribe Streaming C++ SDK. You can use vcpkg or Homebrew for this step, but here I show you how to build the SDK from source.

$ git clone https://github.com/aws/aws-sdk-cpp.git
$ cd aws-sdk-cpp
$ mkdir build
$ cd build
$ cmake .. -DBUILD_ONLY="transcribestreaming" -DCMAKE_BUILD_TYPE=Release -DBUILD_SHARED_LIBS=OFF
$ cmake --build . --config Release
$ sudo cmake --build . --config Release --target install

Now, you’re ready to write the command line application.

Before diving into the code, I’d like to explain a few things about the structure of the application.

First, you need to tell PortAudio to capture audio from the microphone and write the sampled bits to a stream. Second, you want to simultaneously consume that stream and send the bits you captured to the service. To do those two operations concurrently, the application uses multiple threads. The SDK and PortAudio create and manage the threads. PortAudio is responsible for the audio capturing thread, and the SDK is responsible for the API communication thread.

If you have used the C++ SDK asynchronous APIs before, then you might notice the new pattern introduced with the Amazon Transcribe Streaming API. Specifically, in addition to the callback parameter invoked on request completion, there’s now another callback parameter. The new callback is invoked when the stream is ready to be written to by the application. More on that later.

The application can be a single C++ source file. However, I split the logic into two source files: one file contains the SDK-related logic, and the other file contains the PortAudio specific logic. That way, you can easily swap out the PortAudio specific code and replace it with any other audio source that fits your use case. So, go ahead and create a new directory for the demo application and save the following files into it.

The first file (main.cpp) contains the logic of using the Amazon Transcribe Streaming SDK:

// main.cpp
#include <aws/core/Aws.h>
#include <aws/core/utils/threading/Semaphore.h>
#include <aws/transcribestreaming/TranscribeStreamingServiceClient.h>
#include <aws/transcribestreaming/model/StartStreamTranscriptionHandler.h>
#include <aws/transcribestreaming/model/StartStreamTranscriptionRequest.h>
#include <cstdio>

using namespace Aws;
using namespace Aws::TranscribeStreamingService;
using namespace Aws::TranscribeStreamingService::Model;

int SampleRate = 16000; // 16 kHz
int CaptureAudio(AudioStream& targetStream);

int main()
{
    Aws::SDKOptions options;
    options.loggingOptions.logLevel = Aws::Utils::Logging::LogLevel::Trace;
    Aws::InitAPI(options);
    {
        Aws::Client::ClientConfiguration config;
#ifdef _WIN32
        config.httpLibOverride = Aws::Http::TransferLibType::WIN_INET_CLIENT;
#endif
        TranscribeStreamingServiceClient client(config);
        StartStreamTranscriptionHandler handler;
        handler.SetTranscriptEventCallback([](const TranscriptEvent& ev) {
            for (auto&& r : ev.GetTranscript().GetResults()) {
                if (r.GetIsPartial()) {
                    printf("[partial] ");
                } else {
                    printf("[Final] ");
                }
                for (auto&& alt : r.GetAlternatives()) {
                    printf("%s\n", alt.GetTranscript().c_str());
                }
            }
        });

        StartStreamTranscriptionRequest request;
        request.SetMediaSampleRateHertz(SampleRate);
        request.SetLanguageCode(LanguageCode::en_US);
        request.SetMediaEncoding(MediaEncoding::pcm);
        request.SetEventStreamHandler(handler);

        auto OnStreamReady = [](AudioStream& stream) {
            CaptureAudio(stream);
            stream.flush();
            stream.Close();
        };

        Aws::Utils::Threading::Semaphore signaling(0 /*initialCount*/, 1 /*maxCount*/);
        auto OnResponseCallback = [&signaling](const TranscribeStreamingServiceClient*,
                  const Model::StartStreamTranscriptionRequest&,
                  const Model::StartStreamTranscriptionOutcome&,
                  const std::shared_ptr<const Aws::Client::AsyncCallerContext>&) { signaling.Release(); };

        client.StartStreamTranscriptionAsync(request, OnStreamReady, OnResponseCallback, nullptr /*context*/);
        signaling.WaitOne(); // prevent the application from exiting until we're done
    }

    Aws::ShutdownAPI(options);

    return 0;
}

The second file (audio-capture.cpp) contains the logic related to capturing audio from the microphone.

// audio-capture.cpp
#include <aws/core/utils/memory/stl/AWSVector.h>
#include <aws/core/utils/threading/Semaphore.h>
#include <aws/transcribestreaming/model/AudioStream.h>
#include <csignal>
#include <cstdio>
#include <portaudio.h>

using SampleType = int16_t;
extern int SampleRate;
int Finished = paContinue;
Aws::Utils::Threading::Semaphore pasignal(0 /*initialCount*/, 1 /*maxCount*/);

static int AudioCaptureCallback(const void* inputBuffer, void* outputBuffer, unsigned long framesPerBuffer,
    const PaStreamCallbackTimeInfo* timeInfo, PaStreamCallbackFlags statusFlags, void* userData)
{
    auto stream = static_cast<Aws::TranscribeStreamingService::Model::AudioStream*>(userData);
    const auto beg = static_cast<const unsigned char*>(inputBuffer);
    const auto end = beg + framesPerBuffer * sizeof(SampleType);

    (void)outputBuffer; // Prevent unused variable warnings
    (void)timeInfo;
    (void)statusFlags;
 
    Aws::Vector<unsigned char> bits { beg, end };
    Aws::TranscribeStreamingService::Model::AudioEvent event(std::move(bits));
    stream->WriteAudioEvent(event);

    if (Finished == paComplete) {
        pasignal.Release(); // signal the main thread to close the stream and exit
    }

    return Finished;
}

void interruptHandler(int)
{
    Finished = paComplete;
}
 
int CaptureAudio(Aws::TranscribeStreamingService::Model::AudioStream& targetStream)
{
 
    signal(SIGINT, interruptHandler); // handle ctrl-c
    PaStreamParameters inputParameters;
    PaStream* stream;
    PaError err = paNoError;
 
    err = Pa_Initialize();
    if (err != paNoError) {
        fprintf(stderr, "Error: Failed to initialize PortAudio.\n");
        return -1;
    }

    inputParameters.device = Pa_GetDefaultInputDevice(); // default input device
    if (inputParameters.device == paNoDevice) {
        fprintf(stderr, "Error: No default input device.\n");
        Pa_Terminate();
        return -1;
    }

    inputParameters.channelCount = 1;
    inputParameters.sampleFormat = paInt16;
    inputParameters.suggestedLatency = Pa_GetDeviceInfo(inputParameters.device)->defaultHighInputLatency;
    inputParameters.hostApiSpecificStreamInfo = nullptr;

    // start the audio capture
    err = Pa_OpenStream(&stream, &inputParameters, nullptr, /* &outputParameters, */
        SampleRate, paFramesPerBufferUnspecified,
        paClipOff, // you don't output out-of-range samples so don't bother clipping them.
        AudioCaptureCallback, &targetStream);

    if (err != paNoError) {
        fprintf(stderr, "Failed to open stream.\n");        
        goto done;
    }

    err = Pa_StartStream(stream);
    if (err != paNoError) {
        fprintf(stderr, "Failed to start stream.\n");
        goto done;
    }
    printf("=== Now recording!! Speak into the microphone. ===\n");
    fflush(stdout);

    if ((err = Pa_IsStreamActive(stream)) == 1) {
        pasignal.WaitOne();
    }
    if (err < 0) {
        goto done;
    }

    Pa_CloseStream(stream);

done:
    Pa_Terminate();
    return 0;
}

There is one line in the audio-capture.cpp file that is related to the Amazon Transcribe Streaming SDK. That is the line in which you wrap the audio bits in an AudioEvent object and write the event to the stream. That is required regardless of the audio source.

And finally, here’s a simple CMake script to build the application:

# CMakeLists.txt
cmake_minimum_required(VERSION 3.13) # target_link_directories (used below on MSVC) requires 3.13
set(CMAKE_CXX_STANDARD 11)
project(demo LANGUAGES CXX)

find_package(AWSSDK COMPONENTS transcribestreaming)

add_executable(${PROJECT_NAME} "main.cpp" "audio-capture.cpp")

target_link_libraries(${PROJECT_NAME} PRIVATE ${AWSSDK_LINK_LIBRARIES})

if(MSVC)
    target_include_directories(${PROJECT_NAME} PRIVATE "portaudio")
    target_link_directories(${PROJECT_NAME} PRIVATE "portaudio")
    target_link_libraries(${PROJECT_NAME} PRIVATE "portaudio_static_x64") # might have _x86 suffix instead
    target_compile_options(${PROJECT_NAME} PRIVATE "/W4" "/WX")
else()
    target_compile_options(${PROJECT_NAME} PRIVATE "-Wall" "-Wextra" "-Werror")
    target_link_libraries(${PROJECT_NAME} PRIVATE portaudio)
endif()

If you are building on Windows, copy the two files (portaudio.h and portaudio_static_x64.lib) that you set aside earlier from PortAudio's build output into a directory named "portaudio" under the demo directory, as follows:

├── CMakeLists.txt
├── audio-capture.cpp
├── build
├── main.cpp
└── portaudio
    ├── portaudio.h
    └── portaudio_static_x64.lib

If you’re building on macOS or Linux, skip this step.

Now, cd into that directory and run the following commands:

$ mkdir build
$ cd build
$ cmake .. -DCMAKE_BUILD_TYPE=Release
$ cmake --build . --config Release

These commands should create the executable demo under the build directory. Go ahead and execute it, and start speaking into the microphone when prompted. The transcribed words should appear on the screen as you speak.

Amazon Transcribe streaming transcription sends partial results while it receives audio input, so you might notice the partial results changing as you speak. If you pause long enough, it recognizes your silence as the end of a statement and sends a final result. As soon as you start speaking again, streaming transcription resumes sending partial results.

Clarifications

If you’ve used the asynchronous APIs in the C++ SDK previously, you might be wondering why the stream is passed through a callback instead of setting it as a property on the request directly.

The reason is that Amazon Transcribe Streaming is one of the first AWS services to use a new binary serialization and deserialization protocol. The protocol uses the notion of an “event” to group application-defined chunks of data. Each event is signed using the signature of the previous event as a seed. The first event is seeded using the request’s signature. Therefore, the stream must be “primed” by seeding it with the initial request’s signature before you can write to it.

Summary

HTTP/2 opens new opportunities for using the web and AWS in real-time scenarios. With the support of the AWS C++ SDK, you can write applications that translate speech to text on the fly.

Amazon Transcribe Streaming follows the same pricing model as Amazon Transcribe. For more details, see Amazon Transcribe Pricing.

I’m excited to see what you build using this new technology. Send AWS your feedback on GitHub and on Twitter (#awssdk).

from AWS Developer Blog https://aws.amazon.com/blogs/developer/real-time-streaming-transcription-with-the-aws-c-sdk/

Announcing AWS Toolkit for Visual Studio Code

Visual Studio Code has become an enormously popular tool for serverless developers, partly due to the intuitive user interface. It’s also because of the rich ecosystem of extensions that can customize and automate so much of the development experience. We are excited to announce that the AWS Toolkit for Visual Studio Code extension is now generally available, making it even easier for the development community to build serverless projects using this editor.

The AWS Toolkit for Visual Studio Code entered developer preview in November of 2018 and is open-sourced on GitHub, allowing builders to make their contributions to the code base and feature set. The toolkit enables you to easily develop serverless applications, including creating a new project, local debugging, and deploying your project—all conveniently from within the editor. The toolkit supports Node.js, Python, and .NET.

Using the AWS Toolkit for Visual Studio Code, you can:

  • Test your code locally with step-through debugging in a Lambda-like environment.
  • Deploy your applications to the AWS Region of your choice.
  • Invoke your Lambda functions locally or remotely.
  • Specify function configurations such as an event payload and environment variables.

We’re distributing the AWS Toolkit for Visual Studio Code under the open source Apache License, version 2.0.

Installation

From Visual Studio Code, choose the Extensions icon on the Activity Bar. In the Search Extensions in Marketplace box, enter AWS Toolkit and then choose AWS Toolkit for Visual Studio Code as shown below. This opens a new tab in the editor showing the toolkit’s installation page. Choose the Install button in the header to add the extension.

After you install the AWS Toolkit for Visual Studio Code, you must complete a few additional setup steps on the local machine where you installed the toolkit to access most of its features, including developing serverless applications with AWS. For complete setup instructions, see Setting Up the AWS Toolkit for Visual Studio Code in the AWS Toolkit for Visual Studio Code User Guide.

Building a serverless application with the AWS Toolkit for Visual Studio Code

In this example, you set up a Hello World application using Node.js:

1. Open the Command Palette by choosing Command Palette from the View menu. Type AWS to see all the commands available from the toolkit. Choose AWS: Create new AWS SAM Application.

2. Choose the nodejs10.x runtime, specify a folder, and name the application Hello World. Press Enter to confirm these settings. The toolkit uses AWS SAM to create the application files, which appear in the Explorer panel.

3. To run the application locally, open the app.js file in the editor. A CodeLens appears above the handler function, showing options to Run Locally, Debug Locally, or Configure the function. Choose Run Locally.

This uses Docker to run the function on your local machine. You can see the Hello World output in the console window.

Step-through debugging of the application

One of the most exciting features in the toolkit is also one of the most powerful in helping you find problems in your code: step-through debugging.

The AWS Toolkit for Visual Studio Code brings the power of step-through debugging to serverless development. It lets you set breakpoints in your code and evaluate variable values and object states. You can activate this by choosing Debug Locally in the CodeLens that appears above your functions.

This enables the Debug view, displaying information related to debugging the current application. It also shows a command bar with debugging options and launch configuration settings. As your function executes and stops at a breakpoint, the editor shows current object state when you hover over the code and allows data inspection in the Variables panel.

For developers familiar with the power of the debugging support within Visual Studio Code, the toolkit now lets you use those features with serverless development of your Lambda functions. This makes it easier to diagnose problems quickly and iteratively, without needing to build and deploy functions remotely or rely exclusively on verbose logging to isolate issues.

Deploying the application to AWS

The deployment process requires an Amazon S3 bucket with a globally unique name in the same Region where your application runs.

1. To create an S3 bucket from the terminal window, enter:

aws s3 mb s3://your-unique-bucket-name --region your-region

This requires the AWS CLI, which you can install by following the instructions detailed in the documentation. Alternatively, you can create a new S3 bucket in the AWS Management Console.

2. Open the Command Palette by choosing Command Palette from the View menu. Type AWS to see all the commands available from the toolkit. Choose AWS: Deploy a SAM Application.

3. Choose the suggested YAML template, select the correct Region, and enter the name of the bucket that you created in step 1. Provide a name for the stack, then press Enter. The process may take several minutes to complete. After it does, the Output panel shows that the deployment completed.

Invoking the remote function

With your application deployed to the AWS Cloud, you can invoke the remote function directly from the AWS Toolkit for Visual Studio Code:

  1. From the Activity Bar, choose the AWS icon to open AWS Explorer.
  2. Choose Click to add a region to view functions… and choose a Region from the list.
  3. Click to expand CloudFormation and Lambda. Explorer shows the CloudFormation stacks and the Lambda functions in this Region.
  4. Right-click the HelloWorld Lambda function in the Explorer panel and choose Invoke on AWS. This opens a new tab in the editor.
  5. Choose Hello World from the payload template dropdown and choose Invoke. The output panel displays the ‘hello world’ JSON response from the remote function.

Congratulations! You successfully deployed a serverless application to production using the AWS Toolkit for Visual Studio Code. The User Guide includes more information on the toolkit’s other development features.

Conclusion

In this post, I demonstrated how to deploy a simple serverless application using the AWS Toolkit for Visual Studio Code. Using this toolkit, developers can test and debug locally before deployment, and modify AWS resources defined in the AWS SAM template.

For example, by using this toolkit to modify your AWS SAM template, you can add an S3 bucket to store images or documents, add a DynamoDB table to store user data, or change the permissions used by your functions. It’s simple to create or update stacks, allowing you to quickly iterate until you complete your application.

AWS SAM empowers developers to build serverless applications more quickly by simplifying AWS CloudFormation and automating the deployment process. The AWS Toolkit for Visual Studio Code takes the next step, allowing you to manage the entire edit, build, and deploy process from your preferred development environment. We are excited to see what you can build with this tool!

from AWS Developer Blog https://aws.amazon.com/blogs/developer/announcing-aws-toolkit-for-visual-studio-code/

Serverless data engineering at Zalando with the AWS CDK

This blog was authored by Viacheslav Inozemtsev, Data Engineer at Zalando, an active user of the serverless technologies in AWS, and an early adopter of the AWS Cloud Development Kit.

 

Infrastructure is extremely important for any system, but it usually doesn't carry business logic. It's also hard to manage and track. Scripts and templates become large and complex, reproducibility becomes nearly impossible, and the system ends up in an unknown state that's hard to diagnose.

The AWS Cloud Development Kit (AWS CDK) is a unified interface between you, who defines infrastructure, and AWS CloudFormation, which runs infrastructure in AWS. It’s a powerful tool to create complex infrastructure in a concise and, at the same time, clean way.

Let’s quickly go through the phases of how a company’s infrastructure can be handled in AWS.

1. You start with the AWS console UI and create resources directly, without any notion of stacks. If something goes wrong, it's hard to reproduce the same resources in different environments, or even to recreate the production system. This approach works only for initially exploring the possibilities of a service, not for production deployments.

2. You begin using the AWS API, most commonly by calling it from the AWS Command Line Interface (AWS CLI). You create bash or Python scripts that contain multiple calls to the API, made in an imperative way, trying to reach a state where resources are created without failing. But the scripts (especially bash scripts) become large and messy, and they multiply. They’re not written in a standard way, and maintenance becomes extremely tough.

3. You start migrating the scripts to AWS CloudFormation templates. This is better, but although AWS CloudFormation is a declarative and unified way of defining resources in AWS, the templates also get messy pretty quickly, because they mirror what happens in the scripts. You have to create the templates, usually by copying parts from other existing templates, or by using the AWS CloudFormation console. In either case, they grow big pretty fast. Human error occurs. Verification can only be done at the time of submission to AWS CloudFormation. And you still need scripts to deploy. Overall, it’s better, but still not optimal.

4. This is where the AWS CDK makes the difference.

Advantages of using the AWS CDK

The AWS CDK is a layer on top of AWS CloudFormation that simplifies the generation of resource definitions. The AWS CDK is a library that you can install using the npm package manager, and then use from your CLI. The AWS CDK isn’t an AWS service, and has no user interface. Code is the input, and AWS CloudFormation templates are the output.

The AWS CDK does three things:

  • It generates an AWS CloudFormation template from your code. This is a major advantage, because writing those templates manually is tedious and error prone.
  • It generates a list of changes that would happen to the existing stack, if you applied the generated template. This gives better visibility and reduces human error.
  • It deploys the template to AWS CloudFormation, where changes get applied. This simplifies the deployment process and reduces it to a call of a single AWS CDK command.

The AWS CDK enables you to describe your infrastructure in code. Although this code is written in one of these imperative programming languages — TypeScript, Python, Java, C# — it’s declarative. You define resources in the style of object-oriented programming, as if they were simply instances of classes. When this code gets compiled, an engineer can quickly see if there are problems. For example, if AWS CloudFormation requires a parameter for a certain resource, and the code doesn’t provide it, you see a compile error.
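
To make that compile-time check concrete, here is a minimal sketch of the idea; the handler name and artifact path are hypothetical, and helper names vary slightly across early CDK versions. The FunctionProps type requires code, handler, and runtime, so omitting any of them fails compilation long before AWS CloudFormation ever sees a template.

import { Construct, Duration } from '@aws-cdk/core'
import { Code, Function, Runtime } from '@aws-cdk/aws-lambda'

export class CompileCheckExample extends Construct {
  constructor(scope: Construct, id: string) {
    super(scope, id)

    // 'runtime', 'code', and 'handler' are required properties.
    // Removing any of these lines produces a compile error such as
    // "Property 'code' is missing in type ..." at build time.
    new Function(this, 'MyFunction', {
      runtime: Runtime.JAVA_8,
      code: Code.asset('./handler.jar'), // hypothetical artifact path
      handler: 'example.Handler::handleRequest', // hypothetical handler
      timeout: Duration.seconds(30)
    })
  }
}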

Defining resources in a programming language enables flexible, convenient creation of complex infrastructure. For example, you can pass a collection of references to a certain resource type into a function. Additional parameters can be prepared before creating the stack, and passed into the constructor of the stack as arguments. You can use a configuration file, and load settings from it in the code wherever you need them. Auto-completion helps you explore possible options and create definitions faster. An application can consist of two stacks, and the output of creating one stack can be passed as an argument for the creation of the other.

Every resource created by the AWS CDK is always part of a stack. You can have multiple stacks in one application, and those stacks can be dependent. That’s necessary when one stack contains resources that are referenced by resources in another stack.
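
As a minimal sketch of that idea (the stack names are hypothetical, and the exact App API differs slightly between early CDK versions), one stack can expose a resource as a public property while another accepts it as a constructor argument:

import { App, Construct, Stack, StackProps } from '@aws-cdk/core'
import { Queue } from '@aws-cdk/aws-sqs'

// The first stack owns a queue and exposes it as a public property.
class SharedQueueStack extends Stack {
  public readonly queue: Queue

  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props)
    this.queue = new Queue(this, 'SharedQueue')
  }
}

interface ConsumerStackProps extends StackProps {
  queue: Queue
}

// The second stack receives the queue reference as an argument;
// the AWS CDK turns this into a cross-stack reference for you.
class ConsumerStack extends Stack {
  constructor(scope: Construct, id: string, props: ConsumerStackProps) {
    super(scope, id, props)
    // ...define resources that consume props.queue here...
  }
}

const app = new App()
const shared = new SharedQueueStack(app, 'SharedQueueStack')
new ConsumerStack(app, 'ConsumerStack', { queue: shared.queue })
app.synth()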

You can see how convenient it is. Now let’s have a look at a few common patterns we encountered while using the AWS CDK at Zalando.

Common patterns – reusable constructs

The AWS CDK has four main classes:

  • Resource – Represents a single resource in AWS.
  • Stack – Represents an AWS CloudFormation stack, and contains resources.
  • App – Represents the whole application, and can contain multiple stacks.
  • Construct – A group of resources that are bound together for a purpose.

The following diagram shows how these classes interact.

Every resource in AWS has a corresponding class in the AWS CDK that extends the Resource class. All those classes start with the Cfn prefix, which stands for CloudFormation, and make up the so-called level 1 of the AWS CDK. Level 1 can be used, but usually it isn’t, because there is level 2.

Level 2 contains standard constructs that represent groups of resources usually used together. For example, the Function construct from the @aws-cdk/aws-lambda package represents not just a Lambda function; it actually contains two resources: the Lambda function itself and its IAM role. This simplifies the creation of a Lambda function, because a function always requires a role to run with. There are many other handy level 2 constructs that cover most standard use cases.
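
Here is a minimal sketch of the difference between the two levels, using Amazon SQS; the construct IDs and queue name are illustrative:

import { Construct } from '@aws-cdk/core'
import { CfnQueue, Queue } from '@aws-cdk/aws-sqs'

export class LevelsExample extends Construct {
  constructor(scope: Construct, id: string) {
    super(scope, id)

    // Level 1: a thin wrapper around the raw AWS::SQS::Queue resource.
    // You work with exactly the properties AWS CloudFormation defines.
    new CfnQueue(this, 'RawQueue', {
      queueName: 'my-raw-queue'
    })

    // Level 2: a higher-level construct with sensible defaults;
    // no properties are required at all.
    new Queue(this, 'EasyQueue')
  }
}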

There is also level 3, which is a way to share your own constructs between your projects, or even with other people anywhere in the world. The idea is that you can come up with a generic enough construct, push it to the npm registry (the same way as, for example, the @aws-cdk/aws-lambda package is pushed), and then anyone can install this package and use that construct in their code. This is an extremely powerful way of sharing definitions of resources.

Now I am going to talk about a few patterns we have uncovered so far while working on data pipelines. These are:

  • Periodic action
  • Reactive processing
  • Fan-out with Lambda functions
  • Signaling pub/sub system
  • Long-running data processing

Periodic action

The first pattern I want to show is a combination of a cron rule and a Lambda function. The Lambda function gets triggered every N minutes to orchestrate other parts of the system, for example, to pull new messages from an external queue and push them into an internal Amazon SQS queue.

The following code example shows how the AWS CDK helps you manage defining infrastructure. In the code, the class CronLambdaConstruct extends the AWS CDK class Construct. Inside, there’s only a constructor. Every resource defined in the constructor will be created as part of the stack containing this construct. In this case, the resources are a Lambda function, its IAM role, a scheduled rule, and a permission that allows the rule to trigger the function.

import { Construct, Duration } from '@aws-cdk/core'
import { Function, S3Code, Runtime } from '@aws-cdk/aws-lambda'
import { Rule, Schedule } from '@aws-cdk/aws-events'
import { LambdaFunction } from '@aws-cdk/aws-events-targets';

export interface CronLambdaProps {
  codeReference: S3Code,
  handlerFullName: string,
  lambdaTimeout: number,
  lambdaMemory: number
}

export class CronLambdaConstruct extends Construct {
  constructor(parent: Construct, id: string, props: CronLambdaProps) {
    super(parent, id);

    const myLambda = new Function(this, 'MyLambda', {
      runtime: Runtime.JAVA_8,
      code: props.codeReference,
      handler: props.handlerFullName,
      timeout: Duration.seconds(props.lambdaTimeout),
      memorySize: props.lambdaMemory,
      reservedConcurrentExecutions: 1
    })

    new Rule(this, 'MyCronRule', {
      enabled: true,
      schedule: Schedule.rate(Duration.minutes(1)),
      targets: [new LambdaFunction(myLambda)]
    })
  }
}

The following picture depicts the output of cdk diff, a command that shows the plan of changes. Although only the Function and Rule objects are explicitly instantiated in the code, four resources will be created. This is because Function hides the creation of its associated role, and the Rule object results in the creation of the rule itself together with the permission to trigger the Lambda function.

Reactive processing

 

One of the major patterns useful in data pipelines is to connect an Amazon SQS queue to a Lambda function as its source, using event source mapping. This technique allows you to immediately process everything coming to the queue with the Lambda function, as new messages arrive. Lambda scales out as needed. You can detach the queue to stop processing, for example, for troubleshooting purposes.

The most interesting part is that you can attach another SQS queue to the source queue, known as a dead letter queue. A redrive policy specifies how many processing attempts for a message must fail before it goes to the dead letter queue. Then the same Lambda function can use the dead letter queue as its second source, normally in a detached state, to reprocess failed messages later. As soon as you fix the problem that caused the Lambda function to fail, you can attach the dead letter queue to the Lambda function. Then it can try to reprocess all the failed messages. This activity takes a click of a button in the Lambda console.

import { Construct, Duration } from '@aws-cdk/core'
import { Queue } from '@aws-cdk/aws-sqs'
import { Function, S3Code, Runtime, EventSourceMapping } from '@aws-cdk/aws-lambda';

export interface QueueDLQLambdaProps {
  codeReference: S3Code,
  handlerFullName: string,
  lambdaTimeout: number,
  lambdaMemory: number
}

export class QueueDLQLambdaConstruct extends Construct {
  constructor(parent: Construct, id: string, props: QueueDLQLambdaProps) {
    super(parent, id);
 
    const dlq = new Queue(this, 'DLQ', {
      maxMessageSizeBytes: 262144,
      retentionPeriod: Duration.days(14),
      visibilityTimeout: Duration.seconds(props.lambdaTimeout)
    })

    const inputQueue = new Queue(this, 'InputQueue', {
      maxMessageSizeBytes: 262144,
      retentionPeriod: Duration.days(14),
      visibilityTimeout: Duration.seconds(props.lambdaTimeout),
      // Redrive policy: after three failed processing attempts,
      // a message moves to the dead letter queue (the count is illustrative).
      deadLetterQueue: { maxReceiveCount: 3, queue: dlq }
    })

    const myLambda = new Function(this, 'MyLambda', {
      runtime: Runtime.JAVA_8,
      code: props.codeReference,
      handler: props.handlerFullName,
      timeout: Duration.seconds(props.lambdaTimeout),
      memorySize: props.lambdaMemory,
      reservedConcurrentExecutions: 1
    })

    new EventSourceMapping(this, 'InputQueueSource', {
      enabled: true,
      batchSize: 10,
      eventSourceArn: inputQueue.queueArn,
      target: myLambda
    })

    new EventSourceMapping(this, 'DLQSource', {
      enabled: false,
      batchSize: 10,
      eventSourceArn: dlq.queueArn,
      target: myLambda
    })
  }
}

Fan-out with Lambda functions

Now, let’s reuse the two previous constructs to get another pattern: fan-out of processing.

First, there is an orchestrating Lambda function that runs with reserved concurrency 1, which prevents parallel executions. This Lambda function prepares multiple activities that can vary, depending on certain parameters or state. For each unit of processing, the orchestrating Lambda function sends a message to an SQS queue that is attached as the source of another Lambda function with high concurrency. The processing Lambda function then scales out and executes the tasks.

A good example of this pattern is processing all objects under a certain prefix in Amazon S3. The first Lambda function lists the prefix, and for each listed object it sends a message to the SQS queue. The second Lambda function gets invoked once per object and does the job.
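
A condensed sketch of the fan-out wiring could look like the following; it doesn’t literally reuse the constructs above, and the handler names are hypothetical:

import { Construct, Duration } from '@aws-cdk/core'
import { Queue } from '@aws-cdk/aws-sqs'
import { Function, S3Code, Runtime, EventSourceMapping } from '@aws-cdk/aws-lambda'

export interface FanOutProps {
  orchestratorCode: S3Code,
  workerCode: S3Code
}

export class FanOutConstruct extends Construct {
  constructor(parent: Construct, id: string, props: FanOutProps) {
    super(parent, id)

    // One message per unit of work.
    const workQueue = new Queue(this, 'WorkQueue', {
      visibilityTimeout: Duration.seconds(300)
    })

    // Orchestrator: reserved concurrency 1 prevents parallel runs.
    const orchestrator = new Function(this, 'Orchestrator', {
      runtime: Runtime.JAVA_8,
      code: props.orchestratorCode,
      handler: 'example.Orchestrator::handleRequest', // hypothetical
      timeout: Duration.seconds(300),
      reservedConcurrentExecutions: 1,
      // Tell the orchestrator where to enqueue work items.
      environment: { WORK_QUEUE_URL: workQueue.queueUrl }
    })
    workQueue.grantSendMessages(orchestrator)

    // Worker: scales out, invoked once per batch of messages.
    const worker = new Function(this, 'Worker', {
      runtime: Runtime.JAVA_8,
      code: props.workerCode,
      handler: 'example.Worker::handleRequest', // hypothetical
      timeout: Duration.seconds(60)
    })

    new EventSourceMapping(this, 'WorkQueueSource', {
      enabled: true,
      batchSize: 10,
      eventSourceArn: workQueue.queueArn,
      target: worker
    })
  }
}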

This pattern doesn’t necessarily need to start with a cron Rule. The orchestrating Lambda function can be triggered by other means. For example, it can be reactively invoked by an s3:ObjectCreated event. I’ll talk about this next.

Signaling pub/sub system

Another important approach, which we use across Zalando, is signaling to others that datasets are ready for consumption. This turned out to be a great opportunity to use AWS CDK constructs.

In this use case, there is a central Amazon SNS topic receiving notifications about all datasets that are ready. Anyone in the company can subscribe to the topic by filtering on the dataset names.

import { Construct, Duration } from '@aws-cdk/core'
import { Queue } from '@aws-cdk/aws-sqs'
import { Topic } from '@aws-cdk/aws-sns'
import { Function, S3Code, Runtime, EventSourceMapping } from '@aws-cdk/aws-lambda'
import { PolicyStatement } from '@aws-cdk/aws-iam';

export interface QueueLambdaTopicProps {
  codeReference: S3Code,
  handlerFullName: string,
  lambdaTimeout: number,
  lambdaMemory: number,
  teamAccounts: string[]
}

export class QueueLambdaTopicConstruct extends Construct {

  public readonly centralTopic: Topic

  constructor(parent: Construct, id: string, props: QueueLambdaTopicProps) {
    super(parent, id);

    const inputQueue = new Queue(this, 'InputQueue', {
      maxMessageSizeBytes: 262144,
      retentionPeriod: Duration.days(14),
      visibilityTimeout: Duration.seconds(props.lambdaTimeout)
    })

    const enrichmentLambda = new Function(this, 'EnrichmentLambda', {
      runtime: Runtime.JAVA_8,
      code: props.codeReference,
      handler: props.handlerFullName,
      timeout: Duration.seconds(props.lambdaTimeout),
      memorySize: props.lambdaMemory
    })

    new EventSourceMapping(this, 'InputQueueSource', {
      enabled: true,
      batchSize: 10,
      eventSourceArn: inputQueue.queueArn,
      target: enrichmentLambda
    })

    this.centralTopic = new Topic(this, 'CentralTopic', {
      displayName: 'central-dataset-readiness-topic'
    })

    this.centralTopic.grantPublish(enrichmentLambda)

    const subscribeStatement = new PolicyStatement({
      actions: ['sns:Subscribe'],
      resources: [this.centralTopic.topicArn]
    })

    props.teamAccounts.forEach(teamAccount => {
      subscribeStatement.addArnPrincipal(`arn:aws:iam::${teamAccount}:root`)
    })

    this.centralTopic.addToResourcePolicy(subscribeStatement)
  }
}

To subscribe, there is a construct that contains an SQS queue and an SNS subscription, which receives notifications from the central SNS topic and moves them to the queue.

import { Construct, Duration } from '@aws-cdk/core'
import { Queue } from '@aws-cdk/aws-sqs'
import { Topic } from '@aws-cdk/aws-sns'
import { SqsSubscription } from '@aws-cdk/aws-sns-subscriptions';

export interface SubscriptionQueueProps {
  visibilityTimeoutSec: number,
  centralTopicARN: string
}

export class SubscriptionQueueConstruct extends Construct {

  constructor(parent: Construct, id: string, props: SubscriptionQueueProps) {
    super(parent, id);

    const centralTopic = Topic.fromTopicArn(this, 'CentralTopic', props.centralTopicARN)

    const teamQueue = new Queue(this, 'TeamQueue', {
      maxMessageSizeBytes: 262144,
      retentionPeriod: Duration.days(14),
      visibilityTimeout: Duration.seconds(props.visibilityTimeoutSec)
    })

    centralTopic.addSubscription(new SqsSubscription(teamQueue))
  }
}

Here we have two constructs. The first construct contains the input SQS queue, the Lambda function that enriches messages with MessageAttributes (to enable filtering), and the central SNS topic for publishing about ready datasets. The second construct is for the subscriptions.

Long-running data processing

The last pattern I want to describe is a long-running processing job. In data engineering, this is the “Hello World” of infrastructure problems. Essentially, you want to run a batch processing job using Apache Spark or a similar framework, and as soon as it’s finished, trigger downstream actions. In AWS, this can be solved using AWS Step Functions, a service where you can create state machines (in other words, workflows) of any complexity. The picture below depicts the pattern being described.

To do this using the AWS CDK, you can create a StateMachine object from the @aws-cdk/aws-stepfunctions package. It triggers a job, waits for its completion or failure, and then reports the result. You can even combine it with the previous pattern and send a signal to the input queue in front of the central SNS topic, announcing that the dataset is ready for downstream processing. For this purpose, you can use the integration of Step Functions with SQS via CloudWatch Events rules.

import { Construct, Duration } from '@aws-cdk/core'
import { IFunction } from '@aws-cdk/aws-lambda'
import { Condition, Choice, Fail, Task, Wait, Errors, StateMachine, Succeed, WaitTime } from '@aws-cdk/aws-stepfunctions'
import { InvokeFunction } from '@aws-cdk/aws-stepfunctions-tasks';

export interface StepfunctionLongProcessProps {
  submitJobLambda: IFunction,
  checkJobStateLambda: IFunction
}

export class StepfunctionLongProcessConstruct extends Construct {

  constructor(parent: Construct, id: string, props: StepfunctionLongProcessProps) {
    super(parent, id);

    const retryProps = {
      intervalSeconds: 15,
      maxAttempts: 3,
      backoffRate: 2,
      errors: [Errors.ALL]
    }

    const failedState = new Fail(this, 'Processing failed', {
      error: 'Processing failed'
    })

    const success = new Succeed(this, 'Successful!')

    const submitJob = new Task(this, 'Submit job', {
      task: new InvokeFunction(props.submitJobLambda),
      inputPath: '$.jobInfo',
      resultPath: '$.jobState'
    }).addRetry(retryProps)

    const waitForCompletion = new Wait(this, 'Wait for completion...', {
      time: WaitTime.duration(Duration.seconds(30))
    })

    const checkJobState = new Task(this, 'Check job state', {
      task: new InvokeFunction(props.checkJobStateLambda),
      inputPath: '$.jobInfo',
      resultPath: '$.jobState'
    }).addRetry(retryProps)
 
    const isJobCompleted = new Choice(this, 'Is processing done?')

    const stepsDefinition = submitJob
      .next(waitForCompletion)
      .next(checkJobState)
      .next(isJobCompleted
        .when(Condition.stringEquals('$.jobState', 'RUNNING'), waitForCompletion)
        .when(Condition.stringEquals('$.jobState', 'FAILED'), failedState)
        .when(Condition.stringEquals('$.jobState', 'SUCCESS'), success)
    )

    new StateMachine(this, 'StateMachine', {
      definition: stepsDefinition,
      timeout: Duration.hours(1)
    })
  }
}

Conclusion

The AWS CDK provides an easy, convenient, and flexible way to organize stacks and resources in the AWS Cloud. The CDK not only makes the development lifecycle faster, but also enables safer and more structured organization of the infrastructure. We, in the Core Data Lake team of Zalando, have started migrating every part of our infrastructure to the AWS CDK, whenever we refactor old systems or create new ones. And we clearly see how we benefit from it, and how resources become a commodity instead of being a burden.

 

About the Author

Viacheslav Inozemtsev is a Data Engineer at Zalando, with 8 years of data/software engineering experience, a Specialist degree in applied mathematics, and an MSc degree in computer science, mostly on the topics of data processing and analysis. In the Core Data Lake team of Zalando he is building an internal central data platform on top of AWS. He is an active user of the serverless technologies in AWS, and an early adopter of the AWS Cloud Development Kit.

 

Disclaimer

The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.

from AWS Developer Blog https://aws.amazon.com/blogs/developer/serverless-data-engineering-at-zalando-with-the-aws-cdk/

Referencing the AWS SDK for .NET Standard 2.0 from Unity, Xamarin, or UWP

Referencing the AWS SDK for .NET Standard 2.0 from Unity, Xamarin, or UWP

In March 2019, AWS announced support for .NET Standard 2.0 in the AWS SDK for .NET. AWS also announced plans to remove the Portable Class Library (PCL) assemblies from the NuGet packages in favor of the .NET Standard 2.0 binaries.

If you’re starting a new project targeting a platform supported by .NET Standard 2.0, especially recent versions of Unity, Xamarin, and UWP, you may want to use the .NET Standard 2.0 assemblies for the AWS SDK instead of the PCL assemblies.

Currently, it’s challenging to consume .NET Standard 2.0 assemblies from NuGet packages directly in your PCL, Xamarin, or UWP applications. Unfortunately, the new csproj file format and NuGet don’t let you select assemblies for a specific target framework (in this case, .NET Standard 2.0). This limitation can cause problems because NuGet always restores the assemblies for the target framework of the project being built (in this case, one of the legacy PCL assemblies).

Considering this limitation, our guidance is for your application to directly reference the AWS SDK assemblies (DLL files) instead of the NuGet packages.

  1. Go to the NuGet page for the specific package (for example, AWSSDK.Core) and choose Download Package.
  2. Rename the downloaded .nupkg file with a .zip extension.
  3. Open it to extract the assemblies for a specific target framework (for example /lib/netstandard2.0/AWSSDK.Core.dll).

When using Unity (2018.1 or newer), choose .NET 4.x Equivalent as the Scripting Runtime Version and copy the AWS SDK for .NET assemblies into the Assets folder.

Because this process can be time-consuming and error-prone, you should use a script to perform the download and extraction, especially if your project references multiple AWS services. The following PowerShell script downloads and extracts all the latest SDK .dll files into the current folder:

<#
.Synopsis
    Downloads all assemblies of the AWS SDK for .NET for a specific target framework.
.DESCRIPTION
    Downloads all assemblies of the AWS SDK for .NET for a specific target framework.
    This script allows specifying a version of the SDK to download or a target framework.

.NOTES
    This script downloads all files to the current folder (the folder returned by Get-Location).
    This script depends on GitHub to retrieve the list of assemblies to download and on NuGet
    to retrieve the relative packages.

.EXAMPLE
   ./DownloadSDK.ps1

   Downloads the latest AWS SDK for .NET assemblies for .NET Standard 2.0.

.EXAMPLE
    ./DownloadSDK.ps1 -TargetFramework net35

    Downloads the latest AWS SDK for .NET assemblies for .NET Framework 3.5.
    
.EXAMPLE
    ./DownloadSDK.ps1 -SDKVersion 3.3.0.0

    Downloads the AWS SDK for .NET version 3.3.0.0 assemblies for .NET Standard 2.0.

.PARAMETER TargetFramework
    The name of the target framework for which to download the AWS SDK for .NET assemblies. It must be a valid Target Framework Moniker, as described in https://docs.microsoft.com/en-us/dotnet/standard/frameworks.

.PARAMETER SDKVersion
    The AWS SDK for .NET version to download. This must be in the full four-number format (e.g., "3.3.0.0") and it must correspond to a tag on the https://github.com/aws/aws-sdk-net/ repository.
#>

Param (
    [Parameter()]
    [ValidateNotNullOrEmpty()]
    [string]$TargetFramework = 'netstandard2.0',
    [Parameter()]
    [ValidateNotNullOrEmpty()]
    [string]$SDKVersion = 'master'
)

function DownloadPackageAndExtractDll
{
    Param (
        [Parameter(Mandatory = $true)]
        [string] $name,
        [Parameter(Mandatory = $true)]
        [string] $version
    )

    Write-Progress -Activity "Downloading $name"

    $packageUri = "https://www.nuget.org/api/v2/package/$name/$version"
    $filePath = [System.IO.Path]::GetTempFileName()
    $WebClient.DownloadFile($packageUri, $filePath)

    try {
        $zipArchive = [System.IO.Compression.ZipFile]::OpenRead($filePath)
        $entry = $zipArchive.GetEntry("lib/$TargetFramework/$name.dll")
        if ($null -ne $entry)
        {
            $entryStream = $entry.Open()
            $dllPath = Get-Location | Join-Path -ChildPath "./$name.dll"
            $dllFileStream = [System.IO.File]::OpenWrite($dllPath)
            $entryStream.CopyTo($dllFileStream)
            $dllFileStream.Close();
        }
    }
    finally {
        if ($null -ne $dllFileStream)
        {
            $dllFileStream.Dispose()
        }
        if ($null -ne $entryStream)
        {
            $entryStream.Dispose()
        }
        if ($null -ne $zipArchive)
        {
            $zipArchive.Dispose()
        }
        Remove-Item $filePath
    }
}

try {
    $WebClient = New-Object System.Net.WebClient
    Add-Type -AssemblyName System.IO.Compression.FileSystem

    $sdkVersionsUri = "https://raw.githubusercontent.com/aws/aws-sdk-net/$SDKVersion/generator/ServiceModels/_sdk-versions.json"
    $versions = Invoke-WebRequest $sdkVersionsUri | ConvertFrom-Json
    DownloadPackageAndExtractDll "AWSSDK.Core" $versions.CoreVersion
    foreach ($service in $versions.ServiceVersions.psobject.Properties)
    {
        DownloadPackageAndExtractDll "AWSSDK.$($service.Name)" $service.Value.Version
    }    
}
finally {
    if ($null -ne $WebClient)
    {
        $WebClient.Dispose()
    } 
}

At this time, not all features specific to the PCL and Unity SDK libraries have been ported over to .NET Standard 2.0. To suggest features, changes, or leave other feedback to make PCL and Unity development easier, open an issue on our aws-sdk-net-issues GitHub repo.

This workaround will only be needed until PCL assemblies are removed from the NuGet packages. At that time, restoring the NuGet packages from an iOS, Android or UWP project (either a Xamarin or Unity project) should result in the .NET Standard 2.0 assemblies being referenced and included in your build outputs.

from AWS Developer Blog https://aws.amazon.com/blogs/developer/referencing-the-aws-sdk-for-net-standard-2-0-from-unity-xamarin-or-uwp/

Getting started with the AWS Cloud Development Kit and Python

Getting started with the AWS Cloud Development Kit and Python

This post introduces you to the new Python bindings for the AWS Cloud Development Kit (AWS CDK).

What’s the AWS CDK, you might ask? Good question! You are probably familiar with the concept of infrastructure as code (IaC). When you think of IaC, you might think of things like AWS CloudFormation.

AWS CloudFormation allows you to define your AWS infrastructure in JSON or YAML files that can be managed within your source code repository, just like any other code. You can do pull requests and code reviews. When everything looks good, you can use these files as input into an automated process (CI/CD) that deploys your infrastructure changes.

The CDK actually builds on AWS CloudFormation and uses it as the engine for provisioning AWS resources. Rather than using a declarative language like JSON or YAML to define your infrastructure, the CDK lets you do that in your favorite imperative programming language. This includes languages such as TypeScript, Java, C#, and now Python.

About this post

  • Time to read – 19 minutes
  • Time to complete (estimated) – 30 minutes
  • Cost to complete – $0 free tier (tiny fraction of a penny if you aren’t free tier)
  • Learning level – Intermediate (200)
  • Services used – AWS CDK, AWS CloudFormation

Why would an imperative language be better than a declarative language? Well, it may not always be, but there are some real advantages: IDE integration and composition.

IDE integration

You probably have your favorite IDE for your favorite programming language. It provides all kinds of useful features that make you a more productive developer (for example, code completion, integrated documentation, or refactoring tools).

With CDK, you automatically get all of those same advantages when defining your AWS infrastructure. That’s because you’re doing it in the same language that you use for your application code.

Composition

One of the things that modern programming languages do well is composition. By that, I mean the creation of new, higher-level abstractions that hide the details of what is happening underneath and expose a much simpler API. This is one of the main things that we do as developers, creating higher levels of abstraction to simplify code.

It turns out that this is also useful when defining your infrastructure. The existing APIs to AWS services are, by design, fairly low level, because they try to expose as much functionality as possible to a broad audience of developers. IaC tools like AWS CloudFormation expose a declarative interface, but that interface sits at the same level as the API, so it’s equally complex.

In contrast, CDK allows you to compose new abstractions that hide details and simplify common use cases. Then, it packages that code up as a library in your language of choice so that others can easily take advantage.

One of the other neat things about the CDK is that it is designed to support multiple programming languages. The core of the system is written in TypeScript, but bindings for other languages can be added.

That brings me back to the topic of this post, the Python bindings for CDK.

Sample Python application

First, there is some installation that must happen. Rather than describe all of that here, see Getting Started with the AWS CDK.

Create the application

Now, create a sample application.

$ mkdir my_python_sample
$ cd my_python_sample
$ cdk init
Available templates:
* app: Template for a CDK Application
└─ cdk init app --language=[csharp|fsharp|java|python|typescript]
* lib: Template for a CDK Construct Library
└─ cdk init lib --language=typescript
* sample-app: Example CDK Application with some constructs
└─ cdk init sample-app --language=[python|typescript]

The first thing you do is create a directory that contains your Python CDK sample. The CDK provides a CLI tool to make it easy to perform many CDK-related operations. You can see that you are running the init command with no parameters.

The CLI is responding with information about all the things that the init command can do. There are different types of apps that you can initialize and there are a number of different programming languages available. Choose sample-app and python, of course.

$ cdk init --language python sample-app
Applying project template sample-app for python
Initializing a new git repository...
Executing python -m venv .env
Welcome to your CDK Python project!

You should explore the contents of this template. It demonstrates a CDK app with two instances of a stack (`HelloStack`) which also uses a user-defined construct (`HelloConstruct`). 

The `cdk.json` file tells the CDK Toolkit how to execute your app.

This project is set up like a standard Python project. The initialization process also creates a virtualenv within this project, stored under the .env directory.

After the init process completes, you can use the following steps to get your project set up.

```
$ source .env/bin/activate
$ pip install -r requirements.txt
```

At this point you can now synthesize the CloudFormation template for this code.

```
$ cdk synth
```

You can now begin exploring the source code, contained in the hello directory. There is also a very trivial test included that can be run like this:

```
$ pytest
```

To add additional dependencies, for example other CDK libraries, just add to your requirements.txt file and rerun the pip install -r requirements.txt command.

Useful commands:

cdk ls          list all stacks in the app
cdk synth       emits the synthesized CloudFormation template
cdk deploy      deploy this stack to your default AWS account/region
cdk diff        compare deployed stack with current state
cdk docs        open CDK documentation

Enjoy!

So, what just happened? Quite a bit, actually. The CDK CLI created some Python source code for your sample application. It also created other support files and infrastructure to make it easy to get started with CDK in Python. Here’s what your directory contains now:

(.env) $ tree
.
├── README.md
├── app.py
├── cdk.json
├── hello
│   ├── __init__.py
│   ├── hello_construct.py
│   └── hello_stack.py
├── requirements.txt
├── setup.py
└── tests
    ├── __init__.py
    └── unit
        ├── __init__.py
        └── test_hello_construct.py

Take a closer look at the contents of your directory:

  • README.md—The introductory README for this project.
  • app.py—The “main” for this sample application.
  • cdk.json—A configuration file for CDK that defines what executable CDK should run to generate the CDK construct tree.
  • hello—A Python module directory.
    • hello_construct.py—A custom CDK construct defined for use in your CDK application.
    • hello_stack.py—A custom CDK stack construct for use in your CDK application.
  • requirements.txt—This file is used by pip to install all of the dependencies for your application. In this case, it contains only -e . This tells pip to install the requirements specified in setup.py. It also tells pip to run python setup.py develop to install the code in the hello module so that it can be edited in place.
  • setup.py—Defines how this Python package would be constructed and what the dependencies are.
  • tests—Contains all tests.
  • unit—Contains unit tests.
    • test_hello_construct.py—A trivial test of the custom CDK construct created in the hello package. This is mainly to demonstrate how tests can be hooked up to the project.

You may have also noticed that as the init command was running, it mentioned that it had created a virtualenv for the project as well. I don’t have time to go into virtualenvs in detail for this post. They are basically a great tool in the Python world for isolating your development environments from your system Python environment and from other development environments.

All dependencies are installed within this virtual environment and have no effect on anything else on your machine. When you are done with this example, you can just delete the entire directory and everything goes away.

You don’t have to use the virtualenv created here but I highly recommend that you do. Here’s how you would initialize your virtualenv and then install all of your dependencies.

$ source .env/bin/activate
(.env) $ pip install -r requirements.txt
...
(.env) $ pytest
============================= test session starts ==============================
platform darwin -- Python 3.7.0, pytest-4.4.0, py-1.8.0, pluggy-0.9.0
rootdir: /Users/garnaat/projects/cdkdev/my_sample
collected 1 item                                                              
tests/unit/test_hello_construct.py .                                     [100%]
=========================== 1 passed in 0.67 seconds ===========================

As you can see, you even have tests included, although they are admittedly simple at this point. It does give you a way to make sure your sample application and all of its dependencies are installed correctly.

Generate an AWS CloudFormation template

Okay, now that you know what’s here, try to generate an AWS CloudFormation template for the constructs that you are defining in your CDK app. You use the CDK Toolkit (the CLI) to do this.

$ cdk synth 
Multiple stacks selected (hello-cdk-1, hello-cdk-2), but output is directed to stdout. Either select one stack, or use --output to send templates to a directory. 
$

Hmm, that was unexpected. What does this mean? Well, as you will see in a minute, your CDK app actually defines two stacks, hello-cdk-1 and hello-cdk-2. The synth command can only synthesize one stack at a time. It is telling you about the two that it has found and asking you to choose one of them.

$ cdk synth hello-cdk-1
Resources:
  MyFirstQueueFF09316A:
    Type: AWS::SQS::Queue
    Properties:
      VisibilityTimeout: 300
    Metadata:
      aws:cdk:path: hello-cdk-1/MyFirstQueue/Resource
  MyFirstQueueMyFirstTopicSubscription774591B6:
    Type: AWS::SNS::Subscription
    Properties:
      Protocol: sqs
      TopicArn:
        Ref: MyFirstTopic0ED1F8A4
      Endpoint:
        Fn::GetAtt:
          - MyFirstQueueFF09316A
          - Arn
    Metadata:
      aws:cdk:path: hello-cdk-1/MyFirstQueue/MyFirstTopicSubscription/Resource
  MyFirstQueuePolicy596EEC78:
    Type: AWS::SQS::QueuePolicy
    Properties:
      PolicyDocument:
        Statement:
          - Action: sqs:SendMessage
            Condition:
              ArnEquals:
                aws:SourceArn:
                  Ref: MyFirstTopic0ED1F8A4
            Effect: Allow
            Principal:
              Service: sns.amazonaws.com
            Resource:
              Fn::GetAtt:
                - MyFirstQueueFF09316A
                - Arn
        Version: "2012-10-17"
      Queues:
        - Ref: MyFirstQueueFF09316A
    Metadata:
      aws:cdk:path: hello-cdk-1/MyFirstQueue/Policy/Resource
  MyFirstTopic0ED1F8A4:
    Type: AWS::SNS::Topic
    Properties:
      DisplayName: My First Topic
    Metadata:
      aws:cdk:path: hello-cdk-1/MyFirstTopic/Resource
  MyHelloConstructBucket0DAEC57E1:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Metadata:
      aws:cdk:path: hello-cdk-1/MyHelloConstruct/Bucket-0/Resource
  MyHelloConstructBucket18D9883BE:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Metadata:
      aws:cdk:path: hello-cdk-1/MyHelloConstruct/Bucket-1/Resource
  MyHelloConstructBucket2C1DA3656:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Metadata:
      aws:cdk:path: hello-cdk-1/MyHelloConstruct/Bucket-2/Resource
  MyHelloConstructBucket398A5DE67:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Metadata:
      aws:cdk:path: hello-cdk-1/MyHelloConstruct/Bucket-3/Resource
  MyUserDC45028B:
    Type: AWS::IAM::User
    Metadata:
      aws:cdk:path: hello-cdk-1/MyUser/Resource
  MyUserDefaultPolicy7B897426:
    Type: AWS::IAM::Policy
    Properties:
      PolicyDocument:
        Statement:
          - Action:
              - s3:GetObject*
              - s3:GetBucket*
              - s3:List*
            Effect: Allow
            Resource:
              - Fn::GetAtt:
                  - MyHelloConstructBucket0DAEC57E1
                  - Arn
              - Fn::Join:
                  - ""
                  - - Fn::GetAtt:
                        - MyHelloConstructBucket0DAEC57E1
                        - Arn
                    - /*
          - Action:
              - s3:GetObject*
              - s3:GetBucket*
              - s3:List*
            Effect: Allow
            Resource:
              - Fn::GetAtt:
                  - MyHelloConstructBucket18D9883BE
                  - Arn
              - Fn::Join:
                  - ""
                  - - Fn::GetAtt:
                        - MyHelloConstructBucket18D9883BE
                        - Arn
                    - /*
          - Action:
              - s3:GetObject*
              - s3:GetBucket*
              - s3:List*
            Effect: Allow
            Resource:
              - Fn::GetAtt:
                  - MyHelloConstructBucket2C1DA3656
                  - Arn
              - Fn::Join:
                  - ""
                  - - Fn::GetAtt:
                        - MyHelloConstructBucket2C1DA3656
                        - Arn
                    - /*
          - Action:
              - s3:GetObject*
              - s3:GetBucket*
              - s3:List*
            Effect: Allow
            Resource:
              - Fn::GetAtt:
                  - MyHelloConstructBucket398A5DE67
                  - Arn
              - Fn::Join:
                  - ""
                  - - Fn::GetAtt:
                        - MyHelloConstructBucket398A5DE67
                        - Arn
                    - /*
        Version: "2012-10-17"
      PolicyName: MyUserDefaultPolicy7B897426
      Users:
        - Ref: MyUserDC45028B
    Metadata:
      aws:cdk:path: hello-cdk-1/MyUser/DefaultPolicy/Resource
  CDKMetadata:
    Type: AWS::CDK::Metadata
    Properties:
      Modules: aws-cdk=0.27.0,@aws-cdk/assets=0.27.0,@aws-cdk/aws-autoscaling-api=0.27.0,@aws-cdk/aws-cloudwatch=0.27.0,@aws-cdk/aws-codepipeline-api=0.27.0,@aws-cdk/aws-ec2=0.27.0,@aws-cdk/aws-events=0.27.0,@aws-cdk/aws-iam=0.27.0,@aws-cdk/aws-kms=0.27.0,@aws-cdk/aws-lambda=0.27.0,@aws-cdk/aws-logs=0.27.0,@aws-cdk/aws-s3=0.27.0,@aws-cdk/aws-s3-notifications=0.27.0,@aws-cdk/aws-sns=0.27.0,@aws-cdk/aws-sqs=0.27.0,@aws-cdk/aws-stepfunctions=0.27.0,@aws-cdk/cdk=0.27.0,@aws-cdk/cx-api=0.27.0,@aws-cdk/region-info=0.27.0,jsii-runtime=Python/3.7.0

That’s a lot of YAML. 147 lines to be exact. If you take some time to study this, you can probably understand all of the AWS resources that are being created. You could probably even understand why they are being created. Rather than go through that in detail right now, focus instead on the Python code that makes up your CDK app. It’s a lot shorter and a lot easier to understand.

First, look at your “main,” app.py.

#!/usr/bin/env python3

from aws_cdk import cdk
from hello.hello_stack import MyStack

app = cdk.App()

MyStack(app, "hello-cdk-1", env={'region': 'us-east-2'})
MyStack(app, "hello-cdk-2", env={'region': 'us-west-2'})

app.run()

Well, that’s short and sweet. You are creating an App, adding two instances of some class called MyStack to the app, and then calling the run method of the App object.

Now find out what’s going on in the MyStack class.

from aws_cdk import (
    aws_iam as iam,
    aws_sqs as sqs,
    aws_sns as sns,
    cdk
)

from hello_construct import HelloConstruct

class MyStack(cdk.Stack):
    def __init__(self, app: cdk.App, id: str, **kwargs) -> None:
        super().__init__(app, id, **kwargs)

        queue = sqs.Queue(
            self, "MyFirstQueue",
            visibility_timeout_sec=300,
        )

        topic = sns.Topic(
            self, "MyFirstTopic",
            display_name="My First Topic"
        )

        topic.subscribe_queue(queue)

        hello = HelloConstruct(self, "MyHelloConstruct", num_buckets=4)
        user = iam.User(self, "MyUser")
        hello.grant_read(user)

This is a bit more interesting. This code is importing some CDK packages and then using those to create a few AWS resources.

First, you create an SQS queue called MyFirstQueue and set the visibility_timeout value for the queue. Then you create an SNS topic called MyFirstTopic.

The next line of code is interesting. You subscribe the SQS queue to the SNS topic, and it all happens in one simple and easy to understand line of code.

If you have ever done this with the SDKs or with the CLI, you know that there are several steps to this process. You have to create an IAM policy that grants the topic permission to send messages to the queue, you have to create a topic subscription, etc. You can see the details in the AWS CloudFormation stack generated earlier.

All of that gets simplified into a single, readable line of code. That’s an example of what CDK constructs can do to hide complexity in your infrastructure.

The final thing happening here is that you are creating an instance of a HelloConstruct class. Look at the code behind this.


from aws_cdk import (
     aws_iam as iam,
     aws_s3 as s3,
     cdk,
)

class HelloConstruct(cdk.Construct):

    @property
    def buckets(self):
        return tuple(self._buckets)

    def __init__(self, scope: cdk.Construct, id: str, num_buckets: int) -> None:
        super().__init__(scope, id)
        self._buckets = []
        for i in range(0, num_buckets):
            self._buckets.append(s3.Bucket(self, f"Bucket-{i}"))

    def grant_read(self, principal: iam.IPrincipal):
        for b in self.buckets:
            b.grant_read(principal, "*")

This code shows an example of creating your own custom constructs in CDK that define arbitrary AWS resources under the hood while exposing a simple API.

Here, your construct accepts an integer parameter num_buckets in the constructor and then creates that number of buckets inside the scope passed in. It also exposes a grant_read method that automatically grants the IAM principal passed in read permissions to all buckets associated with your construct.

Deploy the AWS CloudFormation templates

The whole point of CDK is to create AWS infrastructure and so far you haven’t done any of that. So now use your CDK program to generate the AWS CloudFormation templates. Then, deploy those templates to your AWS account and validate that the right resources got created.

$ cdk deploy
This deployment will make potentially sensitive changes according to your current security approval level (--require-approval broadening).
Please confirm you intend to make the following modifications:

IAM Statement Changes
┌───┬───────────────┬────────┬───────────────┬───────────────┬────────────────┐
│   │ Resource      │ Effect │ Action        │ Principal     │ Condition      │
├───┼───────────────┼────────┼───────────────┼───────────────┼────────────────┤
│ + │ ${MyFirstQueu │ Allow  │ sqs:SendMessa │ Service:sns.a │ "ArnEquals": { │
│   │ e.Arn}        │        │ ge            │ mazonaws.com  │   "aws:SourceA │
│   │               │        │               │               │ rn": "${MyFirs │
│   │               │        │               │               │ tTopic}"       │
│   │               │        │               │               │ }              │
├───┼───────────────┼────────┼───────────────┼───────────────┼────────────────┤
│ + │ ${MyHelloCons │ Allow  │ s3:GetBucket* │ AWS:${MyUser} │                │
│   │ truct/Bucket- │        │ s3:GetObject* │               │                │
│   │ 0.Arn}        │        │ s3:List*      │               │                │
│   │ ${MyHelloCons │        │               │               │                │
│   │ truct/Bucket- │        │               │               │                │
│   │ 0.Arn}/*      │        │               │               │                │
├───┼───────────────┼────────┼───────────────┼───────────────┼────────────────┤
│ + │ ${MyHelloCons │ Allow  │ s3:GetBucket* │ AWS:${MyUser} │                │
│   │ truct/Bucket- │        │ s3:GetObject* │               │                │
│   │ 1.Arn}        │        │ s3:List*      │               │                │
│   │ ${MyHelloCons │        │               │               │                │
│   │ truct/Bucket- │        │               │               │                │
│   │ 1.Arn}/*      │        │               │               │                │
├───┼───────────────┼────────┼───────────────┼───────────────┼────────────────┤
│ + │ ${MyHelloCons │ Allow  │ s3:GetBucket* │ AWS:${MyUser} │                │
│   │ truct/Bucket- │        │ s3:GetObject* │               │                │
│   │ 2.Arn}        │        │ s3:List*      │               │                │
│   │ ${MyHelloCons │        │               │               │                │
│   │ truct/Bucket- │        │               │               │                │
│   │ 2.Arn}/*      │        │               │               │                │
├───┼───────────────┼────────┼───────────────┼───────────────┼────────────────┤
│ + │ ${MyHelloCons │ Allow  │ s3:GetBucket* │ AWS:${MyUser} │                │
│   │ truct/Bucket- │        │ s3:GetObject* │               │                │
│   │ 3.Arn}        │        │ s3:List*      │               │                │
│   │ ${MyHelloCons │        │               │               │                │
│   │ truct/Bucket- │        │               │               │                │
│   │ 3.Arn}/*      │        │               │               │                │
└───┴───────────────┴────────┴───────────────┴───────────────┴────────────────┘
(NOTE: There may be security-related changes not in this list. See http://bit.ly/cdk-2EhF7Np)

Do you wish to deploy these changes (y/n)?

Here, the CDK is telling you about the security-related changes that this deployment includes. It shows you the resources or ARN patterns involved, the actions being granted, and the IAM principals to which the grants apply. You can review these and press y when ready. You then see status reported about the resources being created.

hello-cdk-1: deploying...
hello-cdk-1: creating CloudFormation changeset...
0/12 | 8:41:14 AM | CREATE_IN_PROGRESS | AWS::S3::Bucket | MyHelloConstruct/Bucket-0 (MyHelloConstructBucket0DAEC57E1)
0/12 | 8:41:14 AM | CREATE_IN_PROGRESS | AWS::IAM::User | MyUser (MyUserDC45028B)
0/12 | 8:41:14 AM | CREATE_IN_PROGRESS | AWS::IAM::User | MyUser (MyUserDC45028B) Resource creation Initiated
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::CDK::Metadata | CDKMetadata
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::S3::Bucket | MyHelloConstruct/Bucket-3 (MyHelloConstructBucket398A5DE67)
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::S3::Bucket | MyHelloConstruct/Bucket-1 (MyHelloConstructBucket18D9883BE)
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::S3::Bucket | MyHelloConstruct/Bucket-0 (MyHelloConstructBucket0DAEC57E1) Resource creation Initiated
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::SQS::Queue | MyFirstQueue (MyFirstQueueFF09316A)
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::S3::Bucket | MyHelloConstruct/Bucket-2 (MyHelloConstructBucket2C1DA3656)
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::SNS::Topic | MyFirstTopic (MyFirstTopic0ED1F8A4)
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::S3::Bucket | MyHelloConstruct/Bucket-3 (MyHelloConstructBucket398A5DE67) Resource creation Initiated
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::S3::Bucket | MyHelloConstruct/Bucket-1 (MyHelloConstructBucket18D9883BE) Resource creation Initiated
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::SQS::Queue | MyFirstQueue (MyFirstQueueFF09316A) Resource creation Initiated
0/12 | 8:41:16 AM | CREATE_IN_PROGRESS | AWS::SNS::Topic | MyFirstTopic (MyFirstTopic0ED1F8A4) Resource creation Initiated
0/12 | 8:41:16 AM | CREATE_IN_PROGRESS | AWS::S3::Bucket | MyHelloConstruct/Bucket-2 (MyHelloConstructBucket2C1DA3656) Resource creation Initiated
1/12 | 8:41:16 AM | CREATE_COMPLETE | AWS::SQS::Queue | MyFirstQueue (MyFirstQueueFF09316A)
1/12 | 8:41:17 AM | CREATE_IN_PROGRESS | AWS::CDK::Metadata | CDKMetadata Resource creation Initiated
2/12 | 8:41:17 AM | CREATE_COMPLETE | AWS::CDK::Metadata | CDKMetadata
3/12 | 8:41:26 AM | CREATE_COMPLETE | AWS::SNS::Topic | MyFirstTopic (MyFirstTopic0ED1F8A4)
3/12 | 8:41:28 AM | CREATE_IN_PROGRESS | AWS::SNS::Subscription | MyFirstQueue/MyFirstTopicSubscription (MyFirstQueueMyFirstTopicSubscription774591B6)
3/12 | 8:41:29 AM | CREATE_IN_PROGRESS | AWS::SQS::QueuePolicy | MyFirstQueue/Policy (MyFirstQueuePolicy596EEC78)
3/12 | 8:41:29 AM | CREATE_IN_PROGRESS | AWS::SNS::Subscription | MyFirstQueue/MyFirstTopicSubscription (MyFirstQueueMyFirstTopicSubscription774591B6) Resource creation Initiated
4/12 | 8:41:30 AM | CREATE_COMPLETE | AWS::SNS::Subscription | MyFirstQueue/MyFirstTopicSubscription (MyFirstQueueMyFirstTopicSubscription774591B6)
4/12 | 8:41:30 AM | CREATE_IN_PROGRESS | AWS::SQS::QueuePolicy | MyFirstQueue/Policy (MyFirstQueuePolicy596EEC78) Resource creation Initiated
5/12 | 8:41:30 AM | CREATE_COMPLETE | AWS::SQS::QueuePolicy | MyFirstQueue/Policy (MyFirstQueuePolicy596EEC78)
6/12 | 8:41:35 AM | CREATE_COMPLETE | AWS::S3::Bucket | MyHelloConstruct/Bucket-0 (MyHelloConstructBucket0DAEC57E1)
7/12 | 8:41:36 AM | CREATE_COMPLETE | AWS::S3::Bucket | MyHelloConstruct/Bucket-3 (MyHelloConstructBucket398A5DE67)
8/12 | 8:41:36 AM | CREATE_COMPLETE | AWS::S3::Bucket | MyHelloConstruct/Bucket-1 (MyHelloConstructBucket18D9883BE)
9/12 | 8:41:36 AM | CREATE_COMPLETE | AWS::S3::Bucket | MyHelloConstruct/Bucket-2 (MyHelloConstructBucket2C1DA3656)
10/12 | 8:41:50 AM | CREATE_COMPLETE | AWS::IAM::User | MyUser (MyUserDC45028B)
10/12 | 8:41:53 AM | CREATE_IN_PROGRESS | AWS::IAM::Policy | MyUser/DefaultPolicy (MyUserDefaultPolicy7B897426)
10/12 | 8:41:53 AM | CREATE_IN_PROGRESS | AWS::IAM::Policy | MyUser/DefaultPolicy (MyUserDefaultPolicy7B897426) Resource creation Initiated
11/12 | 8:42:02 AM | CREATE_COMPLETE | AWS::IAM::Policy | MyUser/DefaultPolicy (MyUserDefaultPolicy7B897426)
12/12 | 8:42:03 AM | CREATE_COMPLETE | AWS::CloudFormation::Stack | hello-cdk-1

✅ hello-cdk-1

Stack ARN:
arn:aws:cloudformation:us-east-2:433781611764:stack/hello-cdk-1/87482f50-6c27-11e9-87d0-026465bb0bfc

At this point, the CLI presents you with another summary of IAM changes and asks you to confirm. This is because your CDK sample application creates two stacks in two different AWS Regions. Approve the changes for the second stack and you see similar status output.

Clean up

Now you can use the AWS Management Console to look at the resources that were created and validate that it all makes sense. After you are finished, you can easily destroy all of these resources with a single command.

$ cdk destroy
Are you sure you want to delete: hello-cdk-2, hello-cdk-1 (y/n)? y

hello-cdk-2: destroying...
   0 | 8:48:31 AM | DELETE_IN_PROGRESS   | AWS::CloudFormation::Stack | hello-cdk-2 User Initiated
   0 | 8:48:33 AM | DELETE_IN_PROGRESS   | AWS::CDK::Metadata     | CDKMetadata 
   0 | 8:48:33 AM | DELETE_IN_PROGRESS   | AWS::IAM::Policy       | MyUser/DefaultPolicy (MyUserDefaultPolicy7B897426) 
   0 | 8:48:33 AM | DELETE_IN_PROGRESS   | AWS::SNS::Subscription | MyFirstQueue/MyFirstTopicSubscription (MyFirstQueueMyFirstTopicSubscription774591B6) 
   0 | 8:48:33 AM | DELETE_IN_PROGRESS   | AWS::SQS::QueuePolicy  | MyFirstQueue/Policy (MyFirstQueuePolicy596EEC78) 
   1 | 8:48:34 AM | DELETE_COMPLETE      | AWS::SQS::QueuePolicy  | MyFirstQueue/Policy (MyFirstQueuePolicy596EEC78) 
   2 | 8:48:34 AM | DELETE_COMPLETE      | AWS::SNS::Subscription | MyFirstQueue/MyFirstTopicSubscription (MyFirstQueueMyFirstTopicSubscription774591B6) 
   3 | 8:48:34 AM | DELETE_COMPLETE      | AWS::IAM::Policy       | MyUser/DefaultPolicy (MyUserDefaultPolicy7B897426) 
   4 | 8:48:35 AM | DELETE_COMPLETE      | AWS::CDK::Metadata     | CDKMetadata 
   4 | 8:48:35 AM | DELETE_IN_PROGRESS   | AWS::IAM::User         | MyUser (MyUserDC45028B) 
   4 | 8:48:36 AM | DELETE_IN_PROGRESS   | AWS::SNS::Topic        | MyFirstTopic (MyFirstTopic0ED1F8A4)
   4 | 8:48:36 AM | DELETE_SKIPPED       | AWS::S3::Bucket        | MyHelloConstruct/Bucket-0 (MyHelloConstructBucket0DAEC57E1) 
   4 | 8:48:36 AM | DELETE_SKIPPED       | AWS::S3::Bucket        | MyHelloConstruct/Bucket-2 (MyHelloConstructBucket2C1DA3656) 
   4 | 8:48:36 AM | DELETE_SKIPPED       | AWS::S3::Bucket        | MyHelloConstruct/Bucket-1 (MyHelloConstructBucket18D9883BE) 
   4 | 8:48:36 AM | DELETE_IN_PROGRESS   | AWS::SQS::Queue        | MyFirstQueue (MyFirstQueueFF09316A) 
   4 | 8:48:36 AM | DELETE_SKIPPED       | AWS::S3::Bucket        | MyHelloConstruct/Bucket-3 (MyHelloConstructBucket398A5DE67) 
   5 | 8:48:36 AM | DELETE_COMPLETE      | AWS::SNS::Topic        | MyFirstTopic (MyFirstTopic0ED1F8A4) 
   6 | 8:48:36 AM | DELETE_COMPLETE      | AWS::IAM::User         | MyUser (MyUserDC45028B) 
 6 Currently in progress: hello-cdk-2, MyFirstQueueFF09316A

 ✅  hello-cdk-2: destroyed
hello-cdk-1: destroying...
   0 | 8:49:38 AM | DELETE_IN_PROGRESS   | AWS::CloudFormation::Stack | hello-cdk-1 User Initiated
   0 | 8:49:40 AM | DELETE_IN_PROGRESS   | AWS::CDK::Metadata     | CDKMetadata 
   0 | 8:49:40 AM | DELETE_IN_PROGRESS   | AWS::IAM::Policy       | MyUser/DefaultPolicy (MyUserDefaultPolicy7B897426) 
   0 | 8:49:40 AM | DELETE_IN_PROGRESS   | AWS::SQS::QueuePolicy  | MyFirstQueue/Policy (MyFirstQueuePolicy596EEC78) 
   0 | 8:49:40 AM | DELETE_IN_PROGRESS   | AWS::SNS::Subscription | MyFirstQueue/MyFirstTopicSubscription (MyFirstQueueMyFirstTopicSubscription774591B6) 
   1 | 8:49:41 AM | DELETE_COMPLETE      | AWS::IAM::Policy       | MyUser/DefaultPolicy (MyUserDefaultPolicy7B897426) 
   2 | 8:49:41 AM | DELETE_COMPLETE      | AWS::SQS::QueuePolicy  | MyFirstQueue/Policy (MyFirstQueuePolicy596EEC78) 
   3 | 8:49:41 AM | DELETE_COMPLETE      | AWS::SNS::Subscription | MyFirstQueue/MyFirstTopicSubscription (MyFirstQueueMyFirstTopicSubscription774591B6) 
   4 | 8:49:42 AM | DELETE_COMPLETE      | AWS::CDK::Metadata     | CDKMetadata 
   4 | 8:49:42 AM | DELETE_IN_PROGRESS   | AWS::IAM::User         | MyUser (MyUserDC45028B) 
   4 | 8:49:42 AM | DELETE_SKIPPED       | AWS::S3::Bucket        | MyHelloConstruct/Bucket-2 (MyHelloConstructBucket2C1DA3656) 
   4 | 8:49:42 AM | DELETE_SKIPPED       | AWS::S3::Bucket        | MyHelloConstruct/Bucket-3 (MyHelloConstructBucket398A5DE67) 
   4 | 8:49:42 AM | DELETE_SKIPPED       | AWS::S3::Bucket        | MyHelloConstruct/Bucket-0 (MyHelloConstructBucket0DAEC57E1) 
   4 | 8:49:42 AM | DELETE_IN_PROGRESS   | AWS::SNS::Topic        | MyFirstTopic (MyFirstTopic0ED1F8A4) 
   4 | 8:49:42 AM | DELETE_SKIPPED       | AWS::S3::Bucket        | MyHelloConstruct/Bucket-1 (MyHelloConstructBucket18D9883BE) 
   5 | 8:49:42 AM | DELETE_COMPLETE      | AWS::IAM::User         | MyUser (MyUserDC45028B) 
   5 | 8:49:42 AM | DELETE_IN_PROGRESS   | AWS::SQS::Queue        | MyFirstQueue (MyFirstQueueFF09316A) 
   6 | 8:49:43 AM | DELETE_COMPLETE      | AWS::SNS::Topic        | MyFirstTopic (MyFirstTopic0ED1F8A4) 
 6 Currently in progress: hello-cdk-1, MyFirstQueueFF09316A
   7 | 8:50:43 AM | DELETE_COMPLETE      | AWS::SQS::Queue        | MyFirstQueue (MyFirstQueueFF09316A)

 ✅  hello-cdk-1: destroyed

 

Conclusion

In this post, I introduced you to the AWS Cloud Development Kit. You saw how it enables you to define your AWS infrastructure in modern programming languages like TypeScript, Java, C#, and now Python. I showed you how to use the CDK CLI to initialize a new sample application in Python, and walked you through the project structure. I taught you how to use the CDK to synthesize your Python code into AWS CloudFormation templates and deploy them through AWS CloudFormation to provision AWS infrastructure. Finally, I showed you how to clean up these resources when you’re done.

Now it’s your turn. Go build something amazing with the AWS CDK for Python!

The CDK and the Python language binding are currently in developer preview, so I’d love to get feedback on what you like, and where AWS can do better. The team lives on GitHub at https://github.com/awslabs/aws-cdk where it’s easy to get directly in touch with the engineers building the CDK. Raise an issue if you discover a bug or want to make a feature request. Join the conversation on the aws-cdk Gitter channel to ask questions.


from AWS Developer Blog https://aws.amazon.com/blogs/developer/getting-started-with-the-aws-cloud-development-kit-and-python/

Node.js 6 is approaching End-of-Life – upgrade your AWS Lambda functions to the Node.js 10 LTS

Node.js 6 is approaching End-of-Life – upgrade your AWS Lambda functions to the Node.js 10 LTS

This blog was authored by Liz Parody, Developer Relations Manager at NodeSource.


Node.js 6.x (“Boron”), which has been maintained as a long-term stable (LTS) release line since fall of 2016, is reaching its scheduled end-of-life (EOL) on April 30, 2019. After the maintenance period ends, Node.js 6 will no longer be included in Node.js releases. This includes releases that address critical bugs, security fixes, patches, or other important updates.


Recently, AWS has been reminding users to upgrade AWS Lambda functions built on the Node.js 6 runtime to a newer version. This is because language runtimes that have reached EOL are unsupported in Lambda.

Requests for feature additions to this release line aren’t accepted. Continued use of the Node.js 6 runtime after April 30, 2019 increases your exposure to various risks, including the following:

  • Security vulnerabilities – Node.js contributors are constantly working to fix security flaws of all severity levels (low, moderate, and high). In the February 2019 Security Release, all actively maintained Node.js release lines were patched, including “Boron”. After April 30, security releases will no longer be applied to Node.js 6, increasing the potential for malicious attacks.
  • Software incompatibility – Newer versions of Node.js better support current best practices and newer design patterns. For example, the popular async/await pattern for working with promises was first introduced in the Node.js 8 (“Carbon”) release line, so “Boron” users can’t take advantage of it (see the short sketch after this list). If you don’t upgrade to a newer release line, you miss out on features and improvements that enable you to write better, more performant applications.
  • Compliance issues – This risk applies most to teams in highly regulated industries such as healthcare, finance, or ecommerce. It also applies to those who deal with sensitive data such as personally identifiable information (PII). Exposing these types of data to unnecessary risk can result in severe consequences, ranging from extended legal battles to hefty fines.
  • Poor performance and reliability – The Node.js 10 (“Dubnium”) runtime is significantly faster than Node.js 6, with the capacity to perform twice as many operations per second. Lambda is an especially popular choice for applications that must deliver low latency and high performance. Upgrading to a newer version of the Node.js runtime is a relatively painless way to improve the performance of your application.
  • Higher operating costs – The performance benefits of the Node.js 10 runtime compared to Node.js 6 can directly translate to reduced operational costs. Aside from missing the day-to-day savings, running an unmaintained version of the Node.js runtime also significantly increases the likelihood of unexpected costs associated with an outage or critical issue.
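To make the software-incompatibility risk concrete, here’s a small sketch of the async/await pattern mentioned above. It runs on Node.js 8 (“Carbon”) and later, but is a syntax error on Node.js 6; the file name and delay value are arbitrary choices for illustration:

// async-demo.js - valid on Node.js 8+; a SyntaxError on Node.js 6 ("Boron").
const { promisify } = require('util'); // util.promisify also arrived in Node.js 8
const delay = promisify(setTimeout);

async function main() {
  console.log('waiting...');
  await delay(500); // pause 500 ms without blocking the event loop
  console.log('done');
}

main().catch(console.error);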

Key differences between Node.js 6 and Node.js 10

Metrics provided by the Node.js Benchmarking working group highlight the performance benefits of upgrading from Node.js 6 to the most recent LTS release line, Node.js 10:

  • Operations per second are nearly two times higher in Node.js 10 versus Node.js 6.
  • Latency has decreased by 65% in Node.js 10 versus Node.js 6.
  • The footprint after load is 35% lower in Node.js 10 versus Node.js 6, resulting in improved performance in the event of a cold start.

While benchmarks don’t always reflect real-world results, the trend is clear that performance is increasing in each new Node.js release. [Data Source]

The most recent LTS release line is Node.js 10 (“Dubnium”). This release line features several enhancements and improvements over earlier versions, including the following:

  • Node.js 10 is the first release line to upgrade to OpenSSL version 1.1.0.
  • Native support for HTTP/2, first added in the Node.js 8 LTS release line, was stabilized in Node.js 10 (a minimal server example follows this list). It offers major performance improvements over HTTP/1, including reduced latency and minimized protocol overhead, and adds support for request prioritization and server push.
  • Node.js 10 also updates JavaScript language capabilities, such as the behavior of Function.prototype.toString(), and ships mitigations for side-channel vulnerabilities to help prevent information leaks.
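As a rough illustration of that stabilized HTTP/2 support, here’s a minimal sketch of a cleartext (h2c) server built on the core http2 module; the port and response body are arbitrary, not anything prescribed by the release notes:

// http2-hello.js - a minimal cleartext HTTP/2 server on the core http2 module.
const http2 = require('http2');

const server = http2.createServer();
server.on('stream', (stream, headers) => {
  // Answer every request stream with a plain-text greeting.
  stream.respond({ ':status': 200, 'content-type': 'text/plain' });
  stream.end('hello over HTTP/2\n');
});

server.listen(8080);

You can exercise it with curl --http2-prior-knowledge http://localhost:8080/ (browsers, by contrast, only speak HTTP/2 over TLS).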

“While there are a handful of new features, the standout changes in Node.js 10.0.0 are improvements to error handling and diagnostics that will improve the overall developer experience,” says James Snell, a member of the Node.js Technical Steering Committee (TSC). [Quote source]

Upgrade using the N|Solid Lambda layer

AWS doesn’t currently offer the Node.js 10 runtime in Lambda. However, you may want to test the Node.js 10 runtime version in a development or staging environment before rolling out updates to production Lambda functions.

Before AWS adds the Node.js 10 runtime version for Lambda, NodeSource’s N|Solid runtime is available for use as a Lambda layer. It includes a runtime that is 100% compatible with the Node.js 10 LTS release line.

If you install N|Solid as a Lambda layer, you can begin migration and testing before the Node.js 6 EOL date. You can also easily switch to the Node.js 10 runtime provided by AWS when it’s available. Choose between versions based on the Node.js 8 (“Carbon”) and 10 (“Dubnium”) LTS release lines. It takes just a few minutes to get up and running.

First, when you’re creating a function, choose Use custom runtime in function code or layer. (If you’re migrating an existing function, you can change the runtime for the function.)


Next, add a new Lambda layer, and choose Provide a layer version ARN. You can find the latest ARN for the N|Solid Lambda layer here. Enter the N|Solid runtime ARN for your AWS Region and Node.js version (Node.js 8 “Carbon” or Node.js 10 “Dubnium”). This is where you can use Node.js 10.


That’s it! Your Lambda function is now set up to use Node.js 10.
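If you’d like a quick smoke test for the new runtime, a minimal handler such as the following (a sketch, not part of the official setup) reports the Node.js version the function actually runs on:

// handler.js - returns the runtime's Node.js version (expect v10.x with the layer).
exports.handler = async (event) => {
  return {
    statusCode: 200,
    body: JSON.stringify({ nodeVersion: process.version }),
  };
};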

You can also update your functions to use the N|Solid Lambda layer with the AWS CLI.

To update an existing function:

$ aws lambda update-function-configuration --function-name <YOUR_FUNCTION_NAME> --layers arn:aws:lambda:<AWS_REGION>:800406105498:layer:nsolid-node-10:6 --runtime provided
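After the update, you can confirm that the change took effect; this standard AWS CLI call prints the function’s runtime and attached layers:

$ aws lambda get-function-configuration --function-name <YOUR_FUNCTION_NAME>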

In addition to the Node.js 10 runtime, the Lambda layer provided by NodeSource includes N|Solid. N|Solid for AWS Lambda provides low-impact performance monitoring for Lambda functions. To take advantage of this feature, you can also sign up for a free NodeSource account. After you sign up, you just need to set your N|Solid license key as an environment variable in your Lambda function.
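If you prefer to script that step, the standard AWS CLI environment flag works; note that the variable name NSOLID_LICENSE_KEY below is an assumption for illustration, so confirm the exact name in the N|Solid documentation:

$ # NSOLID_LICENSE_KEY is assumed here; verify the variable name in the N|Solid docs
$ aws lambda update-function-configuration --function-name <YOUR_FUNCTION_NAME> \
    --environment "Variables={NSOLID_LICENSE_KEY=<YOUR_LICENSE_KEY>}"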

That’s all you have to do to start monitoring your Node.js Lambda functions. After you add your license key, your Lambda function invocations should show up on the Functions tab of your N|Solid dashboard.

For more information, see our N|Solid for AWS Lambda getting started guide.

Upgrade to Node.js 10 LTS (“Dubnium”) outside of Lambda

Lambda workloads aren’t the only ones affected; you should also consider any other environment where you’re running Node.js 6. Below, I review two more ways to upgrade your version of Node.js in other compute environments.

Use NVM

One of the best practices for upgrading Node.js versions is using NVM. NVM, or Node Version Manager, lets you manage multiple active Node.js versions.

To install NVM on *nix systems, you can run the install script with cURL:

$ curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.34.0/install.sh | bash

or Wget:

$ wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.34.0/install.sh | bash

For Windows-based systems, you can use NVM for Windows.

After NVM is installed, you can manage your versions of Node.js with a few simple commands.

To download, compile, and install the latest release of Node.js:

$ nvm install node # "node" is an alias for the latest version

To install a specific version of Node.js:

$ nvm install 10.10.0 # or 8.5.0, 8.9.1, etc.
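Once a version is installed, switching between versions and pinning a default for new shells is just as simple:

$ nvm use 10.10.0            # switch the current shell to Node.js 10.10.0
$ nvm alias default 10.10.0  # make it the default for new shells
$ node --version             # verify: prints v10.10.0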

Upgrade manually

To upgrade Node.js without a tool like NVM, you can manually install a new version. NodeSource provides Linux distributions for Node.js, and recommends that you upgrade using the NodeSource Node.js Binary Distributions.

To install Node.js 10:

Using Ubuntu

$ curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash - 
$ sudo apt-get install -y nodejs

Using Amazon Linux

$ curl -sL https://rpm.nodesource.com/setup_10.x | sudo bash -
$ sudo yum install -y nodejs
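Either way, you can verify the result by checking the versions now on your PATH:

$ node --version   # should print a v10.x release
$ npm --version    # npm ships bundled with the Node.js packages above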

Most production applications built on Node.js make use of LTS release lines. We highly recommend that you upgrade any application or Lambda function currently using the Node.js 6 runtime version to Node.js 10, the newest LTS version.

To hear more about the latest release line, check out NodeSource’s webinar, New and Exciting Features Landing in Node.js 12. This release line officially becomes the current LTS version in October 2019.

About the Author

Liz is a self-taught software engineer focused on JavaScript, and Developer Relations Manager at NodeSource. She organizes community events such as JSConf Colombia, Pioneras Developers, and Startup Weekend, and has been a speaker at EmpireJS, MedellinJS, PionerasDev, and GDG.

She loves sharing knowledge, promoting the JavaScript and Node.js ecosystem, and participating in key tech events and conferences to grow her knowledge and network.

Disclaimer
The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.


from AWS Developer Blog https://aws.amazon.com/blogs/developer/node-js-6-is-approaching-end-of-life-upgrade-your-aws-lambda-functions-to-the-node-js-10-lts/