Referencing the AWS SDK for .NET Standard 2.0 from Unity, Xamarin, or UWP

In March 2019, AWS announced support for .NET Standard 2.0 in the AWS SDK for .NET. AWS also announced plans to remove the Portable Class Library (PCL) assemblies from the NuGet packages in favor of the .NET Standard 2.0 binaries.

If you’re starting a new project targeting a platform supported by .NET Standard 2.0, especially recent versions of Unity, Xamarin, and UWP, you may want to use the .NET Standard 2.0 assemblies of the AWS SDK instead of the PCL assemblies.

Currently, it’s challenging to consume the .NET Standard 2.0 assemblies from the NuGet packages directly in your Unity, Xamarin, or UWP applications. Unfortunately, the new csproj file format and NuGet don’t let you select assemblies for a specific target framework (in this case, .NET Standard 2.0). This limitation can cause problems because NuGet always restores the assemblies for the target framework of the project being built (in this case, one of the legacy PCL assemblies).

Considering this limitation, our guidance is for your application to directly reference the AWS SDK assemblies (DLL files) instead of the NuGet packages.

  1. Go to the NuGet page for the specific package (for example, AWSSDK.Core) and choose Download Package.
  2. Rename the downloaded .nupkg file with a .zip extension.
  3. Open it to extract the assemblies for a specific target framework (for example /lib/netstandard2.0/AWSSDK.Core.dll).

When using Unity (2018.1 or later), choose .NET 4.x Equivalent as the Scripting Runtime Version and copy the AWS SDK for .NET assemblies into the Assets folder.

Because this process can be time-consuming and error-prone, you should use a script to perform the download and extraction, especially if your project references multiple AWS services. The following PowerShell script downloads and extracts all the latest SDK .dll files into the current folder:

<#
.Synopsis
    Downloads all assemblies of the AWS SDK for .NET for a specific target framework.
.DESCRIPTION
    Downloads all assemblies of the AWS SDK for .NET for a specific target framework.
    This script allows specifying a version of the SDK to download or a target framework.

.NOTES
    This script downloads all files to the current folder (the folder returned by Get-Location).
    This script depends on GitHub to retrieve the list of assemblies to download and on NuGet
    to retrieve the relevant packages.

.EXAMPLE
   ./DownloadSDK.ps1

   Downloads the latest AWS SDK for .NET assemblies for .NET Standard 2.0.

.EXAMPLE
    ./DownloadSDK.ps1 -TargetFramework net35

    Downloads the latest AWS SDK for .NET assemblies for .NET Framework 3.5.
    
.EXAMPLE
    ./DownloadSDK.ps1 -SDKVersion 3.3.0.0

    Downloads the AWS SDK for .NET version 3.3.0.0 assemblies for .NET Standard 2.0.

.PARAMETER TargetFramework
    The name of the target framework for which to download the AWS SDK for .NET assemblies. It must be a valid Target Framework Moniker, as described in https://docs.microsoft.com/en-us/dotnet/standard/frameworks.

.PARAMETER SDKVersion
    The AWS SDK for .NET version to download. This must be in the full four-number format (e.g., "3.3.0.0") and it must correspond to a tag on the https://github.com/aws/aws-sdk-net/ repository.
#>

Param (
    [Parameter()]
    [ValidateNotNullOrEmpty()]
    [string]$TargetFramework = 'netstandard2.0',
    [Parameter()]
    [ValidateNotNullOrEmpty()]
    [string]$SDKVersion = 'master'
)

function DownloadPackageAndExtractDll
{
    Param (
        [Parameter(Mandatory = $true)]
        [string] $name,
        [Parameter(Mandatory = $true)]
        [string] $version
    )

    Write-Progress -Activity "Downloading $name"

    $packageUri = "https://www.nuget.org/api/v2/package/$name/$version"
    $filePath = [System.IO.Path]::GetTempFileName()
    $WebClient.DownloadFile($packageUri, $filePath)

    #Invoke-WebRequest $packageUri -OutFile $filePath
    try {
        $zipArchive = [System.IO.Compression.ZipFile]::OpenRead($filePath)
        $entry = $zipArchive.GetEntry("lib/$TargetFramework/$name.dll")
        if ($null -ne $entry)
        {
            $entryStream = $entry.Open()
            $dllPath = Get-Location | Join-Path -ChildPath "./$name.dll"
            $dllFileStream = [System.IO.File]::OpenWrite($dllPath)
            $entryStream.CopyTo($dllFileStream)
            $dllFileStream.Close();
        }
    }
    finally {
        if ($null -ne $dllFileStream)
        {
            $dllFileStream.Dispose()
        }
        if ($null -ne $entryStream)
        {
            $entryStream.Dispose()
        }
        if ($null -ne $zipArchive)
        {
            $zipArchive.Dispose()
        }
        Remove-Item $filePath
    }
}

try {
    $WebClient = New-Object System.Net.Webclient
    Add-Type -AssemblyName System.IO.Compression.FileSystem

    $sdkVersionsUri = "https://raw.githubusercontent.com/aws/aws-sdk-net/$SDKVersion/generator/ServiceModels/_sdk-versions.json"
    $versions = Invoke-WebRequest $sdkVersionsUri | ConvertFrom-Json
    DownloadPackageAndExtractDll "AWSSDK.Core" $versions.CoreVersion
    foreach ($service in $versions.ServiceVersions.psobject.Properties)
    {
        DownloadPackageAndExtractDll "AWSSDK.$($service.Name)" $service.Value.Version
    }    
}
finally {
    if ($null -ne $WebClient)
    {
        $WebClient.Dispose()
    } 
}

At this time, not all features specific to the PCL and Unity SDK libraries have been ported over to .NET Standard 2.0. To suggest features or changes, or to leave other feedback that would make PCL and Unity development easier, open an issue on our aws-sdk-net-issues GitHub repo.

This workaround will only be needed until PCL assemblies are removed from the NuGet packages. At that time, restoring the NuGet packages from an iOS, Android or UWP project (either a Xamarin or Unity project) should result in the .NET Standard 2.0 assemblies being referenced and included in your build outputs.

from AWS Developer Blog https://aws.amazon.com/blogs/developer/referencing-the-aws-sdk-for-net-standard-2-0-from-unity-xamarin-or-uwp/

Getting started with the AWS Cloud Development Kit and Python

This post introduces you to the new Python bindings for the AWS Cloud Development Kit (AWS CDK).

What’s the AWS CDK, you might ask? Good question! You are probably familiar with the concept of infrastructure as code (IaC). When you think of IaC, you might think of things like AWS CloudFormation.

AWS CloudFormation allows you to define your AWS infrastructure in JSON or YAML files that can be managed within your source code repository, just like any other code. You can do pull requests and code reviews. When everything looks good, you can use these files as input into an automated process (CI/CD) that deploys your infrastructure changes.

The CDK actually builds on AWS CloudFormation and uses it as the engine for provisioning AWS resources. Rather than using a declarative language like JSON or YAML to define your infrastructure, the CDK lets you do that in your favorite imperative programming language. This includes languages such as TypeScript, Java, C#, and now Python.

About this post

Time to read: 19 minutes
Time to complete (estimated): 30 minutes
Cost to complete: $0 with the free tier (a tiny fraction of a penny if you aren’t on the free tier)
Learning level: Intermediate (200)
Services used: AWS CDK, AWS CloudFormation

Why would an imperative language be better than a declarative language? Well, it may not always be, but there are some real advantages: IDE integration and composition.

IDE integration

You probably have your favorite IDE for your favorite programming language. It provides all kinds of useful features that make you a more productive developer (for example, code completion, integrated documentation, or refactoring tools).

With CDK, you automatically get all of those same advantages when defining your AWS infrastructure. That’s because you’re doing it in the same language that you use for your application code.

Composition

One of the things that modern programming languages do well is composition. By that, I mean the creation of new, higher-level abstractions that hide the details of what is happening underneath and expose a much simpler API. This is one of the main things that we do as developers, creating higher levels of abstraction to simplify code.

It turns out that this is also useful when defining your infrastructure. The existing APIs to AWS services are, by design, fairly low level because they are trying to expose as much functionality as possible to a broad audience of developers. IaC tools like AWS CloudFormation expose a declarative interface, but that interface is at the same level as the API, so it’s equally complex.

In contrast, CDK allows you to compose new abstractions that hide details and simplify common use cases. Then, it packages that code up as a library in your language of choice so that others can easily take advantage.

One of the other neat things about the CDK is that it is designed to support multiple programming languages. The core of the system is written in TypeScript, but bindings for other languages can be added.

That brings me back to the topic of this post, the Python bindings for CDK.

Sample Python application

First, there is some installation that must happen. Rather than describe all of that here, see Getting Started with the AWS CDK.

Create the application

Now, create a sample application.

$ mkdir my_python_sample
$ cd my_python_sample
$ cdk init
Available templates:
* app: Template for a CDK Application
└─ cdk init app --language=[csharp|fsharp|java|python|typescript]
* lib: Template for a CDK Construct Library
└─ cdk init lib --language=typescript
* sample-app: Example CDK Application with some constructs
└─ cdk init sample-app --language=[python|typescript]

The first thing you do is create a directory that contains your Python CDK sample. The CDK provides a CLI tool to make it easy to perform many CDK-related operations. You can see that you are running the init command with no parameters.

The CLI is responding with information about all the things that the init command can do. There are different types of apps that you can initialize and there are a number of different programming languages available. Choose sample-app and python, of course.

$ cdk init --language python sample-app
Applying project template sample-app for python
Initializing a new git repository...
Executing python -m venv .env
Welcome to your CDK Python project!

You should explore the contents of this template. It demonstrates a CDK app with two instances of a stack (`HelloStack`) which also uses a user-defined construct (`HelloConstruct`). 

The `cdk.json` file tells the CDK Toolkit how to execute your app.

This project is set up like a standard Python project. The initialization process also creates a virtualenv within this project, stored under the .env directory.

After the init process completes, you can use the following steps to get your project set up.

$ source .env/bin/activate
$ pip install -r requirements.txt

At this point you can now synthesize the CloudFormation template for this code.

$ cdk synth

You can now begin exploring the source code, contained in the hello directory. There is also a very trivial test included that can be run like this:

$ pytest

To add additional dependencies, for example other CDK libraries, just add to your requirements.txt file and rerun the pip install -r requirements.txt command.

Useful commands:

cdk ls          list all stacks in the app
cdk synth       emits the synthesized CloudFormation template
cdk deploy      deploy this stack to your default AWS account/region
cdk diff        compare deployed stack with current state
cdk docs        open CDK documentation

Enjoy!

So, what just happened? Quite a bit, actually. The CDK CLI created some Python source code for your sample application. It also created other support files and infrastructure to make it easy to get started with CDK in Python. Here’s what your directory contains now:

(.env) $ tree
.
├── README.md
├── app.py
├── cdk.json
├── hello
│   ├── __init__.py
│   ├── hello_construct.py
│   └── hello_stack.py
├── requirements.txt
├── setup.py
└── tests
    ├── __init__.py
    └── unit
        ├── __init__.py
        └── test_hello_construct.py

Take a closer look at the contents of your directory:

  • README.md—The introductory README for this project.
  • app.py—The “main” for this sample application.
  • cdk.json—A configuration file for CDK that defines what executable CDK should run to generate the CDK construct tree.
  • hello—A Python module directory.
    • hello_construct.py—A custom CDK construct defined for use in your CDK application.
    • hello_stack.py—A custom CDK stack construct for use in your CDK application.
  • requirements.txt—This file is used by pip to install all of the dependencies for your application. In this case, it contains only -e . This tells pip to install the requirements specified in setup.py. It also tells pip to run python setup.py develop to install the code in the hello module so that it can be edited in place.
  • setup.py—Defines how this Python package is constructed and what its dependencies are (a trimmed, illustrative sketch appears after this list).
  • tests—Contains all tests.
    • unit—Contains unit tests.
      • test_hello_construct.py—A trivial test of the custom CDK construct created in the hello package. This is mainly to demonstrate how tests can be hooked up to the project.
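
For reference, here is a trimmed, illustrative sketch of what a generated setup.py of this kind typically contains. The exact package names and version pins depend on the CDK release you installed, so treat them as placeholders.

import setuptools

setuptools.setup(
    name="hello",
    version="0.0.1",
    description="A sample CDK Python app",
    packages=["hello"],
    # Placeholder dependencies; the generated file pins the CDK libraries
    # that match the toolkit version you installed.
    install_requires=[
        "aws-cdk.cdk",
        "aws-cdk.aws-s3",
    ],
    python_requires=">=3.6",
)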

You may have also noticed that, as the init command was running, it mentioned that it had created a virtualenv for the project. I won’t go into virtualenvs in detail in this post; they are basically a great tool in the Python world for isolating your development environments from your system Python environment and from other development environments.

All dependencies are installed within this virtual environment and have no effect on anything else on your machine. When you are done with this example, you can just delete the entire directory and everything goes away.

You don’t have to use the virtualenv created here, but I highly recommend that you do. Here’s how you activate your virtualenv and then install all of your dependencies.

$ source .env/bin/activate
(.env) $ pip install -r requirements.txt
...
(.env) $ pytest
============================= test session starts ==============================
platform darwin -- Python 3.7.0, pytest-4.4.0, py-1.8.0, pluggy-0.9.0
rootdir: /Users/garnaat/projects/cdkdev/my_sample
collected 1 item                                                              
tests/unit/test_hello_construct.py .                                     [100%]
=========================== 1 passed in 0.67 seconds ===========================

As you can see, you even have tests included, although they are admittedly simple at this point. They do give you a way to make sure that your sample application and all of its dependencies are installed correctly.
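
The generated test is only a few lines. Its exact contents may differ between CDK versions, but it is roughly along these lines, asserting that HelloConstruct creates the number of buckets it was asked for:

import unittest

from aws_cdk import cdk

from hello.hello_construct import HelloConstruct


class TestHelloConstruct(unittest.TestCase):

    def setUp(self):
        # A construct needs a scope, so create a throwaway app and stack.
        self.app = cdk.App()
        self.stack = cdk.Stack(self.app, "TestStack")

    def test_num_buckets(self):
        num_buckets = 3
        hello = HelloConstruct(self.stack, "Test1", num_buckets)
        self.assertEqual(len(hello.buckets), num_buckets)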

Generate an AWS CloudFormation template

Okay, now that you know what’s here, try to generate an AWS CloudFormation template for the constructs that you are defining in your CDK app. You use the CDK Toolkit (the CLI) to do this.

$ cdk synth 
Multiple stacks selected (hello-cdk-1, hello-cdk-2), but output is directed to stdout. Either select one stack, or use --output to send templates to a directory. 
$

Hmm, that was unexpected. What does this mean? Well, as you will see in a minute, your CDK app actually defines two stacks, hello-cdk-1 and hello-cdk-2. The synth command can only synthesize one stack at a time. It is telling you about the two that it has found and asking you to choose one of them.

$ cdk synth hello-cdk-1
Resources:
  MyFirstQueueFF09316A:
    Type: AWS::SQS::Queue
    Properties:
      VisibilityTimeout: 300
    Metadata:
      aws:cdk:path: hello-cdk-1/MyFirstQueue/Resource
  MyFirstQueueMyFirstTopicSubscription774591B6:
    Type: AWS::SNS::Subscription
    Properties:
      Protocol: sqs
      TopicArn:
        Ref: MyFirstTopic0ED1F8A4
      Endpoint:
        Fn::GetAtt:
          - MyFirstQueueFF09316A
          - Arn
    Metadata:
      aws:cdk:path: hello-cdk-1/MyFirstQueue/MyFirstTopicSubscription/Resource
  MyFirstQueuePolicy596EEC78:
    Type: AWS::SQS::QueuePolicy
    Properties:
      PolicyDocument:
        Statement:
          - Action: sqs:SendMessage
            Condition:
              ArnEquals:
                aws:SourceArn:
                  Ref: MyFirstTopic0ED1F8A4
            Effect: Allow
            Principal:
              Service: sns.amazonaws.com
            Resource:
              Fn::GetAtt:
                - MyFirstQueueFF09316A
                - Arn
        Version: "2012-10-17"
      Queues:
        - Ref: MyFirstQueueFF09316A
    Metadata:
      aws:cdk:path: hello-cdk-1/MyFirstQueue/Policy/Resource
  MyFirstTopic0ED1F8A4:
    Type: AWS::SNS::Topic
    Properties:
      DisplayName: My First Topic
    Metadata:
      aws:cdk:path: hello-cdk-1/MyFirstTopic/Resource
  MyHelloConstructBucket0DAEC57E1:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Metadata:
      aws:cdk:path: hello-cdk-1/MyHelloConstruct/Bucket-0/Resource
  MyHelloConstructBucket18D9883BE:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Metadata:
      aws:cdk:path: hello-cdk-1/MyHelloConstruct/Bucket-1/Resource
  MyHelloConstructBucket2C1DA3656:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Metadata:
      aws:cdk:path: hello-cdk-1/MyHelloConstruct/Bucket-2/Resource
  MyHelloConstructBucket398A5DE67:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Metadata:
      aws:cdk:path: hello-cdk-1/MyHelloConstruct/Bucket-3/Resource
  MyUserDC45028B:
    Type: AWS::IAM::User
    Metadata:
      aws:cdk:path: hello-cdk-1/MyUser/Resource
  MyUserDefaultPolicy7B897426:
    Type: AWS::IAM::Policy
    Properties:
      PolicyDocument:
        Statement:
          - Action:
              - s3:GetObject*
              - s3:GetBucket*
              - s3:List*
            Effect: Allow
            Resource:
              - Fn::GetAtt:
                  - MyHelloConstructBucket0DAEC57E1
                  - Arn
              - Fn::Join:
                  - ""
                  - - Fn::GetAtt:
                        - MyHelloConstructBucket0DAEC57E1
                        - Arn
                    - /*
          - Action:
              - s3:GetObject*
              - s3:GetBucket*
              - s3:List*
            Effect: Allow
            Resource:
              - Fn::GetAtt:
                  - MyHelloConstructBucket18D9883BE
                  - Arn
              - Fn::Join:
                  - ""
                  - - Fn::GetAtt:
                        - MyHelloConstructBucket18D9883BE
                        - Arn
                    - /*
          - Action:
              - s3:GetObject*
              - s3:GetBucket*
              - s3:List*
            Effect: Allow
            Resource:
              - Fn::GetAtt:
                  - MyHelloConstructBucket2C1DA3656
                  - Arn
              - Fn::Join:
                  - ""
                  - - Fn::GetAtt:
                        - MyHelloConstructBucket2C1DA3656
                        - Arn
                    - /*
          - Action:
              - s3:GetObject*
              - s3:GetBucket*
              - s3:List*
            Effect: Allow
            Resource:
              - Fn::GetAtt:
                  - MyHelloConstructBucket398A5DE67
                  - Arn
              - Fn::Join:
                  - ""
                  - - Fn::GetAtt:
                        - MyHelloConstructBucket398A5DE67
                        - Arn
                    - /*
        Version: "2012-10-17"
      PolicyName: MyUserDefaultPolicy7B897426
      Users:
        - Ref: MyUserDC45028B
    Metadata:
      aws:cdk:path: hello-cdk-1/MyUser/DefaultPolicy/Resource
  CDKMetadata:
    Type: AWS::CDK::Metadata
    Properties:
      Modules: aws-cdk=0.27.0,@aws-cdk/assets=0.27.0,@aws-cdk/aws-autoscaling-api=0.27.0,@aws-cdk/aws-cloudwatch=0.27.0,@aws-cdk/aws-codepipeline-api=0.27.0,@aws-cdk/aws-ec2=0.27.0,@aws-cdk/aws-events=0.27.0,@aws-cdk/aws-iam=0.27.0,@aws-cdk/aws-kms=0.27.0,@aws-cdk/aws-lambda=0.27.0,@aws-cdk/aws-logs=0.27.0,@aws-cdk/aws-s3=0.27.0,@aws-cdk/aws-s3-notifications=0.27.0,@aws-cdk/aws-sns=0.27.0,@aws-cdk/aws-sqs=0.27.0,@aws-cdk/aws-stepfunctions=0.27.0,@aws-cdk/cdk=0.27.0,@aws-cdk/cx-api=0.27.0,@aws-cdk/region-info=0.27.0,jsii-runtime=Python/3.7.0

That’s a lot of YAML. 147 lines to be exact. If you take some time to study this, you can probably understand all of the AWS resources that are being created. You could probably even understand why they are being created. Rather than go through that in detail right now, focus instead on the Python code that makes up your CDK app. It’s a lot shorter and a lot easier to understand.

First, look at your “main,” app.py.

#!/usr/bin/env python3

from aws_cdk import cdk
from hello.hello_stack import MyStack

app = cdk.App()

MyStack(app, "hello-cdk-1", env={'region': 'us-east-2'})
MyStack(app, "hello-cdk-2", env={'region': 'us-west-2'})

app.run()

Well, that’s short and sweet. You are creating an App, adding two instances of some class called MyStack to the app, and then calling the run method of the App object.
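
Because the stacks are plain Python objects, extending the app is just more Python. For example, adding a third stack in another Region (the stack name and Region here are arbitrary) would take a single extra line before app.run():

MyStack(app, "hello-cdk-3", env={'region': 'eu-west-1'})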

Now find out what’s going on in the MyStack class.

from aws_cdk import (
    aws_iam as iam,
    aws_sqs as sqs,
    aws_sns as sns,
    cdk
)

from hello.hello_construct import HelloConstruct

class MyStack(cdk.Stack):
    def __init__(self, app: cdk.App, id: str, **kwargs) -> None:
        super().__init__(app, id, **kwargs)

        queue = sqs.Queue(
            self, "MyFirstQueue",
            visibility_timeout_sec=300,
        )

        topic = sns.Topic(
            self, "MyFirstTopic",
            display_name="My First Topic"
        )

        topic.subscribe_queue(queue)

        hello = HelloConstruct(self, "MyHelloConstruct", num_buckets=4)
        user = iam.User(self, "MyUser")
        hello.grant_read(user)

This is a bit more interesting. This code is importing some CDK packages and then using those to create a few AWS resources.

First, you create an SQS queue called MyFirstQueue and set the visibility_timeout value for the queue. Then you create an SNS topic called MyFirstTopic.

The next line of code is interesting. You subscribe the SQS queue to the SNS topic, and it all happens in one simple, easy-to-understand line of code.

If you have ever done this with the SDKs or with the CLI, you know that there are several steps to this process. You have to create an IAM policy that grants the topic permission to send messages to the queue, you have to create a topic subscription, etc. You can see the details in the AWS CloudFormation stack generated earlier.

All of that gets simplified into a single, readable line of code. That’s an example of what CDK constructs can do to hide complexity in your infrastructure.
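
To make the contrast concrete, here is a rough sketch of what that manual wiring looks like with boto3 (this is not from the original post; the resource names are made up, and error handling is omitted):

import json
import boto3

sqs = boto3.client('sqs')
sns = boto3.client('sns')

# Create the queue and topic, then look up the queue ARN.
queue_url = sqs.create_queue(QueueName='MyFirstQueue')['QueueUrl']
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=['QueueArn']
)['Attributes']['QueueArn']
topic_arn = sns.create_topic(Name='MyFirstTopic')['TopicArn']

# Subscribe the queue to the topic.
sns.subscribe(TopicArn=topic_arn, Protocol='sqs', Endpoint=queue_arn)

# Grant the topic permission to send messages to the queue.
policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Principal': {'Service': 'sns.amazonaws.com'},
        'Action': 'sqs:SendMessage',
        'Resource': queue_arn,
        'Condition': {'ArnEquals': {'aws:SourceArn': topic_arn}},
    }],
}
sqs.set_queue_attributes(
    QueueUrl=queue_url, Attributes={'Policy': json.dumps(policy)}
)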

The final thing happening here is that you are creating an instance of a HelloConstruct class. Look at the code behind this.


from aws_cdk import (
     aws_iam as iam,
     aws_s3 as s3,
     cdk,
)

class HelloConstruct(cdk.Construct):

    @property
    def buckets(self):
        return tuple(self._buckets)

    def __init__(self, scope: cdk.Construct, id: str, num_buckets: int) -> None:
        super().__init__(scope, id)
        self._buckets = []
        for i in range(0, num_buckets):
            self._buckets.append(s3.Bucket(self, f"Bucket-{i}"))

    def grant_read(self, principal: iam.IPrincipal):
        for b in self.buckets:
            b.grant_read(principal, "*")

This code shows an example of creating your own custom constructs in CDK that define arbitrary AWS resources under the hood while exposing a simple API.

Here, your construct accepts an integer parameter num_buckets in the constructor and then creates that number of buckets inside the scope passed in. It also exposes a grant_read method that automatically grants the IAM principal passed in read permissions to all buckets associated with your construct.
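
Because grant_read accepts any IAM principal, the same construct can grant access to additional identities. For example (the second user here is hypothetical), the stack could also do the following:

auditor = iam.User(self, "AuditUser")
hello.grant_read(auditor)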

Deploy the AWS CloudFormation templates

The whole point of CDK is to create AWS infrastructure, and so far you haven’t done any of that. Now use your CDK program to generate the AWS CloudFormation templates. Then, deploy those templates to your AWS account and validate that the right resources got created.

$ cdk deploy
This deployment will make potentially sensitive changes according to your current security approval level (--require-approval broadening).
Please confirm you intend to make the following modifications:

IAM Statement Changes
┌───┬───────────────┬────────┬───────────────┬───────────────┬────────────────┐
│   │ Resource      │ Effect │ Action        │ Principal     │ Condition      │
├───┼───────────────┼────────┼───────────────┼───────────────┼────────────────┤
│ + │ ${MyFirstQueu │ Allow  │ sqs:SendMessa │ Service:sns.a │ "ArnEquals": { │
│   │ e.Arn}        │        │ ge            │ mazonaws.com  │   "aws:SourceA │
│   │               │        │               │               │ rn": "${MyFirs │
│   │               │        │               │               │ tTopic}"       │
│   │               │        │               │               │ }              │
├───┼───────────────┼────────┼───────────────┼───────────────┼────────────────┤
│ + │ ${MyHelloCons │ Allow  │ s3:GetBucket* │ AWS:${MyUser} │                │
│   │ truct/Bucket- │        │ s3:GetObject* │               │                │
│   │ 0.Arn}        │        │ s3:List*      │               │                │
│   │ ${MyHelloCons │        │               │               │                │
│   │ truct/Bucket- │        │               │               │                │
│   │ 0.Arn}/*      │        │               │               │                │
├───┼───────────────┼────────┼───────────────┼───────────────┼────────────────┤
│ + │ ${MyHelloCons │ Allow  │ s3:GetBucket* │ AWS:${MyUser} │                │
│   │ truct/Bucket- │        │ s3:GetObject* │               │                │
│   │ 1.Arn}        │        │ s3:List*      │               │                │
│   │ ${MyHelloCons │        │               │               │                │
│   │ truct/Bucket- │        │               │               │                │
│   │ 1.Arn}/*      │        │               │               │                │
├───┼───────────────┼────────┼───────────────┼───────────────┼────────────────┤
│ + │ ${MyHelloCons │ Allow  │ s3:GetBucket* │ AWS:${MyUser} │                │
│   │ truct/Bucket- │        │ s3:GetObject* │               │                │
│   │ 2.Arn}        │        │ s3:List*      │               │                │
│   │ ${MyHelloCons │        │               │               │                │
│   │ truct/Bucket- │        │               │               │                │
│   │ 2.Arn}/*      │        │               │               │                │
├───┼───────────────┼────────┼───────────────┼───────────────┼────────────────┤
│ + │ ${MyHelloCons │ Allow  │ s3:GetBucket* │ AWS:${MyUser} │                │
│   │ truct/Bucket- │        │ s3:GetObject* │               │                │
│   │ 3.Arn}        │        │ s3:List*      │               │                │
│   │ ${MyHelloCons │        │               │               │                │
│   │ truct/Bucket- │        │               │               │                │
│   │ 3.Arn}/*      │        │               │               │                │
└───┴───────────────┴────────┴───────────────┴───────────────┴────────────────┘
(NOTE: There may be security-related changes not in this list. See http://bit.ly/cdk-2EhF7Np)

Do you wish to deploy these changes (y/n)?

Here, the CDK is telling you about the security-related changes that this deployment includes. It shows you the resources or ARN patterns involved, the actions being granted, and the IAM principals to which the grants apply. You can review these and press y when ready. You then see status reported about the resources being created.

hello-cdk-1: deploying...
hello-cdk-1: creating CloudFormation changeset...
0/12 | 8:41:14 AM | CREATE_IN_PROGRESS | AWS::S3::Bucket | MyHelloConstruct/Bucket-0 (MyHelloConstructBucket0DAEC57E1)
0/12 | 8:41:14 AM | CREATE_IN_PROGRESS | AWS::IAM::User | MyUser (MyUserDC45028B)
0/12 | 8:41:14 AM | CREATE_IN_PROGRESS | AWS::IAM::User | MyUser (MyUserDC45028B) Resource creation Initiated
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::CDK::Metadata | CDKMetadata
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::S3::Bucket | MyHelloConstruct/Bucket-3 (MyHelloConstructBucket398A5DE67)
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::S3::Bucket | MyHelloConstruct/Bucket-1 (MyHelloConstructBucket18D9883BE)
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::S3::Bucket | MyHelloConstruct/Bucket-0 (MyHelloConstructBucket0DAEC57E1) Resource creation Initiated
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::SQS::Queue | MyFirstQueue (MyFirstQueueFF09316A)
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::S3::Bucket | MyHelloConstruct/Bucket-2 (MyHelloConstructBucket2C1DA3656)
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::SNS::Topic | MyFirstTopic (MyFirstTopic0ED1F8A4)
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::S3::Bucket | MyHelloConstruct/Bucket-3 (MyHelloConstructBucket398A5DE67) Resource creation Initiated
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::S3::Bucket | MyHelloConstruct/Bucket-1 (MyHelloConstructBucket18D9883BE) Resource creation Initiated
0/12 | 8:41:15 AM | CREATE_IN_PROGRESS | AWS::SQS::Queue | MyFirstQueue (MyFirstQueueFF09316A) Resource creation Initiated
0/12 | 8:41:16 AM | CREATE_IN_PROGRESS | AWS::SNS::Topic | MyFirstTopic (MyFirstTopic0ED1F8A4) Resource creation Initiated
0/12 | 8:41:16 AM | CREATE_IN_PROGRESS | AWS::S3::Bucket | MyHelloConstruct/Bucket-2 (MyHelloConstructBucket2C1DA3656) Resource creation Initiated
1/12 | 8:41:16 AM | CREATE_COMPLETE | AWS::SQS::Queue | MyFirstQueue (MyFirstQueueFF09316A)
1/12 | 8:41:17 AM | CREATE_IN_PROGRESS | AWS::CDK::Metadata | CDKMetadata Resource creation Initiated
2/12 | 8:41:17 AM | CREATE_COMPLETE | AWS::CDK::Metadata | CDKMetadata
3/12 | 8:41:26 AM | CREATE_COMPLETE | AWS::SNS::Topic | MyFirstTopic (MyFirstTopic0ED1F8A4)
3/12 | 8:41:28 AM | CREATE_IN_PROGRESS | AWS::SNS::Subscription | MyFirstQueue/MyFirstTopicSubscription (MyFirstQueueMyFirstTopicSubscription774591B6)
3/12 | 8:41:29 AM | CREATE_IN_PROGRESS | AWS::SQS::QueuePolicy | MyFirstQueue/Policy (MyFirstQueuePolicy596EEC78)
3/12 | 8:41:29 AM | CREATE_IN_PROGRESS | AWS::SNS::Subscription | MyFirstQueue/MyFirstTopicSubscription (MyFirstQueueMyFirstTopicSubscription774591B6) Resource creation Initiated
4/12 | 8:41:30 AM | CREATE_COMPLETE | AWS::SNS::Subscription | MyFirstQueue/MyFirstTopicSubscription (MyFirstQueueMyFirstTopicSubscription774591B6)
4/12 | 8:41:30 AM | CREATE_IN_PROGRESS | AWS::SQS::QueuePolicy | MyFirstQueue/Policy (MyFirstQueuePolicy596EEC78) Resource creation Initiated
5/12 | 8:41:30 AM | CREATE_COMPLETE | AWS::SQS::QueuePolicy | MyFirstQueue/Policy (MyFirstQueuePolicy596EEC78)
6/12 | 8:41:35 AM | CREATE_COMPLETE | AWS::S3::Bucket | MyHelloConstruct/Bucket-0 (MyHelloConstructBucket0DAEC57E1)
7/12 | 8:41:36 AM | CREATE_COMPLETE | AWS::S3::Bucket | MyHelloConstruct/Bucket-3 (MyHelloConstructBucket398A5DE67)
8/12 | 8:41:36 AM | CREATE_COMPLETE | AWS::S3::Bucket | MyHelloConstruct/Bucket-1 (MyHelloConstructBucket18D9883BE)
9/12 | 8:41:36 AM | CREATE_COMPLETE | AWS::S3::Bucket | MyHelloConstruct/Bucket-2 (MyHelloConstructBucket2C1DA3656)
10/12 | 8:41:50 AM | CREATE_COMPLETE | AWS::IAM::User | MyUser (MyUserDC45028B)
10/12 | 8:41:53 AM | CREATE_IN_PROGRESS | AWS::IAM::Policy | MyUser/DefaultPolicy (MyUserDefaultPolicy7B897426)
10/12 | 8:41:53 AM | CREATE_IN_PROGRESS | AWS::IAM::Policy | MyUser/DefaultPolicy (MyUserDefaultPolicy7B897426) Resource creation Initiated
11/12 | 8:42:02 AM | CREATE_COMPLETE | AWS::IAM::Policy | MyUser/DefaultPolicy (MyUserDefaultPolicy7B897426)
12/12 | 8:42:03 AM | CREATE_COMPLETE | AWS::CloudFormation::Stack | hello-cdk-1

✅ hello-cdk-1

Stack ARN:
arn:aws:cloudformation:us-east-2:433781611764:stack/hello-cdk-1/87482f50-6c27-11e9-87d0-026465bb0bfc

At this point, the CLI presents you with another summary of IAM changes and asks you to confirm. This is because your CDK sample application creates two stacks in two different AWS Regions. Approve the changes for the second stack and you see similar status output.

Clean up

Now you can use the AWS Management Console to look at the resources that were created and validate that it all makes sense. After you are finished, you can easily destroy all of these resources with a single command.

$ cdk destroy
Are you sure you want to delete: hello-cdk-2, hello-cdk-1 (y/n)? y

hello-cdk-2: destroying...
   0 | 8:48:31 AM | DELETE_IN_PROGRESS   | AWS::CloudFormation::Stack | hello-cdk-2 User Initiated
   0 | 8:48:33 AM | DELETE_IN_PROGRESS   | AWS::CDK::Metadata     | CDKMetadata 
   0 | 8:48:33 AM | DELETE_IN_PROGRESS   | AWS::IAM::Policy       | MyUser/DefaultPolicy (MyUserDefaultPolicy7B897426) 
   0 | 8:48:33 AM | DELETE_IN_PROGRESS   | AWS::SNS::Subscription | MyFirstQueue/MyFirstTopicSubscription (MyFirstQueueMyFirstTopicSubscription774591B6) 
   0 | 8:48:33 AM | DELETE_IN_PROGRESS   | AWS::SQS::QueuePolicy  | MyFirstQueue/Policy (MyFirstQueuePolicy596EEC78) 
   1 | 8:48:34 AM | DELETE_COMPLETE      | AWS::SQS::QueuePolicy  | MyFirstQueue/Policy (MyFirstQueuePolicy596EEC78) 
   2 | 8:48:34 AM | DELETE_COMPLETE      | AWS::SNS::Subscription | MyFirstQueue/MyFirstTopicSubscription (MyFirstQueueMyFirstTopicSubscription774591B6) 
   3 | 8:48:34 AM | DELETE_COMPLETE      | AWS::IAM::Policy       | MyUser/DefaultPolicy (MyUserDefaultPolicy7B897426) 
   4 | 8:48:35 AM | DELETE_COMPLETE      | AWS::CDK::Metadata     | CDKMetadata 
   4 | 8:48:35 AM | DELETE_IN_PROGRESS   | AWS::IAM::User         | MyUser (MyUserDC45028B) 
   4 | 8:48:36 AM | DELETE_IN_PROGRESS   | AWS::SNS::Topic        | MyFirstTopic (MyFirstTopic0ED1F8A4)
   4 | 8:48:36 AM | DELETE_SKIPPED       | AWS::S3::Bucket        | MyHelloConstruct/Bucket-0 (MyHelloConstructBucket0DAEC57E1) 
   4 | 8:48:36 AM | DELETE_SKIPPED       | AWS::S3::Bucket        | MyHelloConstruct/Bucket-2 (MyHelloConstructBucket2C1DA3656) 
   4 | 8:48:36 AM | DELETE_SKIPPED       | AWS::S3::Bucket        | MyHelloConstruct/Bucket-1 (MyHelloConstructBucket18D9883BE) 
   4 | 8:48:36 AM | DELETE_IN_PROGRESS   | AWS::SQS::Queue        | MyFirstQueue (MyFirstQueueFF09316A) 
   4 | 8:48:36 AM | DELETE_SKIPPED       | AWS::S3::Bucket        | MyHelloConstruct/Bucket-3 (MyHelloConstructBucket398A5DE67) 
   5 | 8:48:36 AM | DELETE_COMPLETE      | AWS::SNS::Topic        | MyFirstTopic (MyFirstTopic0ED1F8A4) 
   6 | 8:48:36 AM | DELETE_COMPLETE      | AWS::IAM::User         | MyUser (MyUserDC45028B) 
 6 Currently in progress: hello-cdk-2, MyFirstQueueFF09316A

 ✅  hello-cdk-2: destroyed
hello-cdk-1: destroying...
   0 | 8:49:38 AM | DELETE_IN_PROGRESS   | AWS::CloudFormation::Stack | hello-cdk-1 User Initiated
   0 | 8:49:40 AM | DELETE_IN_PROGRESS   | AWS::CDK::Metadata     | CDKMetadata 
   0 | 8:49:40 AM | DELETE_IN_PROGRESS   | AWS::IAM::Policy       | MyUser/DefaultPolicy (MyUserDefaultPolicy7B897426) 
   0 | 8:49:40 AM | DELETE_IN_PROGRESS   | AWS::SQS::QueuePolicy  | MyFirstQueue/Policy (MyFirstQueuePolicy596EEC78) 
   0 | 8:49:40 AM | DELETE_IN_PROGRESS   | AWS::SNS::Subscription | MyFirstQueue/MyFirstTopicSubscription (MyFirstQueueMyFirstTopicSubscription774591B6) 
   1 | 8:49:41 AM | DELETE_COMPLETE      | AWS::IAM::Policy       | MyUser/DefaultPolicy (MyUserDefaultPolicy7B897426) 
   2 | 8:49:41 AM | DELETE_COMPLETE      | AWS::SQS::QueuePolicy  | MyFirstQueue/Policy (MyFirstQueuePolicy596EEC78) 
   3 | 8:49:41 AM | DELETE_COMPLETE      | AWS::SNS::Subscription | MyFirstQueue/MyFirstTopicSubscription (MyFirstQueueMyFirstTopicSubscription774591B6) 
   4 | 8:49:42 AM | DELETE_COMPLETE      | AWS::CDK::Metadata     | CDKMetadata 
   4 | 8:49:42 AM | DELETE_IN_PROGRESS   | AWS::IAM::User         | MyUser (MyUserDC45028B) 
   4 | 8:49:42 AM | DELETE_SKIPPED       | AWS::S3::Bucket        | MyHelloConstruct/Bucket-2 (MyHelloConstructBucket2C1DA3656) 
   4 | 8:49:42 AM | DELETE_SKIPPED       | AWS::S3::Bucket        | MyHelloConstruct/Bucket-3 (MyHelloConstructBucket398A5DE67) 
   4 | 8:49:42 AM | DELETE_SKIPPED       | AWS::S3::Bucket        | MyHelloConstruct/Bucket-0 (MyHelloConstructBucket0DAEC57E1) 
   4 | 8:49:42 AM | DELETE_IN_PROGRESS   | AWS::SNS::Topic        | MyFirstTopic (MyFirstTopic0ED1F8A4) 
   4 | 8:49:42 AM | DELETE_SKIPPED       | AWS::S3::Bucket        | MyHelloConstruct/Bucket-1 (MyHelloConstructBucket18D9883BE) 
   5 | 8:49:42 AM | DELETE_COMPLETE      | AWS::IAM::User         | MyUser (MyUserDC45028B) 
   5 | 8:49:42 AM | DELETE_IN_PROGRESS   | AWS::SQS::Queue        | MyFirstQueue (MyFirstQueueFF09316A) 
   6 | 8:49:43 AM | DELETE_COMPLETE      | AWS::SNS::Topic        | MyFirstTopic (MyFirstTopic0ED1F8A4) 
 6 Currently in progress: hello-cdk-1, MyFirstQueueFF09316A
   7 | 8:50:43 AM | DELETE_COMPLETE      | AWS::SQS::Queue        | MyFirstQueue (MyFirstQueueFF09316A)

 ✅  hello-cdk-1: destroyed

 

Conclusion

In this post, I introduced you to the AWS Cloud Development Kit. You saw how it enables you to define your AWS infrastructure in modern programming languages like TypeScript, Java, C#, and now Python. I showed you how to use the CDK CLI to initialize a new sample application in Python, and walked you through the project structure. I taught you how to use the CDK to synthesize your Python code into AWS CloudFormation templates and deploy them through AWS CloudFormation to provision AWS infrastructure. Finally, I showed you how to clean up these resources when you’re done.

Now it’s your turn. Go build something amazing with the AWS CDK for Python! To help get you started, see the following resources:

The CDK and the Python language binding are currently in developer preview, so I’d love to get feedback on what you like, and where AWS can do better. The team lives on GitHub at https://github.com/awslabs/aws-cdk where it’s easy to get directly in touch with the engineers building the CDK. Raise an issue if you discover a bug or want to make a feature request. Join the conversation on the aws-cdk Gitter channel to ask questions.

 

from AWS Developer Blog https://aws.amazon.com/blogs/developer/getting-started-with-the-aws-cloud-development-kit-and-python/

Node.js 6 is approaching End-of-Life – upgrade your AWS Lambda functions to the Node.js 10 LTS

This blog was authored by Liz Parody, Developer Relations Manager at NodeSource.

 

Node.js 6.x (“Boron”), which has been maintained as a long-term support (LTS) release line since the fall of 2016, is reaching its scheduled end-of-life (EOL) on April 30, 2019. After the maintenance period ends, Node.js 6 will no longer receive updates of any kind, including critical bug fixes, security fixes, patches, and other important updates.

Recently, AWS has been reminding users to upgrade AWS Lambda functions built on the Node.js 6 runtime to a newer version. This is because language runtimes that have reached EOL are unsupported in Lambda.

Requests for feature additions to this release line aren’t accepted. Continued use of the Node.js 6 runtime after April 30, 2019 increases your exposure to various risks, including the following:

  • Security vulnerabilities – Node.js contributors are constantly working to fix security flaws of all severity levels (low, moderate, and high). In the February 2019 Security Release, all actively maintained Node.js release lines were patched, including “Boron”. After April 30, security releases will no longer be applied to Node.js 6, increasing the potential for malicious attacks.
  • Software incompatibility – Newer versions of Node.js better support current best practices and newer design patterns. For example, the popular async/await pattern to interact with promises was first introduced in the Node.js 8 (“Carbon”) release line. “Boron” users can’t take advantage of this feature. If you don’t upgrade to a newer release line, you miss out on features and improvements that enable you to write better, more performant applications.
  • Compliance issues – This risk applies most to teams in highly regulated industries such as healthcare, finance, or ecommerce. It also applies to those who deal with sensitive data such as personally identifiable information (PII). Exposing these types of data to unnecessary risk can result in severe consequences, ranging from extended legal battles to hefty fines.
  • Poor performance and reliability – The Node.js 10 (“Dubnium”) runtime is significantly faster than Node.js 6, with the capacity to perform twice as many operations per second. Lambda is an especially popular choice for applications that must deliver low latency and high performance. Upgrading to a newer version of the Node.js runtime is a relatively painless way to improve the performance of your application.
  • Higher operating costs – The performance benefits of the Node.js 10 runtime compared to Node.js 6 can directly translate to reduced operational costs. Aside from missing the day-to-day savings, running an unmaintained version of the Node.js runtime also significantly increases the likelihood of unexpected costs associated with an outage or critical issue.

Key differences between Node.js 6 and Node.js 10

Metrics provided by the Node.js Benchmarking working group highlight the performance benefits of upgrading from Node.js 6 to the most recent LTS release line, Node.js 10:

  • Operations per second are nearly two times higher in Node.js 10 versus Node.js 6.
  • Latency has decreased by 65% in Node.js 10 versus Node.js 6.
  • The footprint after load is 35% lower in Node.js 10 versus Node.js 6, resulting in improved performance in the event of a cold start.

While benchmarks don’t always reflect real-world results, the trend is clear that performance is increasing in each new Node.js release. [Data Source]

The most recent LTS release line is Node.js 10 (“Dubnium”). This release line features several enhancements and improvements over earlier versions, including the following:

  • Node.js 10 is the first release line to upgrade to OpenSSL version 1.1.0.
  • Native support for HTTP/2, first added to the Node.js 8 LTS release line, was stabilized in Node.js 10. It offers massive performance improvements over HTTP/1 (including reduced latency and minimized protocol overhead), and adds support for request prioritization and server push.
  • Node.js 10 introduces new JavaScript language capabilities, such as Function.prototype.toString() and mitigations for side-channel vulnerabilities, to help prevent information leaks.

“While there are a handful of new features, the standout changes in Node.js 10.0.0 are improvements to error handling and diagnostics that will improve the overall developer experience.” James Snell, a member of the Node.js Technical Steering Committee (TSC) [Quote source]

Upgrade using the N|Solid Lambda layer

AWS doesn’t currently offer the Node.js 10 runtime in Lambda. However, you may want to test the Node.js 10 runtime version in a development or staging environment before rolling out updates to production Lambda functions.

Before AWS adds the Node.js 10 runtime version for Lambda, NodeSource’s N|Solid runtime is available for use as a Lambda layer. It includes a 100%-compatible version for the Node.js 10 LTS release line.

If you install N|Solid as a Lambda layer, you can begin migration and testing before the Node.js 6 EOL date. You can also easily switch to the Node.js 10 runtime provided by AWS when it’s available. Choose between versions based on the Node.js 8 (“Carbon”) and 10 (“Dubnium”) LTS release lines. It takes just a few minutes to get up and running.

First, when you’re creating a function, choose Use custom runtime in function code or layer. (If you’re migrating an existing function, you can change the runtime for the function.)

 

Next, add a new Lambda layer, and choose Provide a layer version ARN. You can find the latest ARN for the N|Solid Lambda layer here. Enter the N|Solid runtime ARN for your AWS Region and Node.js version (Node.js 8 “Carbon” or Node.js 10 “Dubnium”). This is where you can use Node.js 10.

 

That’s it! Your Lambda function is now set up to use Node.js 10.

You can also update your functions to use the N|Solid Lambda layer with the AWS CLI.

To update an existing function:

aws lambda update-function-configuration --function-name <YOUR_FUNCTION_NAME> --layers arn:aws:lambda:<AWS_REGION>:800406105498:layer:nsolid-node-10:6 --runtime provided

In addition to the Node.js 10 runtime, the Lambda layer provided by NodeSource includes N|Solid. N|Solid for AWS Lambda provides low-impact performance monitoring for Lambda functions. To take advantage of this feature, you can also sign up for a free NodeSource account. After you sign up, you just need to set your N|Solid license key as an environment variable in your Lambda function.

That’s all you have to do to start monitoring your Node.js Lambda functions. After you add your license key, your Lambda function invocations should show up on the Functions tab of your N|Solid dashboard.

For more information, see our N|Solid for AWS Lambda getting started guide.

Upgrade to Node.js 10 LTS (“Dubnium”) outside of Lambda

Not only are workloads in Lambda affected; you should also consider other environments where you’re running Node.js 6. Below, I review a few more ways to upgrade your version of Node.js in other compute environments.

Use NVM

One of the best practices for upgrading Node.js versions is using NVM. NVM, or Node Version Manager, lets you manage multiple active Node.js versions.

To install NVM on *nix systems, you can use the install script using cURL.

$ curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.34.0/install.sh | bash

or Wget:

$ wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.34.0/install.sh | bash

For Windows-based systems, you can use NVM for Windows.

After NVM is installed, you can manage your versions of Node.js with a few simple nvm commands.

To download, compile, and install the latest release of Node.js:

$ nvm install node # "node" is an alias for the latest version

To install a specific version of Node.js:

$ nvm install 10.10.0 # or 8.5.0, 8.9.1, etc.

Upgrade manually

To upgrade Node.js without a tool like NVM, you can manually install a new version. NodeSource provides Linux distributions for Node.js, and recommends that you upgrade using the NodeSource Node.js Binary Distributions.

To install Node.js 10:

Using Ubuntu

$ curl -sL https://deb.nodesource.com/setup_10.x | sudo -E bash - 
$ sudo apt-get install -y nodejs

Using Amazon Linux

$ curl -sL https://rpm.nodesource.com/setup_10.x | sudo bash -
$ sudo yum install -y nodejs

Most production applications built on Node.js make use of LTS release lines. We highly recommend that you upgrade any application or Lambda function currently using the Node.js 6 runtime version to Node.js 10, the newest LTS version.

To hear more about the latest release line, check out NodeSource’s webinar, New and Exciting Features Landing in Node.js 12. This release line officially becomes the current LTS version in October 2019.

About the Author

Liz is a self-taught software engineer focused on JavaScript and a Developer Relations Manager at NodeSource. She organizes community events such as JSConf Colombia, Pioneras Developers, and Startup Weekend, and has been a speaker at EmpireJS, MedellinJS, PionerasDev, and GDG.

She loves sharing knowledge, promoting JavaScript and the Node.js ecosystem, and participating in key tech events and conferences to enhance her knowledge and network.

Disclaimer
The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post.

 

from AWS Developer Blog https://aws.amazon.com/blogs/developer/node-js-6-is-approaching-end-of-life-upgrade-your-aws-lambda-functions-to-the-node-js-10-lts/

V2 AWS SDK for Go adds Context to API operations

The v2 AWS SDK for Go developer preview made a breaking change in the release of v0.8.0. The v0.8.0 release added a new parameter, context.Context, to the SDK’s Send and Paginate Next methods.

Context was added as a required parameter to the Send and Paginate Next methods to enable you to use the v2 SDK for Go in your application with cancellation and request tracing.

Using the Context pattern helps reduce the chance of code paths mistakenly dropping the Context, causing the cancellation and tracing chain to be lost. When the Context is lost, it can be difficult to track down the missing cancellation and tracing metrics within an application.

Migrating to v0.8.0

After you update your application to depend on v0.8.0 of the v2 SDK, you’ll encounter compile errors. This is because of the Context parameter that was added to the Send and Paginate Next methods.

If your application is already using the Context pattern, you can now pass the Context into Send and Paginate Next methods directly, instead of calling SetContext on the request returned by the client’s operation request method.

If you don’t need a Context within your application, you can use context.Background() or context.TODO() instead of specifying a Context that carries a timeout, deadline, cancellation, or httptrace.ClientTrace.

Example code: before v0.8.0

The following code is an example of an application using the Amazon S3 service’s PutObject API operation with the v2 SDK before v0.8.0. The example code uses the req.SetContext method to specify the Context for the PutObject operation.

func uploadObject(ctx context.Context, bucket, key string, obj io.ReadSeeker) error {
	req := svc.PutObjectRequest(&s3.PutObjectInput{
		Bucket: &bucket,
		Key:    &key,
		Body:   obj,
	})
	req.SetContext(ctx)

	_, err := req.Send()
	return err
}

Example code: updated to v0.8.0

To migrate the previous example code to use v0.8.0 of the v2 SDK, remove the req.SetContext method call and pass the Context directly to the Send method instead. This change makes the example code compatible with v0.8.0 of the v2 SDK.

func uploadObject(ctx context.Context, bucket, key string, obj io.ReadSeeker) error {
	req := svc.PutObjectRequest(&s3.PutObjectInput{
		Bucket: &bucket,
		Key:    &key,
		Body:   obj,
	})

	_, err := req.Send(ctx)
	return err
}

What’s next for the v2 SDK for Go developer preview?

We’re working to improve usability and reduce pain points with the v2 SDK. Two specific areas we’re looking at are the SDK’s request lifecycle and error handling.

Improving the SDK’s request lifecycle will help reduce your application’s CPU and memory overhead when using the SDK. It also makes it easier for you to extend and modify the SDK’s core functionality.

For the SDK’s error handling, we’re investigating alternative approaches, such as typed errors for API operation exceptions. By using typed errors, your application can assert directly against the error type. This would reduce the need to do string comparisons for SDK API operation response errors.

See our issues on GitHub to share your feedback, questions, and feature requests, and to stay current with the v2 AWS SDK for Go developer preview as it moves toward GA.

from AWS Developer Blog https://aws.amazon.com/blogs/developer/v2-aws-sdk-for-go-adds-context-to-api-operations/

New — Analyze and debug distributed applications interactively using AWS X-Ray Analytics

Developers spend a lot of time searching through application logs, service logs, metrics, and traces to understand performance bottlenecks and to pinpoint their root causes. Correlating this information to identify its impact on end users comes with its own challenges of mining the data and performing analysis. This adds to the triaging time when using a distributed microservices architecture, where the call passes through several microservices. To address these challenges, AWS launched AWS X-Ray Analytics.

X-Ray helps you analyze and debug distributed applications, such as those built using a microservices architecture. Using X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root causes of performance issues and errors. It helps you debug and triage distributed applications wherever those applications are running, whether the architecture is serverless, containers, Amazon EC2, on-premises, or a mixture of all of these.

AWS X-Ray Analytics helps you quickly and easily understand:

  • Any latency degradation or increase in error or fault rates.
  • The latency experienced by customers in the 50th, 90th, and 95th percentiles.
  • The root cause of the issue at hand.
  • End users who are impacted, and by how much.
  • Comparisons of trends, based on different criteria. For example, you can understand if new deployments caused a regression.

In this post, I walk you through several use cases to see how you can use X-Ray Analytics to address these issues.

AWS X-Ray Analytics Walkthrough

The following is a service map of an online store application hosted on Amazon EC2 and serverless technologies like Amazon API Gateway, AWS Lambda, and Amazon DynamoDB. Using this service map, you can easily see that there are faults in the “products” microservice in the selected time range.

Use X-Ray Analytics to explore the root cause and end-user impact. Looking at the response time distribution in the X-Ray Analytics console, you can determine that customers at the 50th percentile have latency of around 1.6 seconds, and customers at the 95th percentile have latency of more than 2.5 seconds.

This chart also helps you see the overall latency distribution of the requests in the selected group for the selected time range. You can learn more about X-Ray groups and their use cases in the Deep dive into AWS X-Ray groups and use cases post.

Now, you want to triage the increase in latency in requests that are taking more than 1.5 seconds and get to its root cause. Select those traces from the graph, as shown below. You see that all the numbers in the chart, like Time series activity and tables, are automatically updated based on the filter criteria. Also, a new Filtered traces trend line, indicated in blue, is added.

This Filtered trace set A trend line keeps updating as you add new criteria. For example, looking at the following tables, you can easily see that around 85% of these high-latency requests result in 500 errors, and Emma is the most impacted customer.

To focus on the traces that result in 500 errors, select that row from the table and see the filtered traces and other data points getting updated. In the Root Cause section, see the root cause of issues resulting in this increased latency. You can see that the DynamoDB wait in the “products” service has resulted in around 57% of the errors. You can also view individual traces that match the selected filters, as shown.

Selecting the Fault Root Cause using the cog icon lets you view the fault exception. The exception indicates that the provisioned throughput capacity configured for the DynamoDB table has been exceeded, giving a clear indication of the root cause of the issue.

You just saw how you can use X-Ray Analytics to detect an increase in latency and understand the root cause of the issue and end-user impact.

Comparison of trends

Now, see how you can compare two trends using the compare functionality in X-Ray Analytics. You can use this functionality to compare any two filter expressions. For example, you can compare performance experience between two users, or compare and analyze whether a new deployment caused any regression.

Say that you have deployed a new Lambda function at 3:40 AM. You want to compare five minutes before and five minutes after the deployment was completed to understand whether any regression was caused, and what the impact is to end users.

Use the compare functionality provided in X-Ray Analytics. In this case, two different time ranges are represented. Filtered trace set A, starting from 3:35 AM to 3:40 AM, is shown in blue, and Filtered trace set B, starting from 3:40 AM to 3:45 AM, is shown in green.

In compare mode, the automatically calculated percentage deviation column clearly indicates that 500 errors decreased by 32 percentage points after the new deployment was completed. This gives the DevOps team a clear indication that the new deployment didn't cause any regression and succeeded in reducing errors.

Identifying outlying users

Take an example in which one of the end users, "Ava," is complaining about degraded performance in the application. None of the other users have reported this issue.

Use the compare feature in X-Ray Analytics to compare the response time of all users (blue trend line) with that of Ava (green trend line). Looking at the following response time distribution graph, it’s not easy to notice the difference in end-user experience based on the data.

However, as you look into other attributes, such as the annotations that you added during code instrumentation (annotation.ACL_CACHED) and the response time root cause, you can get actionable insights. You see that the latency is in the "api" service and is related to the time spent in the "generate_acl" module. You can correlate that to the ACL not being cached, based on the approximately 55% delta that you see in Ava's requests compared to other users.

You can also validate this by looking at the traces from the trace list and see that there is a 300-millisecond delay added by the “generate_acl” module. This shows how X-Ray Analytics helps correlate different attributes to understand the root cause of the issue.

Getting Started

To get started using X-Ray Analytics, visit the AWS Management Console for X-Ray. There is no additional charge for using this feature.

from AWS Developer Blog https://aws.amazon.com/blogs/developer/new-analyze-and-debug-distributed-applications-interactively-using-aws-x-ray-analytics/

Query Systems Manager Parameter Store for AWS Regions, endpoints and more using PowerShell

Query Systems Manager Parameter Store for AWS Regions, endpoints and more using PowerShell

In Jeff Barr’s recent blog post, he announced support for querying AWS Region and service availability programmatically by using AWS Systems Manager Parameter Store. The examples in the blog post all used the AWS CLI, but the post noted that you can also use the AWS Tools for PowerShell.

In this post I’ll show you how to use the Systems Manager cmdlets in the AWS Tools for PowerShell to query the same data.

Prerequisites

To use the cmdlets shown in this blog post, you need to install the AWS Tools for Windows PowerShell module or the AWS Tools for PowerShell Core module (PowerShell Core is also known as PowerShell 6). You can use the PowerShell Core module if you’re using Windows, Linux, or macOS.

If you’re using Amazon EC2 Windows instances, the tools are preinstalled for you. Also, thanks to a change to adopt PowerShell Standard, you can now use the AWS Tools for PowerShell Core module if you’re running Windows PowerShell versions 3 through 5.x.

After it’s installed, import the relevant module (AWSPowerShell if using Windows PowerShell, or AWSPowerShell.NetCore if using PowerShell 6) and configure credentials. The user guide for the tools describes how to set up credential profiles to use with the tools.
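
For example, on PowerShell 6 a minimal setup might look like the following (the profile name and Region are placeholders; use a credential profile you've configured as described in the user guide).

PS C:\> Install-Module -Name AWSPowerShell.NetCore -Scope CurrentUser
PS C:\> Import-Module AWSPowerShell.NetCore
PS C:\> Set-AWSCredential -ProfileName my-profile
PS C:\> Set-DefaultAWSRegion -Region us-west-2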

AWS Systems Manager Cmdlets

The cmdlets for Systems Manager have the prefix “SSM” applied to the cmdlet names. You can obtain a full list of all cmdlets for the service by using the Get-AWSCmdletName cmdlet.

PS C:\> Get-AWSCmdletName -Service SSM
 
CmdletName                  ServiceOperation            ServiceName
----------                  ----------------            -----------
Add-SSMResourceTag          AddTagsToResource           AWS Systems Manager
Edit-SSMDocumentPermission  ModifyDocumentPermission    AWS Systems Manager
Get-SSMActivation           DescribeActivations         AWS Systems Manager
Get-SSMAssociation          DescribeAssociation         AWS Systems Manager
....
Write-SSMComplianceItem     PutComplianceItems          AWS Systems Manager
Write-SSMInventory          PutInventory                AWS Systems Manager
Write-SSMParameter          PutParameter                AWS Systems Manager

We’ll work with two cmdlets in this blog post: Get-SSMParametersByPath, which returns all parameters sharing a common key path, and Get-SSMParameter, which returns a specific parameter.

Querying to find active AWS Regions

To query all active Regions, we use the parameter key path, /aws/service/global-infrastructure/regions, with the Get-SSMParametersByPath cmdlet.

PS C:\> Get-SSMParametersByPath -Path '/aws/service/global-infrastructure/regions'
ARN : arn:aws:ssm:us-west-2::parameter/aws/service/global-infrastructure/regions/ap-northeast-1
LastModifiedDate : 4/18/2019 2:05:37 AM
Name : /aws/service/global-infrastructure/regions/ap-northeast-1
Selector :
SourceResult :
Type : String
Value : ap-northeast-1
Version : 1
  
ARN : arn:aws:ssm:us-west-2::parameter/aws/service/global-infrastructure/regions/ap-northeast-2
LastModifiedDate : 4/18/2019 2:05:42 AM
Name : /aws/service/global-infrastructure/regions/ap-northeast-2
Selector :
SourceResult :
Type : String
Value : ap-northeast-2
Version : 1
...

We get back a series of parameter objects, one per Region. We could send these objects to the pipeline to process, or filter them immediately to just the list of Regions, by using an expression like the following.

PS C:\> (Get-SSMParametersByPath -Path '/aws/service/global-infrastructure/regions').Value
ap-northeast-1
ap-northeast-2
ca-central-1
eu-north-1
eu-west-1
eu-west-2
sa-east-1
us-east-1
us-east-2
us-west-1
ap-northeast-3
ap-south-1
ap-southeast-1
ap-southeast-2
cn-north-1
cn-northwest-1
eu-central-1
eu-west-3
us-gov-east-1
us-west-2
us-gov-west-1

Querying to find all services

To query services, we use a different key path: /aws/service/global-infrastructure/services. The following query retrieves the complete list of available AWS services (155 at the time of this writing), sorts it alphabetically, and displays the first 10.

PS C:\> (Get-SSMParametersByPath -Path '/aws/service/global-infrastructure/services').Value | 
           sort |
           select -first 10
acm
acm-pca
alexaforbusiness
apigateway
application-autoscaling
appmesh
appstream
appsync
athena
autoscaling

Querying services that are available in a Region

To see which services are available in a specific Region, include the Region name in the key path.

PS C:\> (Get-SSMParametersByPath -Path '/aws/service/global-infrastructure/regions/us-east-1/services').Value |
         sort |
         select -first 10
acm
acm-pca
alexaforbusiness
apigateway
application-autoscaling
appmesh
appstream
appsync
athena
autoscaling

Querying Regions for a service

Inverting the query, what if we want to know in which Regions a given service is available? For example, the following shows where Amazon Athena is currently available.

PS C:\> (Get-SSMParametersByPath -Path '/aws/service/global-infrastructure/services/athena/regions').Value
ap-northeast-1
ap-northeast-2
ap-south-1
ap-southeast-1
ap-southeast-2
ca-central-1
eu-central-1
eu-west-1
us-east-2
us-gov-west-1
eu-west-2
us-east-1
us-west-2
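
Because the result is just a list of Region names, you can apply standard PowerShell operators to it. For example, to check whether Athena is available in a particular Region:

PS C:\> (Get-SSMParametersByPath -Path '/aws/service/global-infrastructure/services/athena/regions').Value -contains 'eu-west-2'
True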

Querying for a service name

To get the official name of a service, you can run this query:

PS C:\> Get-SSMParametersByPath -Path '/aws/service/global-infrastructure/services/athena'
ARN              : arn:aws:ssm:us-west-2::parameter/aws/service/global-infrastructure/services/athena/longName
LastModifiedDate : 4/18/2019 2:05:52 AM
Name             : /aws/service/global-infrastructure/services/athena/longName
Selector         :
SourceResult     :
Type             : String
Value            : Amazon Athena
Version          : 1

The example shows that the value for the parameter contains the official service name.
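
If you only need the name itself, you can query the longName parameter directly with Get-SSMParameter and take its value:

PS C:\> (Get-SSMParameter -Name '/aws/service/global-infrastructure/services/athena/longName').Value
Amazon Athena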

Querying for a service’s regional endpoint

When using the cmdlets, most of the time you don't need to worry about a service's regional endpoint, because the tools determine it for you before calling an operation. If you want to know the endpoint, however, you can query for it.

PS C:\> (Get-SSMParameter -Name '/aws/service/global-infrastructure/regions/us-west-1/services/s3/endpoint').Value
s3.us-west-1.amazonaws.com

Easy!

As noted at the end of Jeff’s post, this data is available now and you can start using it today at no charge.

from AWS Developer Blog https://aws.amazon.com/blogs/developer/query-systems-manager-parameter-store-for-aws-regions-endpoints-and-more-using-powershell/

Deep dive into AWS X-Ray groups and use cases

Deep dive into AWS X-Ray groups and use cases

AWS X-Ray helps developers analyze and debug distributed applications, such as those built using a microservices architecture. With X-Ray, you can understand how your application and its underlying services are performing to identify and troubleshoot the root cause of performance issues and errors. X-Ray not only enables developers and DevOps engineers to get to the root cause of the issue more quickly, but also helps in understanding who is impacted and by how much.

Many customers are moving toward a modern microservices architecture, where the services being developed are tested against production APIs. Creating data boundaries at the outset makes it hard to slice and dice the data as the organization or service structure changes. To address this, X-Ray provides a global view of traces for the AWS account or AWS Region.

Customers also have multiple applications and workflows running within their account. It’s important to view them individually to understand any performance bottlenecks and issues that might be affecting end users. To address this, we introduced X-Ray groups, which enable customers to slice and dice their X-Ray service graph and focus on certain workflows, applications, or routes.

Customers can create a group by setting a filter expression. All the traces that match the set filter expression will be part of that group. Customers can then view service graphs for the selected group, and understand performance bottlenecks, errors, or faults in services belonging to that service graph.

Let's use the following example to look closely at some of the use cases where X-Ray groups are helpful. As you can see in the X-Ray service graph, I have two different workflows. The first is a web application running on my Amazon EC2 instance, calling an authentication service that validates the information in the database. The second is a serverless API running on Amazon API Gateway and AWS Lambda, talking to Amazon DynamoDB.

Focus on certain applications or workflows
Let's say that of these two workflows, one is a serverless order placement API running on API Gateway and Lambda, and the other is an order processing application running on Amazon Elastic Container Service (Amazon ECS). I want a view of my serverless API to understand how it's performing.

I would create a group with the filter expression edge(id(type: "client"), "api") && service(id(type: "AWS::ApiGateway::Stage")). This group shows traces that start at the "api" node and include calls to API Gateway. The type: "client" portion in the edge represents an end user, and the second parameter, "api", indicates the node the end user is interacting with directly.
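
As a sketch, you could also create the same group from PowerShell. New-XRGroup is the AWS Tools for PowerShell cmdlet for the X-Ray CreateGroup operation; the group name below is a placeholder.

PS C:\> New-XRGroup -GroupName ServerlessOrderApi -FilterExpression 'edge(id(type: "client"), "api") && service(id(type: "AWS::ApiGateway::Stage"))'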

The newly generated service graph for this group looks like the following.


Send notifications on increased error and fault rate
X-Ray automatically creates an Amazon CloudWatch metric for each group that indicates the number of traces belonging to that group. Customers can use CloudWatch alarms on this metric to be alerted when faults or errors cross a certain threshold. An example of the group and a view of the corresponding CloudWatch metrics are shown below.

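If you want to create such an alarm from PowerShell, the following is a minimal sketch using Write-CWMetricAlarm (the CloudWatch PutMetricAlarm cmdlet). The namespace, metric name, dimension, group name, threshold, and SNS topic ARN are assumptions and placeholders; check the metric that X-Ray created for your group in the CloudWatch console and substitute the actual values.

# Namespace, metric, and dimension names below are assumptions; verify them against
# the metric X-Ray created for your group. The SNS topic ARN is a placeholder.
$dimension = New-Object Amazon.CloudWatch.Model.Dimension
$dimension.Name = 'GroupName'
$dimension.Value = 'ServerlessOrderApi'
Write-CWMetricAlarm -AlarmName 'XRayGroupTraceCount' `
    -Namespace 'AWS/X-Ray' -MetricName 'ApproximateTraceCount' -Dimension $dimension `
    -Statistic Sum -Period 300 -EvaluationPeriod 1 -Threshold 100 `
    -ComparisonOperator GreaterThanThreshold `
    -AlarmAction 'arn:aws:sns:us-west-2:123456789012:AlertTopic'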

Increase visibility on specific service latency
With X-Ray groups, customers can focus on certain services that are taking longer than normal to run, and get notified when a threshold is breached. Or they can query multiple metrics and use math expressions to create new time series based on these metrics. For example, customers can get a view of a service graph where Lambda functions are taking more than one second, as shown below.

To get started, open the X-Ray console and create groups on the Service Graphs page. You can learn more about the X-Ray service here and use the developer guide to get started.

Let us know what you think about the service and X-Ray groups in the comments below.

from AWS Developer Blog https://aws.amazon.com/blogs/developer/deep-dive-into-aws-x-ray-groups-and-use-cases/

PowerShell Standard support in AWSPowerShell.NetCore

PowerShell Standard support in AWSPowerShell.NetCore

In 2016, we released AWS Tools for PowerShell Core targeting PowerShell Core 6.0, which provided cross-platform support for macOS and Linux, in addition to Windows. We published this module separately from AWS Tools for Windows PowerShell because it was not compatible with earlier versions of PowerShell.

Last year, Microsoft released PowerShell Standard: a new library that allows the creation of modules with wide compatibility, spanning from PowerShell 3.0 to the latest version of PowerShell Core (including cross-platform support for macOS and Linux).

With our latest release, 3.3.485.0, AWSPowerShell.NetCore targets PowerShell Standard. When the host environment has .NET Framework 4.7.2 installed, the AWSPowerShell.NetCore module can be used on older versions of PowerShell. Although we suggest that users of legacy PowerShell versions (from 2.0 to 5.1) continue to use the AWSPowerShell module in production environments, we invite you to try AWSPowerShell.NetCore and provide us with feedback at our new AWS PowerShell GitHub repository by creating an issue.

PowerShell Module        Compatible PowerShell Versions     Experimental Compatibility
-----------------        ------------------------------     --------------------------
AWSPowerShell            2.0 – 5.1
AWSPowerShell.NetCore    6.0 – 6.1                          3.0 – 5.1 (when .NET Framework 4.7.2 is installed)
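
For example, after installing the AWSPowerShell.NetCore module from the PowerShell Gallery on Windows PowerShell 5.1 (with .NET Framework 4.7.2 installed), you can verify that it loads and check the version of the tools:

PS C:\> Import-Module AWSPowerShell.NetCore
PS C:\> Get-AWSPowerShellVersion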

Based on the feedback from the AWS PowerShell community, we'll consider making the AWSPowerShell module targeting PowerShell Standard our default offering for any PowerShell version starting with 3.0 (supporting all the way back to Windows Server 2008 R2 SP1). This includes PowerShell Core on Windows, macOS, and Linux.

We're also interested in hearing from users whether the .NET Framework 4.7.2 requirement is too restrictive, and whether a significant portion of users on Windows Server 2008 or later would have a problem upgrading to the newer .NET Framework. You can provide your feedback here.

from AWS Developer Blog https://aws.amazon.com/blogs/developer/powershell-standard-support-in-awspowershell-netcore/

Generate an Amazon S3 presigned URL with SSE using the AWS SDK for C++

Generate an Amazon S3 presigned URL with SSE using the AWS SDK for C++

Amazon Simple Storage Service (Amazon S3) presigned URLs give you or your customers an option to access an Amazon S3 object identified in the URL, without having AWS credentials and permissions.

With server-side encryption (SSE) specified, Amazon S3 will encrypt the data when the object is written to disks, and decrypt the data when the object is accessed from disks. Amazon S3 presigned URLs and SSE are two different functionalities, and have already been supported separately by the AWS SDK for C++. Now customers using the AWS SDK for C++ can use them together.

In this blog series, we've already explained different types of SSE and how we can generate and use Amazon S3 presigned URLs with SSE by using the AWS SDK for Java. In this blog post, we'll give examples for AWS SDK for C++ customers. However, unlike the AWS SDK for Java, the AWS SDK for C++ no longer supports AWS Signature Version 2 (SigV2); the underlying signer is AWS Signature Version 4 (SigV4).

Using the functions defined in S3Client.h to generate an S3 presigned URL with SSE is straightforward.
Let's look at PutObject as an example.

Generate a presigned URL with server-side encryption (SSE) and S3 managed keys (SSE-S3)

Aws::S3::S3Client s3Client;
Aws::String presignedPutUrl = s3Client.GeneratePresignedUrlWithSSES3("BUCKET_NAME", "KEY_NAME", HttpMethod::HTTP_PUT);

Generate a presigned URL with server-side encryption (SSE) and KMS managed keys (SSE-KMS)

Aws::S3::S3Client s3Client;
// If KMS_MASTER_KEY_ID is empty, the KMS managed default key for S3 ("aws/s3") is used.
Aws::String presignedPutUrl = s3Client.GeneratePresignedUrlWithSSEKMS("BUCKET_NAME", "KEY_NAME", HttpMethod::HTTP_PUT, "KMS_MASTER_KEY_ID");

Generate a presigned URL with server-side encryption (SSE) and customer-supplied key (SSE-C)

Aws::S3::S3Client s3Client;
Aws::String presignedPutUrl = s3Client.GeneratePresignedUrlWithSSEC("BUCKET_NAME", "KEY_NAME", HttpMethod::HTTP_PUT, "BASE64_ENCODED_AES256_KEY");

Actually using the generated S3 presigned URLs with SSE programmatically in your project requires a little more work. First, you have to create an HttpRequest. Then, depending on the SSE type, you need to add some HTTP headers. Finally, unlike common Amazon S3 operations, you have to set the content body (the actual object you want to upload) on the request and send it out explicitly.

Code snippet to create HttpRequest

std::shared_ptr<Aws::Http::HttpRequest> putRequest = CreateHttpRequest(presignedPutUrl, HttpMethod::HTTP_PUT, Aws::Utils::Stream::DefaultResponseStreamFactoryMethod);

Add required headers if using SSE-S3

putRequest->SetHeaderValue(Aws::S3::SSEHeaders::SERVER_SIDE_ENCRYPTION, Aws::S3::Model::ServerSideEncryptionMapper::GetNameForServerSideEncryption(Aws::S3::Model::ServerSideEncryption::AES256));

Add required headers if using SSE-KMS

putRequest->SetHeaderValue(Aws::S3::SSEHeaders::SERVER_SIDE_ENCRYPTION, Aws::S3::Model::ServerSideEncryptionMapper::GetNameForServerSideEncryption(Aws::S3::Model::ServerSideEncryption::aws_kms));

Add required headers if using SSE-C

// Suppose customer supplied key is stored in a ByteBuffer called sseKey
putRequest->SetHeaderValue(Aws::S3::SSEHeaders::SERVER_SIDE_ENCRYPTION_CUSTOMER_ALGORITHM, Aws::S3::Model::ServerSideEncryptionMapper::GetNameForServerSideEncryption(Aws::S3::Model::ServerSideEncryption::AES256));
putRequest->SetHeaderValue(Aws::S3::SSEHeaders::SERVER_SIDE_ENCRYPTION_CUSTOMER_KEY, HashingUtils::Base64Encode(sseKey));
Aws::String strBuffer(reinterpret_cast<char*>(sseKey.GetUnderlyingData()), sseKey.GetLength());
putRequest->SetHeaderValue(Aws::S3::SSEHeaders::SERVER_SIDE_ENCRYPTION_CUSTOMER_KEY_MD5, HashingUtils::Base64Encode(HashingUtils::CalculateMD5(strBuffer)));

Add content body (object’s data) to the request and send it out

std::shared_ptr<Aws::IOStream> objectStream = Aws::MakeShared<Aws::StringStream>("Test"); // "Test" is the memory allocation tag
*objectStream << "Test Object";
objectStream->flush();
putRequest->AddContentBody(objectStream);
Aws::StringStream intConverter;
intConverter << objectStream->tellp();
putRequest->SetContentLength(intConverter.str());
putRequest->SetContentType("text/plain");
std::shared_ptr<Aws::Http::HttpClient> httpClient = Aws::Http::CreateHttpClient(Aws::Client::ClientConfiguration());
std::shared_ptr<Aws::Http::HttpResponse> putResponse = httpClient->MakeRequest(putRequest);


As we said at the beginning, you can also use a presigned URL to access objects (GetObject). For SSE-S3 and SSE-KMS, SSE-related headers are not required. That means that, to access objects, there's no difference between using a plain presigned URL and a presigned URL with SSE. However, for SSE-C, the related headers are still required. You can use the same GeneratePresignedUrlWithSSEC function to generate a presigned URL with SSE-C to access objects.

Get an object uploaded with an SSE-S3 or SSE-KMS presigned URL

// Using GeneratePresignedUrlWithSSES3 or GeneratePresignedUrlWithSSEKMS here would work the same way
Aws::String presignedUrlGet = s3Client.GeneratePresignedUrl("BUCKET_NAME", "KEY_NAME", HttpMethod::HTTP_GET);
std::shared_ptr<Aws::Http::HttpRequest> getRequest = CreateHttpRequest(presignedUrlGet, HttpMethod::HTTP_GET, Aws::Utils::Stream::DefaultResponseStreamFactoryMethod);
std::shared_ptr<Aws::Http::HttpResponse> getResponse = httpClient->MakeRequest(getRequest);

Get an object uploaded with an SSE-C presigned URL

Aws::String presignedUrlGet = s3Client.GeneratePresignedUrlWithSSEC("BUCKET_NAME", "KEY_NAME", HttpMethod::HTTP_GET, HashingUtils::Base64Encode(sseKey));
std::shared_ptr<Aws::Http::HttpRequest> getRequest = CreateHttpRequest(presignedUrlGet, HttpMethod::HTTP_GET, Aws::Utils::Stream::DefaultResponseStreamFactoryMethod);
getRequest->SetHeaderValue(Aws::S3::SSEHeaders::SERVER_SIDE_ENCRYPTION_CUSTOMER_ALGORITHM, Aws::S3::Model::ServerSideEncryptionMapper::GetNameForServerSideEncryption(Aws::S3::Model::ServerSideEncryption::AES256));
getRequest->SetHeaderValue(Aws::S3::SSEHeaders::SERVER_SIDE_ENCRYPTION_CUSTOMER_KEY, HashingUtils::Base64Encode(sseKey));
getRequest->SetHeaderValue(Aws::S3::SSEHeaders::SERVER_SIDE_ENCRYPTION_CUSTOMER_KEY_MD5, HashingUtils::Base64Encode(HashingUtils::CalculateMD5(strBuffer)));
std::shared_ptr<Aws::Http::HttpResponse> getResponse = httpClient->MakeRequest(getRequest);
std::cout << getResponse->GetResponseBody().rdbuf(); // Should output "Test Object"

There is also another set of functions that enable customers to add customized headers when generating an Amazon S3 presigned URL with SSE. For more details on using all of these functions, check out these test cases.

Please reach out to us with questions and improvements. As always, pull requests are welcome!

from AWS Developer Blog https://aws.amazon.com/blogs/developer/generate-an-amazon-s3-presigned-url-with-sse-using-the-aws-sdk-for-c/

AWS Toolkit for Visual Studio now supports Visual Studio 2019

AWS Toolkit for Visual Studio now supports Visual Studio 2019

A new release of the AWS Toolkit for Visual Studio has been published to the Visual Studio Marketplace. This new release adds support for Visual Studio 2019. Visual Studio 2019 is currently in preview; however, Microsoft has announced a general availability (GA) release date of April 2, 2019.

The AWS Toolkit for Visual Studio provides many features inside Visual Studio to help get your code running in AWS. This includes deploying ASP.NET and ASP.NET Core web applications to AWS Elastic Beanstalk, deploying containers to Amazon Elastic Container Service (Amazon ECS), or deploying .NET Core serverless applications with AWS Lambda and AWS CloudFormation.

The toolkit also contains an AWS Explorer tool window that can help you manage some of your most common developer resources, like Amazon S3 buckets and Amazon DynamoDB tables.

We have also had a couple of recent releases of .NET Core Lambda features that were only available through our .NET Core Global Tool, Amazon.Lambda.Tools. With the new release of the AWS Toolkit for Visual Studio, you can now take advantage of these features within Visual Studio.

Lambda Custom .NET Core runtime

Support for using custom .NET Core runtimes with Lambda was released recently. This provides the ability to use any version of .NET Core in Lambda, such as .NET Core 2.2 or the .NET Core 3.0 preview. That support has now been added to the AWS Toolkit for Visual Studio.

You can create a Lambda project using .NET Core 2.2 by selecting the Custom Runtime Function blueprint. To use .NET Core 3.0 preview, just update the target framework of the project in the project properties.

When you right-click the project and choose Publish to AWS, as you would for any other Lambda project, you might notice some differences.

The Language Runtime box now says Custom .NET Core Runtime, and the Framework says netcoreapp3.0. Also, you no longer select an assembly, type, and method for the Lambda handler. Instead, there is an optional handler field. The handler is optional because in a custom runtime function, the .NET Core project is packaged up as an executable file, so the Main method is called to start the .NET process. You’re welcome to still set a value for the handler, which you can access in your code by using the _HANDLER environment variable.

Lambda layers

The Lambda layers feature was also added to the Amazon.Lambda.Tools .NET Core Global Tool. This makes it easy to create a layer of your .NET assemblies and tell Lambda to deploy your project with a specified layer, so the deployment bundle can be smaller. And if you choose to create the layer in an Amazon Linux environment, you can optimize the .NET assemblies added to the layer by having them pre-jitted, which can reduce your cold-start time. You can find out more about optimizing packages here.

You still need to create the layer with the Amazon.Lambda.Tools .NET Core Global Tool. But once you create the layer, you can reference the layer in your projects and Visual Studio will honor the layer when creating the deployment package.

Let’s do a quick walkthrough on how to use layers with Visual Studio. You’ll need to have Amazon.Lambda.Tools installed, which you can do by using the following command.


dotnet tool install -g Amazon.Lambda.Tools

If you already have it installed, make sure it's at least version 3.2.0. If it isn't, use the following command to update.


dotnet tool update -g Amazon.Lambda.Tools

Then, in a console window, navigate to your project and execute the following command.


dotnet lambda publish-layer <layer-name> --layer-type runtime-package-store --s3-bucket <bucket>

This creates a layer and outputs an ARN for the new version of the layer, which should look something like this, depending on what you name your layer: arn:aws:lambda:us-west-2:123412341234:layer:DemoTest:1

If you used the Lambda projects template that deploys straight to the Lambda service, you can specify the layer in the aws-lambda-tools-defaults.json configuration file with the function-layers key. If you want to use multiple layers, use a comma-separated list.


{
    "profile"               : "default",
    "region"                : "us-west-2",
    "configuration"         : "Release",
    "framework"             : "netcoreapp2.1",
    "function-runtime"      : "dotnetcore2.1",
    "function-memory-size"  : 256,
    "function-timeout"      : 30,
    "function-handler"      : "DemoLayerTest::DemoLayerTest.Function::FunctionHandler",

    "function-layers"       : "arn:aws:lambda:us-west-2:123412341234:layer:DemoTest:1"
}

If you’re deploying with an AWS CloudFormation template, usually called the serverless.template file, then you need to specify the layer using the Layers property.


"Get" : {
    "Type" : "AWS::Serverless::Function",
    "Properties": {
        "Handler": "DemoLayerServerlessTest::DemoLayerServerlessTest.Functions::Get",
        "Runtime": "dotnetcore2.1",
        "CodeUri": "",
        
        "Layers" : ["arn:aws:lambda:us-west-2:123412341234:layer:DemoTest:1"],
        
        "MemorySize": 256,
        "Timeout": 30,
        "Role": null,
        "Policies": [ "AWSLambdaBasicExecutionRole" ],
        "Events": {
        }
    }
}

With the layer specified, the deployment wizard in Visual Studio will pick up this layer setting. Visual Studio makes sure the layer information is passed down into the underlying dotnet publish command used by our tooling, so that the deployment bundle is created without the .NET assemblies that will be provided by the layers.

I imagine a typical scenario for layers is that a common layer is created by one developer on a team, or through automation, possibly using the optimization feature. That layer is then shared with the rest of the team for their development environments.

A final note about layers and custom runtimes. Currently, you can’t use the layer feature with the custom runtimes feature. This is because a custom runtime is deployed as a self-contained application with all the required runtime assemblies of .NET Core included with the deployment package. The underlying call to dotnet publish used in self-contained mode doesn’t support passing in a manifest of assemblies to exclude.

Visual Studio new project wizard

Visual Studio 2019 contains a redesigned experience for creating projects. Not all of the features of this redesign are available yet to Visual Studio extensions like the AWS Toolkit for Visual Studio.

For example, the Language, Platform, or Project type filters you can see here in the new project wizard are not accessible to custom project templates like the ones that AWS provides. If you select C# in the language filter, our C# project templates will not show up.

The Visual Studio team plans to address this issue for extensions in a future update to Visual Studio 2019, but the changes will not be in the initial GA version. Until this is addressed, the best way to find our AWS project templates is to use the search bar and search for either "AWS" or "Lambda".

Conclusion

We have come a long way since the AWS Toolkit for Visual Studio was first released in 2011 for Visual Studio 2008 and 2010, with support for ASP.NET web apps. With serverless and container support, as well as the Elastic Beanstalk support for web applications, you can use the toolkit to deploy so many types of applications to AWS.

We enjoy getting your feedback on our Visual Studio support. You can reach out to us about the AWS Toolkit for Visual Studio on our .NET repository.

–Norm

from AWS Developer Blog https://aws.amazon.com/blogs/developer/aws-toolkit-for-visual-studio-now-supports-visual-studio-2019/