Generate an Amazon S3 presigned URL with SSE using the AWS SDK for C++

Amazon Simple Storage Service (Amazon S3) presigned URLs give you or your customers an option to access an Amazon S3 object identified in the URL, without having AWS credentials and permissions.

With server-side encryption (SSE) specified, Amazon S3 encrypts the data when the object is written to disks, and decrypts it when the object is accessed from disks. Amazon S3 presigned URLs and SSE are two different features, and the AWS SDK for C++ has supported each of them separately for some time. Now customers using the AWS SDK for C++ can use them together.

In this blog series, we’ve already explained the different types of SSE and how to generate and use Amazon S3 presigned URLs with SSE by using the AWS SDK for Java. In this blog post, we’ll give examples for AWS SDK for C++ customers. Note that, unlike the AWS SDK for Java, the AWS SDK for C++ no longer supports AWS Signature Version 2 (SigV2); the underlying signer is AWS Signature Version 4 (SigV4).

Using the functions defined in S3Client.h to generate an S3 presigned URL with SSE is straightforward. Let’s look at PutObject as an example.

Generate a presigned URL with server-side encryption (SSE) and S3 managed keys (SSE-S3)

Aws::S3::S3Client s3Client;
Aws::String presignedPutUrl = s3Client.GeneratePresignedUrlWithSSES3("BUCKET_NAME", "KEY_NAME", HttpMethod::HTTP_PUT);

Generate a presigned URL with server-side encryption (SSE) and KMS managed keys (SSE-KMS)

Aws::S3::S3Client s3Client;
// If KMS_MASTER_KEY_ID is empty, the KMS managed default key for Amazon S3, "aws/s3", is used.
Aws::String presignedPutUrl = s3Client.GeneratePresignedUrlWithSSEKMS("BUCKET_NAME", "KEY_NAME", HttpMethod::HTTP_PUT, "KMS_MASTER_KEY_ID");

Generate a presigned URL with server-side encryption (SSE) and customer-supplied key (SSE-C)

Aws::S3::S3Client s3Client;
Aws::String presignedPutUrl = s3Client.GeneratePresignedUrlWithSSEC("BUCKET_NAME", "KEY_NAME", HttpMethod::HTTP_PUT, "BASE64_ENCODED_AES256_KEY");
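
A client that has no AWS credentials can then upload an object through the URL. As a quick illustration (the file name and URL placeholder below are hypothetical), here’s how that might look with curl for the SSE-S3 URL from the first example; note that the x-amz-server-side-encryption header the client sends has to match what was signed into the URL:

curl -X PUT -T local-file.txt \
    -H "x-amz-server-side-encryption: AES256" \
    "<presigned-put-url>"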

 

Actually using generated S3 presigned URLs with SSE programmatically in your project requires a little more work. First, you have to create an HttpRequest. Then, based on the SSE type, you need to add some HTTP headers. Finally, unlike common Amazon S3 operations, you have to explicitly send out the request with the content body (the actual object you want to upload) set.

Code snippet to create HttpRequest

std::shared_ptr<Aws::Http::HttpRequest> putRequest = CreateHttpRequest(presignedPutUrl, HttpMethod::HTTP_PUT, Aws::Utils::Stream::DefaultResponseStreamFactoryMethod);

Add required headers if using SSE-S3

putRequest->SetHeaderValue(Aws::S3::SSEHeaders::SERVER_SIDE_ENCRYPTION, Aws::S3::Model::ServerSideEncryptionMapper::GetNameForServerSideEncryption(Aws::S3::Model::ServerSideEncryption::AES256));

Add required headers if using SSE-KMS

putRequest->SetHeaderValue(Aws::S3::SSEHeaders::SERVER_SIDE_ENCRYPTION, Aws::S3::Model::ServerSideEncryptionMapper::GetNameForServerSideEncryption(Aws::S3::Model::ServerSideEncryption::aws_kms));

Add required headers if using SSE-C

// Suppose the customer-supplied key is stored in a ByteBuffer called sseKey
putRequest->SetHeaderValue(Aws::S3::SSEHeaders::SERVER_SIDE_ENCRYPTION_CUSTOMER_ALGORITHM, Aws::S3::Model::ServerSideEncryptionMapper::GetNameForServerSideEncryption(Aws::S3::Model::ServerSideEncryption::AES256));
putRequest->SetHeaderValue(Aws::S3::SSEHeaders::SERVER_SIDE_ENCRYPTION_CUSTOMER_KEY, HashingUtils::Base64Encode(sseKey));
Aws::String strBuffer(reinterpret_cast<char*>(sseKey.GetUnderlyingData()), sseKey.GetLength());
putRequest->SetHeaderValue(Aws::S3::SSEHeaders::SERVER_SIDE_ENCRYPTION_CUSTOMER_KEY_MD5, HashingUtils::Base64Encode(HashingUtils::CalculateMD5(strBuffer)));

Add content body (object’s data) to the request and send it out

std::shared_ptr<Aws::StringStream> objectStream = Aws::MakeShared<Aws::StringStream>("Test");
*objectStream << "Test Object";
objectStream->flush();
putRequest->AddContentBody(objectStream);
Aws::StringStream intConverter;
intConverter << objectStream->tellp();
putRequest->SetContentLength(intConverter.str());
putRequest->SetContentType("text/plain");
std::shared_ptr<Aws::Http::HttpClient> httpClient = Aws::Http::CreateHttpClient(Aws::Client::ClientConfiguration());
std::shared_ptr<Aws::Http::HttpResponse> putResponse = httpClient->MakeRequest(putRequest);

 

As we said at the beginning, you can also use a presigned URL to access objects (GetObject). For SSE-S3 and SSE-KMS, SSE-related headers are not required. That means that, for access, there’s no difference between using a plain presigned URL and a presigned URL with SSE. However, for SSE-C, the related headers are still required, and you can use the same GeneratePresignedUrlWithSSEC function to generate a presigned URL to access objects.

Get an object uploaded with an SSE-S3 or SSE-KMS presigned URL

// For GET, using GeneratePresignedUrlWithSSES3 or GeneratePresignedUrlWithSSEKMS works the same way
Aws::String presignedUrlGet = s3Client.GeneratePresignedUrl("BUCKET_NAME", "KEY_NAME", HttpMethod::HTTP_GET);
std::shared_ptr<Aws::Http::HttpRequest> getRequest = CreateHttpRequest(presignedUrlGet, HttpMethod::HTTP_GET, Aws::Utils::Stream::DefaultResponseStreamFactoryMethod);
std::shared_ptr<Aws::Http::HttpResponse> getResponse = httpClient->MakeRequest(getRequest);

Get an object uploaded with an SSE-C presigned URL

Aws::String presignedUrlGet = s3Client.GeneratePresignedUrlWithSSEC("BUCKET_NAME", "KEY_NAME", HttpMethod::HTTP_GET, HashingUtils::Base64Encode(sseKey));
std::shared_ptr<Aws::Http::HttpRequest> getRequest = CreateHttpRequest(presignedUrlGet, HttpMethod::HTTP_GET, Aws::Utils::Stream::DefaultResponseStreamFactoryMethod);
getRequest->SetHeaderValue(Aws::S3::SSEHeaders::SERVER_SIDE_ENCRYPTION_CUSTOMER_ALGORITHM, Aws::S3::Model::ServerSideEncryptionMapper::GetNameForServerSideEncryption(Aws::S3::Model::ServerSideEncryption::AES256));
getRequest->SetHeaderValue(Aws::S3::SSEHeaders::SERVER_SIDE_ENCRYPTION_CUSTOMER_KEY, HashingUtils::Base64Encode(sseKey));
getRequest->SetHeaderValue(Aws::S3::SSEHeaders::SERVER_SIDE_ENCRYPTION_CUSTOMER_KEY_MD5, HashingUtils::Base64Encode(HashingUtils::CalculateMD5(strBuffer)));
std::shared_ptr<Aws::Http::HttpResponse> getResponse = httpClient->MakeRequest(getRequest);
std::cout << getResponse->GetResponseBody().rdbuf(); // Should output "Test Object"

There is also another set of functions that enable customers to add customized headers when generating an Amazon S3 presigned URL with SSE. For more details on using all of these functions, check out these test cases.

Please reach out to us with questions and improvements. As always, pull requests are welcome!

from AWS Developer Blog https://aws.amazon.com/blogs/developer/generate-an-amazon-s3-presigned-url-with-sse-using-the-aws-sdk-for-c/

AWS Toolkit for Visual Studio now supports Visual Studio 2019

A new release of the AWS Toolkit for Visual Studio has been published to the Visual Studio Marketplace. This new release adds support for Visual Studio 2019, which is currently in preview; Microsoft has announced a general availability (GA) release date of April 2, 2019.

The AWS Toolkit for Visual Studio provides many features inside Visual Studio to help get your code running in AWS. This includes deploying ASP.NET and ASP.NET Core web applications to AWS Elastic Beanstalk, deploying containers to Amazon Elastic Container Service (Amazon ECS), or deploying .NET Core serverless applications with AWS Lambda and AWS CloudFormation.

The toolkit also contains an AWS Explorer tool window that can help you manage some of your most common developer resources, like Amazon S3 buckets and Amazon DynamoDB tables.

We have also had a couple of recent releases of .NET Core Lambda features that were only available in our .NET Core Global Tool, Amazon.Lambda.Tools. With the new release of the AWS Toolkit for Visual Studio, you can now take advantage of these features within Visual Studio.

Lambda Custom .NET Core runtime

Support for using custom .NET Core runtimes with Lambda was released recently. This provides the ability to use any version of .NET Core in Lambda, such as .NET Core 2.2 or the .NET Core 3.0 preview. That support has now been added to the AWS Toolkit for Visual Studio.

You can create a Lambda project using .NET Core 2.2 by selecting the Custom Runtime Function blueprint. To use .NET Core 3.0 preview, just update the target framework of the project in the project properties.

When you right-click the project and choose Publish to AWS, as you would for any other Lambda project, you might notice some differences.

The Language Runtime box now says Custom .NET Core Runtime, and the Framework says netcoreapp3.0. Also, you no longer select an assembly, type, and method for the Lambda handler. Instead, there is an optional handler field. The handler is optional because in a custom runtime function, the .NET Core project is packaged up as an executable file, so the Main method is called to start the .NET process. You’re welcome to still set a value for the handler, which you can access in your code by using the _HANDLER environment variable.
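
As a concrete sketch of that executable model, here’s roughly what the entry point of a custom runtime function looks like, modeled on what the Custom Runtime Function blueprint generates (simplified here, so treat it as illustrative rather than the exact blueprint code). It uses the Amazon.Lambda.RuntimeSupport package to run the Lambda event loop:

using System;
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.Lambda.RuntimeSupport;
using Amazon.Lambda.Serialization.Json;

public class Function
{
    private static async Task Main(string[] args)
    {
        // The optional handler value from the deployment wizard is surfaced
        // to the process through the _HANDLER environment variable.
        Console.WriteLine($"_HANDLER: {Environment.GetEnvironmentVariable("_HANDLER")}");

        // Because a custom runtime function is packaged as an executable,
        // Main is responsible for starting the Lambda event loop.
        Func<string, ILambdaContext, string> func = FunctionHandler;
        using (var handlerWrapper = HandlerWrapper.GetHandlerWrapper(func, new JsonSerializer()))
        using (var bootstrap = new LambdaBootstrap(handlerWrapper))
        {
            await bootstrap.RunAsync();
        }
    }

    public static string FunctionHandler(string input, ILambdaContext context)
    {
        return input?.ToUpper();
    }
}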

Lambda layers

The Lambda layers feature was also added to the Amazon.Lambda.Tools .NET Core Global Tool. This makes it easy to create a layer from your .NET assemblies and tell Lambda to deploy your project with a specified layer, which reduces the size of the deployment bundle. And if you create the layer in an Amazon Linux environment, you can optimize the .NET assemblies added to the layer by having them pre-jitted, which can reduce your cold-start time. You can find out more about optimizing packages here.

You still need to create the layer with the Amazon.Lambda.Tools .NET Core Global Tool. But once you create the layer, you can reference the layer in your projects and Visual Studio will honor the layer when creating the deployment package.

Let’s do a quick walkthrough on how to use layers with Visual Studio. You’ll need to have Amazon.Lambda.Tools installed, which you can do by using the following command.


dotnet tool install -g Amazon.Lambda.Tools

If you already have it installed, make sure it’s at least version 3.2.0. If it isn’t, use the following command to update.


dotnet tool update -g Amazon.Lambda.Tools

Then, in a console window, navigate to your project and execute the following command.


dotnet lambda publish-layer <layer-name> --layer-type runtime-package-store --s3-bucket <bucket>

This creates a layer and outputs an ARN for the new version of the layer, which should look something like this, depending on what you name your layer: arn:aws:lambda:us-west-2:123412341234:layer:DemoTest:1

If you used the Lambda project template that deploys straight to the Lambda service, you can specify the layer in the aws-lambda-tools-defaults.json configuration file with the function-layers key. To use multiple layers, use a comma-separated list.


{
    "profile"               : "default",
    "region"                : "us-west-2",
    "configuration"         : "Release",
    "framework"             : "netcoreapp2.1",
    "function-runtime"      : "dotnetcore2.1",
    "function-memory-size"  : 256,
    "function-timeout"      : 30,
    "function-handler"      : "DemoLayerTest::DemoLayerTest.Function::FunctionHandler",

    "function-layers"       : "arn:aws:lambda:us-west-2:123412341234:layer:DemoTest:1"
}

If you’re deploying with an AWS CloudFormation template, usually called the serverless.template file, then you need to specify the layer using the Layers property.


"Get" : {
    "Type" : "AWS::Serverless::Function",
    "Properties": {
        "Handler": "DemoLayerServerlessTest::DemoLayerServerlessTest.Functions::Get",
        "Runtime": "dotnetcore2.1",
        "CodeUri": "",
        
        "Layers" : ["arn:aws:lambda:us-west-2:123412341234:layer:DemoTest:1"],
        
        "MemorySize": 256,
        "Timeout": 30,
        "Role": null,
        "Policies": [ "AWSLambdaBasicExecutionRole" ],
        "Events": {
        }
    }
}

With the layer specified, the deployment wizard in Visual Studio picks up this layer setting. Visual Studio makes sure the layer information is passed down into the underlying dotnet publish command used by our tooling, so that the deployment bundle is created without the .NET assemblies that the layers will provide.

I imagine a typical scenario for layers is that one dev on a team, or an automated process, creates a common layer, possibly using the optimization feature, and then shares that layer with the rest of the team for use in their development environments.

A final note about layers and custom runtimes. Currently, you can’t use the layer feature with the custom runtimes feature. This is because a custom runtime is deployed as a self-contained application with all the required runtime assemblies of .NET Core included with the deployment package. The underlying call to dotnet publish used in self-contained mode doesn’t support passing in a manifest of assemblies to exclude.

Visual Studio new project wizard

Visual Studio 2019 contains a redesigned experience for creating projects. Not all of the features of this redesign are available yet to Visual Studio extensions like the AWS Toolkit for Visual Studio.

For example, the Language, Platform, and Project type filters in the new project wizard are not accessible to custom project templates like the ones that AWS provides. If you select C# in the language filter, our C# project templates will not show up.

The Visual Studio team plans to address this issue for extensions in a future update to Visual Studio 2019, but the changes will not be in the initial GA version. Until this is addressed, the best way to find our AWS project templates is to use the search bar and either search for “AWS” or “Lambda”.

Conclusion

We have come a long way since the AWS Toolkit for Visual Studio was first released in 2011 for Visual Studio 2008 and 2010, with support for ASP.NET web apps. With serverless and container support, as well as the Elastic Beanstalk support for web applications, you can use the toolkit to deploy so many types of applications to AWS.

We enjoy getting your feedback on our Visual Studio support. You can reach out to us about the AWS Toolkit for Visual Studio on our .NET repository.

–Norm

from AWS Developer Blog https://aws.amazon.com/blogs/developer/aws-toolkit-for-visual-studio-now-supports-visual-studio-2019/

AWS Toolkit for IntelliJ – Now generally available

Last year at re:Invent we told you that we were working on the AWS Toolkit for IntelliJ. Since then, the toolkit has been in active development on GitHub.

I’m happy to share that the AWS Toolkit for IntelliJ is now generally available!

The toolkit provides an integrated experience for developing serverless applications. For example, you can:

  • Create a new, ready-to-deploy serverless application in Java.
  • Locally test your code with step-through debugging in an execution environment similar to that of AWS Lambda.
  • Deploy your applications to the AWS Region of your choice.
  • Invoke your Lambda functions locally or remotely.
  • Use and customize sample payloads from different event sources such as Amazon S3, Amazon API Gateway, and Amazon SNS.

Installation

First, install the AWS Serverless Application Model (SAM) CLI. It provides a Lambda-like execution environment and enables you to step-through and debug your code. This toolkit also uses SAM CLI to build and create deployment packages for your applications. You can find installation instructions for your system here.
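
As one example, at the time of writing you could install the SAM CLI through pip on most platforms (treat this as illustrative; the installation method varies by system, so follow the linked instructions for yours):

pip install --user aws-sam-cli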

Next, install the AWS Toolkit for IntelliJ via the JetBrains plugins repository. In the Settings/Preferences dialog, click on Plugins, select Marketplace, search for “AWS Toolkit”, and click the Install button. Then restart the IDE for the changes to take effect.

Building a serverless application with IntelliJ

Now that the IDE is configured and ready, I create a new project, select AWS on the left, and then choose AWS Serverless Application.

In the next window, I choose a name for my project and finish.

I’m using Maven to manage the project, and the Project Object Model (pom.xml) file is not in the root directory, so I have to select it and right-click to add it as a Maven project.

Before I start deploying the application, I choose the AWS Region from the bottom-right menu. Let’s use Stockholm.

The default application is composed of a single Lambda function that you can call via HTTP using Amazon API Gateway. I open the code in the src/main/java/helloworld directory and change the message to be “Hello World from IntelliJ”.

The default application comes with unit tests that make it easy to build high-quality applications. Let’s update the assertion to make the test pass.

Running a function locally

Back in the function, I click the Lambda icon to the left of the class definition to see the options to run the function locally or start a local step-through debugging session.

Let’s run the function locally. The first time I run the function, I can edit the configuration to choose the AWS credentials I want to use, the Region (for AWS services used by the function), and the input event to provide. I select the API Gateway AWS Proxy to simulate an invocation by API Gateway. I can customize the HTTP request using the syntax described here. I can also pass environment variables to customize the behavior of the function.
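
For reference, here is a trimmed sketch of what an API Gateway AWS Proxy input event looks like; the field values are illustrative placeholders, and the real event contains additional fields such as requestContext:

{
  "httpMethod": "GET",
  "path": "/hello",
  "headers": {
    "Accept": "application/json"
  },
  "queryStringParameters": null,
  "body": null,
  "isBase64Encoded": false
}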

I select Run, and two tabs appear:

  • The Build tab, using the SAM CLI to do the build.
  • The Run tab, where I can check the output of my function.

The local invocation of the function is using Docker containers to emulate the Lambda environment.

Debugging a function locally

I’m not really sure how the location part of the output message is computed by this application, so I add a breakpoint where the pageContents variable is given a value. I select the option to debug locally by clicking the gutter icon.

I can now use the IntelliJ debugger to get a better understanding of my function. I click Step Into to go into the getPageContents method. There I Step Over a few times to see how the location is taken from the public https://checkip.amazonaws.com website.

I finally resume the program execution to get a similar result as before.

Deploying a serverless application

Everything works as expected, so I’m ready to go to production. I deploy the serverless application in the AWS Region of my choice. To do so, I select the template.yaml file in the root directory. This template uses AWS SAM to describe the deployment in terms of:

  • Infrastructure, in this case a Lambda function, API, permissions, and so on.
  • Code, because the Handler property of the function is specifying the source file and the method that is invoked by the Lambda platform.

Right-clicking the template.yaml file, I choose Deploy Serverless Application. AWS SAM uses AWS CloudFormation to create and update the required resources. I choose to create a new AWS CloudFormation stack, but you can use the same deployment option to update an existing stack. I create an S3 bucket to host the deployment packages that the build process creates. The new bucket is automatically created in the AWS Region I selected before. You can reuse the bucket for multiple deployments; the SAM CLI automatically creates unique names for each build.

I don’t have template parameters to pass here, but they can be used by SAM or AWS CloudFormation to customize the behavior of a template for different environments.

If your build process depends on the actual Lambda execution environment, you can choose to run it inside a container to provide the necessary emulation.

I choose Deploy, and after a few minutes, the AWS CloudFormation stack is created.

Running a function remotely

Now I can invoke the Lambda function remotely. In the AWS Explorer on the left, I find the function under Lambda, where all functions in the selected Region are listed, and under AWS CloudFormation, where all stacks that have a Lambda function are listed.

I right-click the Lambda function to run it remotely. You can also jump to the source code of the function from here. Again, I create a configuration similar to what I did for the local invocation: I choose the API Gateway AWS Proxy input event, then choose Run to get the output of my serverless application. I can also see the logs of the function here, including the duration, the billed duration, the memory size, and the memory actually used by the invocation.

Invoking the HTTP endpoint

To invoke the API via HTTP, I need to know the API endpoint. I can get it from the output of the AWS CloudFormation stack, for example, using the AWS CLI:

$ aws cloudformation describe-stacks --stack-name hello-world-from-IntelliJ --region eu-north-1

In the output, there is a section similar to this, with the API endpoint in the OutputValue:

{
  "Description": "API Gateway endpoint URL for Prod stage for Hello World function", 
  "OutputKey": "HelloWorldApi", 
  "OutputValue": "https://<API_ID>.execute-api.eu-north-1.amazonaws.com/Prod/hello/"
}

Now I can invoke the API using curl, for example:

$ curl -s https://<API_ID>.execute-api.eu-north-1.amazonaws.com/Prod/hello/
{
  "message": "Hello World from IntelliJ",
  "location": "x.x.x.x"
}

Available now

This toolkit is distributed under the open source Apache License, Version 2.0.

More information is available on the AWS Toolkit for IntelliJ product page.

There are lots of other features I didn’t have time to describe in this post. Just start using this toolkit to discover more. And let us know what you use it for!

from AWS Developer Blog https://aws.amazon.com/blogs/developer/aws-toolkit-for-intellij-now-generally-available/

AWS Lambda layers with .NET Core

Lambda layers enable you to provide additional code and content to your AWS Lambda function. A layer is composed of additional files used by your Lambda function that are extracted into the /opt directory in the Lambda compute environment.

Since the release of Lambda layers, one of the common questions I hear is how .NET Core Lambda functions can take advantage of this feature. For .NET Core, there are a couple of challenges to overcome. First, you have to tell the .NET runtime to load assemblies from outside of the deployment bundle. The other big challenge is that when the dotnet publish command is executed, which all of our .NET Lambda tools rely on to gather the required .NET assemblies, the publish command needs to know which assemblies not to include because a layer will provide them.

Thankfully, the .NET Core tooling has some lesser-known features to make this work, but they’re a bit tricky to use. With version 3.2.0 of the Amazon.Lambda.Tools .NET Core Global Tool, the process of creating layers and using them with your Lambda functions is now simple.

The major benefit of using layers is that they can dramatically reduce the size of the .zip file that has to be uploaded to Lambda whenever you deploy a function. There are also opportunities to improve cold-start performance, described below in the section on optimizing packages.

Runtime package stores

The .NET Core technology that Amazon.Lambda.Tools uses to create layers is called runtime package stores (https://docs.microsoft.com/en-us/dotnet/core/deploying/runtime-store), introduced as part of .NET Core 2.0.

A manifest file is used to create a runtime package store. The project file, that is, the *.csproj file in your Lambda project, is an example of a manifest. When you create a runtime package store, all of the NuGet packages identified by the PackageReference elements in the manifest, along with their dependencies, are captured in a directory that Amazon.Lambda.Tools turns into a Lambda layer.


<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
    <GenerateRuntimeConfigurationFiles>true</GenerateRuntimeConfigurationFiles>
    <AWSProjectType>Lambda</AWSProjectType>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Amazon.Lambda.Core" Version="1.1.0" />
    <PackageReference Include="Amazon.Lambda.Serialization.Json" Version="1.4.0" />
  </ItemGroup>
</Project>

This means you can create a layer for all of the dependencies in your Lambda project. Then when you deploy, you will only upload the assemblies for your local projects. Or you could create a custom manifest in the same style as a .csproj file with all of your common dependencies, and create a layer from that. You then reference that layer from all of your Lambda projects.
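
As a sketch of that second approach, a shared manifest is just a .csproj-style file; the packages below are hypothetical stand-ins for whatever your team actually has in common. You would then point the publish-layer command at this file instead of a project file (see dotnet lambda publish-layer --help for the relevant option).

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <!-- Hypothetical common dependencies shared across your Lambda projects -->
    <PackageReference Include="AWSSDK.S3" Version="3.3.31" />
    <PackageReference Include="Newtonsoft.Json" Version="12.0.1" />
  </ItemGroup>
</Project>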

For a full description of how runtime package stores are turned into Lambda layers and how they work, I recommend checking out our .NET Lambda layers documentation, https://github.com/aws/aws-extensions-for-dotnet-cli/blob/master/docs/Layers.md.

Creating a Lambda layer

As we mentioned earlier, a layer can be created from the .csproj file of a Lambda project. To create a layer, execute the following command in the project directory.


dotnet lambda publish-layer LayerBlogDemoLayer --layer-type runtime-package-store --s3-bucket <s3-bucket>

This creates a layer called LayerBlogDemoLayer. The type will be runtime-package-store. Currently, runtime-package-store is the only valid value, but it’s a required field to allow us to create new types of layers in the future. The Amazon S3 bucket that’s specified is used to upload the runtime package store created locally, and which Lambda will use to create the layer from.

Here is the output of the publish-layer command.


...

... Progress: 100%
Upload complete to s3://normj-west2/LayerBlogDemoLayer-636888783824636933/artifact.xml
Create zip file of runtime package store directory
... zipping: dotnetcore\store\x64\netcoreapp2.1\artifact.xml
... zipping: dotnetcore\store\x64\netcoreapp2.1\amazon.lambda.core\1.0.0\lib\netstandard1.3\Amazon.Lambda.Core.dll
... zipping: dotnetcore\store\x64\netcoreapp2.1\amazon.lambda.core\1.1.0\lib\netstandard2.0\Amazon.Lambda.Core.dll
... zipping: dotnetcore\store\x64\netcoreapp2.1\amazon.lambda.serialization.json\1.4.0\lib\netstandard1.3\Amazon.Lambda.Serialization.Json.dll
... zipping: dotnetcore\store\x64\netcoreapp2.1\newtonsoft.json\9.0.1\lib\netstandard1.0\Newtonsoft.Json.dll
Uploading layer input zip file to S3
Uploading to S3. (Bucket: normj-west2 Key: LayerBlogDemoLayer-636888783824636933/packages.zip)
... Progress: 52%
... Progress: 100%
Upload complete to s3://normj-west2/LayerBlogDemoLayer-636888783824636933/packages.zip
Layer publish with arn arn:aws:lambda:us-west-2:123412341234:layer:LayerBlogDemoLayer:1

There are two things I want to call out here. First, an artifact.xml file was uploaded to Amazon S3. This file is important because when Lambda functions are later deployed using this layer, Amazon.Lambda.Tools downloads the artifact.xml file to tell the dotnet publish command which assemblies not to include. The use of this file is transparent to you, but be aware that it is uploaded to S3 and is meant to stay there as long as you want to use this layer. To share your layer with other accounts, you also need to share this object in S3.

The most important information in the output is the ARN of the new version of the layer, which you can see on the last line. This value is what you’ll use when deploying functions.

Inspect your layer

If you execute the dotnet lambda help command, you can see there are several new commands added to manage your layers.


PS> dotnet lambda help
Amazon Lambda Tools for .NET Core applications (3.2.0)
Project Home: https://github.com/aws/aws-extensions-for-dotnet-cli, https://github.com/aws/aws-lambda-dotnet

...

Commands to publish and manage AWS Lambda Layers:

        publish-layer           Command to publish a Layer that can be associated with a Lambda function
        list-layers             Command to list Layers
        list-layer-versions     Command to list versions for a Layer
        get-layer-version       Command to get the details of a Layer version
        delete-layer-version    Command to delete a version of a Layer

...

To inspect your layer, the get-layer-version command will let you know which assemblies are in the layer and where the artifact.xml file is stored.


PS< dotnet lambda get-layer-version arn:aws:lambda:us-west-2:123412341234:layer:LayerBlogDemoLayer:1
Amazon Lambda Tools for .NET Core applications (3.2.0)
Project Home: https://github.com/aws/aws-extensions-for-dotnet-cli, https://github.com/aws/aws-lambda-dotnet

Layer ARN:               arn:aws:lambda:us-west-2:123412341234:layer:LayerBlogDemoLayer
Version Number:          1
Created:                 3/22/2019 12:06 PM
License Info:
Compatible Runtimes:     dotnetcore2.1
Layer Type:              .NET Runtime Package Store

.NET Runtime Package Store Details:
Manifest Location:       s3://normj-west2/LayerBlogDemoLayer-636888783824636933/artifact.xml
Packages Optimized:      False
Packages Directory:      /opt/dotnetcore/store

Manifest Contents
-----------------------
<StoreArtifacts>
  <Package Id="Amazon.Lambda.Core" Version="1.1.0" />
  <Package Id="Amazon.Lambda.Core" Version="1.0.0" />
  <Package Id="Amazon.Lambda.Serialization.Json" Version="1.4.0" />
  <Package Id="Newtonsoft.Json" Version="9.0.1" />
</StoreArtifacts>

Deploying with your layers

To use the layer when you deploy the function, use the --function-layers switch. This should be set to the layer version ARN output by the publish-layer command. You can use a comma-separated list of ARNs to use multiple layers.


PS< dotnet lambda deploy-function LayerBlogDemo --function-layers arn:aws:lambda:us-west-2:123412341234:layer:LayerBlogDemoLayer:1
Amazon Lambda Tools for .NET Core applications (3.2.0)
Project Home: https://github.com/aws/aws-extensions-for-dotnet-cli, https://github.com/aws/aws-lambda-dotnet

Inspecting Lambda layers for runtime package store manifests
... arn:aws:lambda:us-west-2:626492997873:layer:LayerBlogDemoLayer:1: Downloaded package manifest for runtime package store layer
Executing publish command
Deleted previous publish folder
... invoking 'dotnet publish', working folder 'C:\temp\LayerBlogDemo\src\LayerBlogDemo\bin\Release\netcoreapp2.1\publish'
... Disabling compilation context to reduce package size. If compilation context is needed pass in the "/p:PreserveCompilationContext=false" switch.
... publish: Microsoft (R) Build Engine version 15.9.20+g88f5fadfbe for .NET Core
... publish: Copyright (C) Microsoft Corporation. All rights reserved.
... publish:   Restore completed in 46.95 ms for C:\temp\LayerBlogDemo\src\LayerBlogDemo\LayerBlogDemo.csproj.
... publish:   LayerBlogDemo -> C:\temp\LayerBlogDemo\src\LayerBlogDemo\bin\Release\netcoreapp2.1\rhel.7.2-x64\LayerBlogDemo.dll
... publish:   LayerBlogDemo -> C:\temp\LayerBlogDemo\src\LayerBlogDemo\bin\Release\netcoreapp2.1\publish\
Zipping publish folder C:\temp\LayerBlogDemo\src\LayerBlogDemo\bin\Release\netcoreapp2.1\publish to C:\temp\LayerBlogDemo\src\LayerBlogDemo\bin\Release\netcoreapp2.1\LayerBlogDemo.zip
... zipping: LayerBlogDemo.deps.json
... zipping: LayerBlogDemo.dll
... zipping: LayerBlogDemo.pdb
... zipping: LayerBlogDemo.runtimeconfig.json
Updating code for existing function LayerBlogDemo

Notice at the start of the deployment that the Lambda layers were inspected and the artifact.xml file was downloaded. Under the covers, the artifact.xml file was passed to the dotnet publish command, which told it not to include the Amazon.Lambda.* and Newtonsoft.Json NuGet packages. You can see that only the project’s assembly was included in the package bundle.
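
This is the standard runtime package store mechanism of the .NET CLI; conceptually, the tooling does something along the lines of the following simplified sketch (not the exact invocation Amazon.Lambda.Tools makes):

dotnet publish -c Release --manifest artifact.xml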

When using the deploy-serverless command to deploy with an AWS CloudFormation template, set the Layers property. The deploy-serverless command performs the same inspection of the layers that we saw with the deploy-function command.

{
    "AWSTemplateFormatVersion" : "2010-09-09",
    "Transform" : "AWS::Serverless-2016-10-31",
    "Description" : "An AWS Serverless Application.",

    "Resources" : {

        "LayerBlogDemo" : {
            "Type" : "AWS::Serverless::Function",
            "Properties": {
                "Handler": "LayerBlogDemo::LayerBlogDemo.Function::FunctionHandler",
                "Runtime": "dotnetcore2.1",
                "Layers" : ["arn:aws:lambda:us-west-2:123412341234:layer:LayerBlogDemoLayer:1"],
                "CodeUri": "",
                "MemorySize": 256,
                "Timeout": 30,
                "Role": null,
                "Policies": [ "AWSLambdaBasicExecutionRole" ]
            }
        }
    }
}

Optimizing packages

A feature of runtime package stores is that the .NET assemblies placed into the store can be optimized for the target runtime by pre-jitting them. Pre-jitting is the process of compiling an assembly’s platform-agnostic intermediate language, known as MSIL, into native machine code ahead of time. Without pre-jitting, the assemblies are compiled into native machine code when they are first loaded into the .NET Core process. Enabling the optimization can significantly reduce cold-start times in Lambda.

To create an optimized runtime package store layer, you must run the publish-layer command in an Amazon Linux environment. Attempting to create an optimized runtime package store layer on Windows or macOS will result in an error. If you’re creating the layer on Linux, be sure the distribution is Amazon Linux. Amazon EC2 provides an AMI with Amazon Linux and .NET Core 2.1 preinstalled, and you can easily launch it from the AWS Toolkit for Visual Studio.

A nice aspect of using optimized layers is that you can create the layer once on an Amazon Linux instance to get the pre-jitted benefits, and then share that layer version ARN with all of the Lambda functions you are developing on Windows and macOS.

To tell the publish-layer command to optimize the layer, set the --enable-package-optimization switch to true.
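
Putting it together, creating an optimized version of the earlier layer (run from an Amazon Linux environment) looks like this:

dotnet lambda publish-layer LayerBlogDemoLayer --layer-type runtime-package-store --s3-bucket <s3-bucket> --enable-package-optimization true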

Upcoming AWS Toolkit for Visual Studio

With today’s release you can use Lambda layers with the Amazon.Lambda.Tools .NET Core Global Tool. We’re finishing updates to the AWS Toolkit for Visual Studio to add support for the upcoming Visual Studio 2019. When we release this update, you’ll be able to reference the layers you created with the publish-layer command in either the aws-lambda-tools-defaults.json or serverless.template file, and the deployment from Visual Studio will process the layers in the same way we saw with the deploy-function and deploy-serverless commands. Be sure to monitor our .NET blog or the @awsfornet Twitter handle for the upcoming AWS Toolkit for Visual Studio release.

Summary

There’s a lot going on under the hood for Amazon.Lambda.Tools to provide a seamless experience. I recommend checking out the full docs about how this feature works from our GitHub repository, https://github.com/aws/aws-extensions-for-dotnet-cli/blob/master/docs/Layers.md. There’s a FAQ and more details about how to use layers with the package and package-ci commands for CI systems.

I’m excited to add this requested feature to our Lambda .NET tool chain. I hope you find the process of using layers seamless, and we welcome any feedback on this feature. Feel free to reach out on the .NET Lambda repository.

–Norm

from AWS Developer Blog https://aws.amazon.com/blogs/developer/aws-lambda-layers-with-net-core/

Now generally available: the ASP.NET Core Identity Provider for Amazon Cognito

We’re pleased to announce the general availability of the ASP.NET Core Identity Provider for Amazon Cognito, which enables ASP.NET Core developers to easily integrate with Amazon Cognito in their web applications.

Targeting .NET Standard 2.0, the custom ASP.NET Core Identity Provider for Amazon Cognito extends the ASP.NET Core Identity membership system by providing Amazon Cognito as a custom storage provider for ASP.NET Identity. In a few lines of code, you can add authentication and authorization that’s based on Amazon Cognito to your ASP.NET Core application.

Getting started with the sample web application

You can quickly try out the library by cloning and exploring the sample web application from the GitHub repository.

To use the sample application with your Amazon Cognito user pool, just make the necessary changes to the following properties in the appsettings.json file:

"AWS": {
    "Region": "<your region id goes here>",
    "UserPoolClientId": "<your user pool client id goes here>",
    "UserPoolClientSecret": "<your user pool client secret goes here>",
    "UserPoolId": "<your user pool id goes here>"
}

Migrating an existing web application to use the ASP.NET Core Identity Provider for Amazon Cognito

To upgrade an existing web application to use Amazon Cognito as the Identity provider, you need to add the following NuGet dependencies to your ASP.NET Core web application:

  • Amazon.AspNetCore.Identity.Cognito
  • Amazon.Extensions.CognitoAuthentication

To add Amazon Cognito as an Identity provider, add a call to services.AddCognitoIdentity(); in the ConfigureServices method.

public void ConfigureServices(IServiceCollection services)
{
    // Adds Amazon Cognito as Identity Provider
    services.AddCognitoIdentity();
    ...
}

Finally, if it isn’t already active, enable the support for authentication in ASP.NET Core in your Startup.cs file:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // If not already enabled, you need to enable ASP.NET Core authentication
    app.UseAuthentication();
    ...
} 

Using the CognitoUser class as your web application user class

Once you add Amazon Cognito as the default ASP.NET Core Identity provider, you need to use the newly introduced CognitoUser class instead of the default ApplicationUser class. These changes are required in any existing Razor views and controllers. Here is an example with a Razor view.

@using Microsoft.AspNetCore.Identity
@using Amazon.Extensions.CognitoAuthentication
@inject SignInManager<CognitoUser> SignInManager
@inject UserManager<CognitoUser> UserManager

The ASP.NET Core Identity Provider for Amazon Cognito comes with custom implementations of the ASP.NET Core Identity UserManager and SignInManager classes (CognitoUserManager and CognitoSignInManager). These implementations are designed to support Amazon Cognito use cases, such as:

  • User account management (account registration, account confirmation, user attributes update, account deletion)
  • User password management (password update, password reset)
  • User login and user logout (with or without two-factor authentication)
  • Roles and claims management
  • Authorization

Using Amazon Cognito as an Identity membership system is as simple as using CognitoUserManager and CognitoSignInManager in your existing scaffolded Identity controllers.

Register.cshtml.cs

public async Task<IActionResult> OnPostAsync(string returnUrl = null)
{
    returnUrl = returnUrl ?? Url.Content("~/");
    if (ModelState.IsValid)
    {
        // Retrieves a new user with the pool configuration set up
        CognitoUser user = _pool.GetUser(Input.UserName);
        // Sets the required user email
        user.Attributes.Add(CognitoAttributesConstants.Email, Input.Email);
        // Set additional attributes required by the user pool
        user.Attributes.Add("custom:domain", "foo.bar");
        // Registers the user in the pool
        IdentityResult result = await _userManager.CreateAsync(user, Input.Password);
        if (result.Succeeded)
        {
            _logger.LogInformation("User created a new account with password.");
            
            await _signInManager.SignInAsync(user, isPersistent: false);
            // Redirects to the account confirmation page
            return RedirectToPage("./ConfirmAccount");
        }
        foreach (var error in result.Errors)
        {
            ModelState.AddModelError(string.Empty, error.Description);
        }
    }

    // If we got this far, something failed, redisplay form
    return Page();
}

You can find complete samples in the Amazon Cognito ASP.NET Core Identity Provider GitHub repository, including the following:

  • User registration
  • User login with and without two-factor authentication
  • Account confirmation
  • How to change or reset a user password

Feel free to explore other examples in the documentation guide available on GitHub.

Authentication and authorization

By default, authentication is supported by the Amazon CognitoAuthentication Extension Library using the Secure Remote Password (SRP) protocol. In addition, ASP.NET Core authorization provides both a simple, declarative role-based model and a rich policy-based model to handle authorization.

We use Amazon Cognito groups to support role-based authorization. Restricting access to only users who are part of an “Admin” group is as simple as adding the following attribute to the controllers or methods you want to restrict access to:

[Authorize(Roles = "Admin")]
public class AdminController : Controller
{
}

Similarly, we use Amazon Cognito user attributes to support claims-based authorization. Amazon Cognito prefixes custom attributes with the key “custom:”.

The following snippet shows how you can restrict access to Amazon Cognito users with a specific “domain” attribute value by creating a custom policy and applying it to your resources. You can do this in the ConfigureServices method of your Startup.cs file:

public void ConfigureServices(IServiceCollection services)
{
    List<string> authorizedDomains = new List<string>()
    {
        "amazon.com",
        "foo.bar"
    };

    services.AddAuthorization(options =>
    {
        options.AddPolicy("AuthorizedDomainsOnly", policy => policy.RequireClaim("custom:domain", authorizedDomains));
    });
    
    ...
}
[Authorize(Policy = "AuthorizedDomainsOnly")]
public class RestrictedController : Controller
{
}

Providing feedback

We would love to know how you’re using the ASP.NET Core Identity Provider for Amazon Cognito. Please give us any feedback and check out the source on GitHub!

Come join the AWS SDK for .NET community chat on Gitter.
Submit a feature request or up-vote existing ones on the GitHub Issues page.

from AWS Developer Blog https://aws.amazon.com/blogs/developer/now-generally-available-the-asp-net-core-identity-provider-for-amazon-cognito/