
Running Java applications on Amazon EC2 A1 instances with Amazon Corretto

This post is contributed by Jeff Underhill | EC2 Principal Business Development Manager

Amazon Corretto is a no-cost, multiplatform, production-ready distribution of the Open Java Development Kit (OpenJDK). Production-ready Linux builds of JDK 8 and JDK 11 for the 64-bit Arm architecture were released on September 17, 2019. Scale-out Java applications can achieve significant cost savings on the Arm-based Amazon EC2 A1 instances. Read on to learn more about Amazon EC2 A1 instances, Amazon Corretto, and why support for 64-bit Arm in Amazon Corretto is a significant development for Java developers building cloud-native applications.

What are Amazon EC2 A1 instances?

Last year at re:Invent, Amazon Web Services (AWS) introduced Amazon EC2 A1 instances powered by AWS Graviton processors, which feature 64-bit Arm Neoverse cores and custom silicon designed by AWS. The A1 instances deliver up to 45% cost savings for scale-out and Arm-based applications such as web servers, containerized microservices, caching fleets, distributed data stores, and Arm-native software development workflows, all supported by the extensive and growing Arm software ecosystem.

Why Java and Amazon EC2 A1 instances?
The majority of customers we’ve spoken to have experienced a seamless transition and are realizing cost benefits with A1 instances, especially customers transitioning architecture-agnostic applications, which often run seamlessly on Arm-based platforms. Today, there are many examples of architecture-agnostic programming languages, such as PHP, Python, Node.js, and Go, and all of these are well supported on Arm-based A1 instances. The Java programming language has been around for almost 25 years and is one of the most broadly adopted programming languages. The TIOBE Programming Community Index (as of September 2019) shows Java ranked as the #1 or #2 programming language from 2004 to 2019, and the annual GitHub Octoverse report shows Java consistently ranked #2 between 2014 and 2018. Java was designed to have as few implementation dependencies as possible, which makes Java applications portable across processor architectures. This portability gives you a choice of how and where to run your Java-based workloads.

What is Amazon Corretto?
Java is one of the most popular languages in use by AWS customers, and AWS is committed to supporting Java and keeping it free. That’s why AWS introduced Amazon Corretto, a no-cost, multiplatform, production-ready distribution of OpenJDK from Amazon. AWS runs Corretto internally on thousands of production services and has committed to distributing security updates to Corretto 8 at no cost until at least June 2023. Amazon Corretto is available for both JDK 8 and JDK 11; you can learn more in the documentation. If you’re curious about what goes into building Java in the open, and specifically the Amazon Corretto OpenJDK distribution, check out this OSCON video.

What’s new?
Java provides you with the choice of how and where to run your applications, and Amazon EC2 provides you with the broadest and deepest portfolio of compute instances available in the cloud. AWS wants its customers to be able to run their workloads in the most efficient way for their specific use case and business requirements, and that includes providing a consistent platform experience across the Amazon EC2 instance portfolio. We’re fortunate to have James Gosling, the designer of Java, as a member of the Amazon team, and he recently took to Twitter to announce the general availability (GA) of Amazon Corretto for the Arm architecture:

For those of you that like playing with Linux on ARM, the Corretto build for ARM64 is now GA. Fully production ready. Both JDK8 and JDK11


Open Source – “It takes a village”
It’s important to recognize the significance of open source software (OSS) and the community of people involved in developing, building, and testing a production-ready piece of software as complex as Java. So, because I couldn’t have said it better myself, here’s a tweet from my colleague Matt Wilson, who took a moment to recognize all the hard work of the Red Hat and Java community developers:

Many thanks to all the hard work from @redhatopen developers, and all the #OpenSource Java community that played a part in making 64 bit Arm support in OpenJDK distributions a reality!

Ready to cost optimize your scale-out Java applications?
With the 8.222.10.4 and 11.0.4.11.1 releases of Amazon Corretto that became generally available on September 17, 2019, new and existing Amazon Corretto users can now deploy their production Java applications on Arm-based EC2 A1 instances. If you have scale-out applications and are interested in optimizing cost, take the A1 instances for a test drive to see if they’re a fit for your specific use case. You can learn more at the Amazon Corretto website, where downloads are available for both Amazon Corretto 8 and Amazon Corretto 11; if you’re using containers, there’s also the Docker official image. If you have any questions about your own workloads running on Amazon EC2 A1 instances, contact us at [email protected].
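
If you want to try Corretto on an A1 instance quickly, the Docker official image is multi-architecture, so the same command works on x86 and Arm hosts. As a minimal check (a sketch, assuming Docker is installed on the instance):

# Pull and run the Corretto 11 image; on an A1 instance, Docker
# resolves this to the 64-bit Arm variant of the image automatically.
docker run --rm amazoncorretto:11 java -version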

from AWS Compute Blog

Building a pocket platform-as-a-service with Amazon Lightsail

This post was written by Robert Zhu, a principal technical evangelist at AWS and a member of the GraphQL Working Group. 

When you start a new web-based project, you figure out what kind of infrastructure you need. For my projects, I prioritize simplicity, flexibility, value, and on-demand capacity, and I quickly find myself needing the following features:

  • DNS configuration
  • SSL support
  • Subdomain to a service
  • SSL reverse proxy to localhost (similar to ngrok and serveo)
  • Automatic deployment after a commit to the source repo (nice to have)

new projects have different requirements compared to mature projects

Amazon Lightsail is perfect for building a simple “pocket platform” that provides all these features. It’s cheap and easy for beginners and provides a friendly interface for managing virtual machines and DNS. This post shows step-by-step how to assemble a pocket platform on Amazon Lightsail.

Walkthrough

The following steps describe the process. If you prefer to learn by watching videos instead, view the steps by watching the following: Part 1, Part 2, Part 3.

Prerequisites

You should be familiar with: Linux, SSH, SSL, Docker, Nginx, HTTP, and DNS.

Steps

Use the following steps to assemble a pocket platform.

Creating a domain name and static IP

First, you need a domain name for your project. You can register your domain with any domain name registration service, such as Amazon Route 53.

  1. After your domain is registered, open the Lightsail console, choose the Networking tab, and choose Create static IP.

Lightsail console networking tab

  2. On the Create static IP page, give the static IP a name you can remember, and don’t worry about attaching it to an instance just yet. Choose Create DNS zone.

  3. On the Create a DNS zone page, enter your domain name and then choose Create DNS zone. For this post, I use the domain “raccoon.news.”

DNS zone in Lightsail with two A records

  4. Choose Add Record and create two A records, “@.raccoon.news” and “raccoon.news”, both resolving to the static IP address you created earlier. Then, copy the values for the Lightsail name servers at the bottom of the page. Go back to your domain name provider, and edit the name servers to point to the Lightsail name servers. Since I registered my domain with Route 53, here’s what it looks like:

Changing name servers in Route53

Note: If you registered your domain with Route 53, make sure you change the name server values under “Domain registration,” not “Hosted zones,” and delete the hosted zone that Route 53 automatically creates for your domain.

Setting up your Lightsail instance

While you wait for your DNS changes to propagate, set up your Lightsail instance.

  1. In the Lightsail console, create a new instance and select Ubuntu 18.04.

Choose OS Only and Ubuntu 18.04 LTS

For this post, you can use the cheapest instance. However, when you run anything in production, make sure you choose an instance with enough capacity for your workload.

  2. After the instance launches, select it, choose the Networking tab, and open two additional TCP ports: 443 and 2222. Then, attach the static IP allocated earlier.
  3. To connect to the Lightsail instance using SSH, download the SSH key, and save it to a friendly path, for example: ~/ls_ssh_key.pem.

Click the download link to download the SSH key

  • Restrict permissions for your SSH key:

chmod 400 ~/ls_ssh_key.pem

  • Connect to the instance using SSH:

ssh -i ~/ls_ssh_key.pem ubuntu@STATIC_IP

  4. After you connect to the instance, install Docker to help manage deployment and configuration:

sudo apt-get update && sudo apt-get install docker.io
sudo systemctl start docker
sudo systemctl enable docker
sudo docker run hello-world

  5. After Docker is installed, set up a gateway using the nginx-proxy container. This container routes traffic to other containers based on the “VIRTUAL_HOST” environment variable you provide. Conveniently, nginx-proxy comes with an SSL companion, nginx-proxy-letsencrypt, which obtains certificates using Let’s Encrypt.

# start the reverse proxy container
sudo docker run --detach \
    --name nginx-proxy \
    --publish 80:80 \
    --publish 443:443 \
    --volume /etc/nginx/certs \
    --volume /etc/nginx/vhost.d \
    --volume /usr/share/nginx/html \
    --volume /var/run/docker.sock:/tmp/docker.sock:ro \
    jwilder/nginx-proxy

# start the letsencrypt companion
sudo docker run --detach \
    --name nginx-proxy-letsencrypt \
    --volumes-from nginx-proxy \
    --volume /var/run/docker.sock:/var/run/docker.sock:ro \
    --env "DEFAULT_EMAIL=YOUREMAILHERE" \
    jrcs/letsencrypt-nginx-proxy-companion

# start a demo web server under a subdomain
sudo docker run --detach \
    --name nginx \
    --env "VIRTUAL_HOST=test.EXAMPLE.COM" \
    --env "LETSENCRYPT_HOST=test.EXAMPLE.COM" \
    nginx

Pay special attention to setting a valid email for the DEFAULT_EMAIL environment variable on the proxy companion; otherwise, you’ll need to specify the email whenever you start a new container. If everything went well, you should be able to navigate to https://test.EXAMPLE.COM and see the nginx default content with a valid SSL certificate that has been auto-generated by Let’s Encrypt.

A publicly accessible URL served from our Lightsail instance with SSL support
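
If you prefer the command line over the browser, here’s an optional sanity check (a sketch, assuming curl is installed on your local machine; replace the hostname with your own subdomain):

# Print TLS handshake details and look for the certificate issuer;
# a successful setup shows a Let's Encrypt issuer line.
curl -vI https://test.EXAMPLE.COM 2>&1 | grep -i issuer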

Troubleshooting:

  • In the Lightsail console, make sure that Port 443 is open.
  • Let’s Encrypt rate limiting (for reference if you encounter issues with SSL certificate issuance): https://letsencrypt.org/docs/rate-limits/.

Deploying a localhost proxy with SSL

Most developers prefer to code on a dev machine (laptop or desktop) because they can access the file system, use their favorite IDE, recompile, debug, and more. Unfortunately, developing on a dev machine can introduce bugs due to differences from the production environment. Also, certain services (for example, Alexa Skills or GitHub Webhooks) require SSL to work, which can be annoying to configure on your local machine.

For this post, you can use an SSL reverse proxy to make your local dev environment resemble production from the browser’s point of view. This technique also allows your test application to make API requests to production endpoints that enforce Cross-Origin Resource Sharing (CORS) restrictions. While it’s not a perfect solution, it takes you one step closer toward a frictionless dev/test feedback loop. You may have used services like ngrok and serveo for this purpose. By running your own reverse proxy, you don’t need to spread your domain and SSL settings across multiple services.

To run a reverse proxy, create an SSH reverse tunnel. After the reverse tunnel SSH session is initiated, all network requests to the specified port on the host are proxied to your dev machine. However, since your Lightsail instance is already using port 22 for VPS management, you need a different SSH port (use 2222 from earlier). To keep everything organized, run the SSH server for port 2222 inside a special proxy container. The following diagram shows this solution.

Diagram of how an SSL reverse proxy works with SSH tunneling

Using Dockerize an SSH service as a starting point, I created a repository with a working Dockerfile and nginx config for reference. Here are the summary steps:

git clone https://github.com/robzhu/nginx-local-tunnel
cd nginx-local-tunnel

docker build -t {DOCKERUSER}/dev-proxy . --build-arg ROOTPW={PASSWORD}

# start the proxy container
# Note, 2222 is the port we opened on the instance earlier.
docker run --detach -p 2222:22 \
    --name dev-proxy \
    --env "VIRTUAL_HOST=dev.EXAMPLE.com" \
    --env "LETSENCRYPT_HOST=dev.EXAMPLE.com" \
    {DOCKERUSER}/dev-proxy

# Ports explained:
# 3000 refers to the port that your app is running on localhost.
# 2222 is the forwarded port on the host that we use to directly SSH into the container.
# 80 is the default HTTP port, forwarded from the host
ssh -R :80:localhost:3000 -p 2222 root@dev.EXAMPLE.com

# Start sample app on localhost
cd node-hello && npm i
nodemon main.js

# Point browser to https://dev.EXAMPLE.com

The reverse proxy subdomain works only as long as the reverse proxy SSH connection remains open. If there is no SSH connection, you should see an nginx gateway error:

Nginx will return 502 if you try to access the reverse proxy without running the SSH tunnel

While this solution is handy, be extremely careful, as it could expose your work-in-progress to the internet. Consider adding additional authorization logic and settings for allowing/denying specific IPs.
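
One option is the per-host configuration mechanism of nginx-proxy: files placed in /etc/nginx/vhost.d (one of the volumes created earlier) and named after a virtual host are included in that host’s server block. A minimal IP-allowlist sketch, assuming your reverse proxy host is dev.EXAMPLE.com and 203.0.113.0/24 is your network:

# /etc/nginx/vhost.d/dev.EXAMPLE.com -- included in this host's server block
allow 203.0.113.0/24;  # replace with your own IP range
deny all;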

Setting up automatic deployment

Finally, build an automation workflow that watches for commits on a source repository, builds an updated container image, and re-deploys the container on your host. There are many ways to do this, but here’s the combination I’ve selected for simplicity:

  1. First, create a GitHub repository to host your application source code. For demo purposes, you can clone my express hello-world example. On the Docker Hub page, create a new repository, click the GitHub icon, and select your repository from the dropdown list.

Create GitHub repo to host your application source code

  2. Now Docker Hub watches for commits to the repo and builds a new image with the “latest” tag in response. After the image is available, start the container as follows:

docker run --detach \
    --name app \
    --env "VIRTUAL_HOST=app.raccoon.news" \
    --env "LETSENCRYPT_HOST=app.raccoon.news" \
    robzhu/express-hello

  3. Finally, use Watchtower to poll Docker Hub and update the “app” container whenever a new image is detected:

docker run -d \
    --name watchtower \
    -v /var/run/docker.sock:/var/run/docker.sock \
    containrrr/watchtower \
    --interval 10 \
    app


Summary

Your Pocket PaaS is now complete! As long as you deploy new containers and add the VIRTUAL_HOST and LETSENCRYPT_HOST environment variables, you get automatic subdomain routing and SSL termination. With SSH reverse tunneling, you can develop on your local dev machine using your favorite IDE and test/share your app at https://dev.EXAMPLE.COM.

Because this is a public URL with SSL, you can test Alexa Skills, GitHub Webhooks, CORS settings, PWAs, and anything else that requires SSL. Once you’re happy with your changes, a git commit triggers an automated rebuild of your Docker image, which is automatically redeployed by Watchtower.

I hope this information was useful. Thoughts? Leave a comment or direct-message me on Twitter: @rbzhu.

from AWS Compute Blog

Update: Issue affecting HashiCorp Terraform resource deletions after the VPC Improvements to AWS Lambda

On September 3, 2019, we announced an exciting update that improves the performance, scale, and efficiency of AWS Lambda functions when working with Amazon VPC networks. You can learn more about the improvements in the original blog post. These improvements represent a significant change in how elastic network interfaces (ENIs) are configured to connect to your VPCs. With this new model, we identified an issue where VPC resources, such as subnets, security groups, and VPCs, can fail to be destroyed via HashiCorp Terraform. More information about the issue can be found here. This post helps you identify whether the issue affects you and, if so, the steps you can take to resolve it.

How do I know if I’m affected by this issue?

This issue only affects you if you use HashiCorp Terraform to destroy environments. Versions of the Terraform AWS Provider that are v2.30.0 or older are impacted by this issue. With these versions, you may encounter errors when destroying environments that contain AWS Lambda functions, VPC subnets, security groups, and Amazon VPCs. Typically, terraform destroy fails with errors similar to the following:

Error deleting subnet: timeout while waiting for state to become 'destroyed' (last state: 'pending', timeout: 20m0s)

Error deleting security group: DependencyViolation: resource sg-<id> has a dependent object
        	status code: 400, request id: <guid>

Depending on which AWS Regions the VPC improvements have been rolled out to, you may encounter these errors in some Regions and not others.

How do I resolve this issue if I am affected?

You have two options to resolve this issue. The recommended option is to upgrade your Terraform AWS Provider to v2.31.0 or later. To learn more about upgrading the Provider, visit the Terraform AWS Provider Version 2 Upgrade Guide. You can find information and source code for the latest releases of the AWS Provider on this page. The latest version of the Terraform AWS Provider contains a fix for this issue as well as changes that improve the reliability of the environment destruction process. We highly recommend that you upgrade the Provider version as the preferred option to resolve this issue.
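
For example, a version constraint in your provider block ensures that Terraform selects a release containing the fix (a sketch; adjust the constraint and region to your environment):

provider "aws" {
  # Require a provider release that includes the fix for this issue
  version = ">= 2.31.0"
  region  = "eu-central-1"
}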

If you are unable to upgrade the Provider version, you can mitigate the issue by making changes to your Terraform configuration. You need to make the following sets of changes to your configuration:

  1. Add an explicit dependency, using a depends_on argument, to the aws_security_group and aws_subnet resources that you use with your Lambda functions. The dependency has to be added on the aws_security_group or aws_subnet resource and target the aws_iam_role_policy_attachment resource that attaches the VPC access policy to the IAM role configured on the Lambda function. See the example below for more details.
  2. Override the delete timeout for all aws_security_group and aws_subnet resources. The timeout should be set to 40 minutes.

The following configuration file shows an example where these changes have been made (scroll to see the full code):

provider "aws" {
    region = "eu-central-1"
}
 
resource "aws_iam_role" "lambda_exec_role" {
  name = "lambda_exec_role"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
 
data "aws_iam_policy" "LambdaVPCAccess" {
  arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
}
 
resource "aws_iam_role_policy_attachment" "sto-lambda-vpc-role-policy-attach" {
  role       = "${aws_iam_role.lambda_exec_role.name}"
  policy_arn = "${data.aws_iam_policy.LambdaVPCAccess.arn}"
}
 
resource "aws_security_group" "allow_tls" {
  name        = "allow_tls"
  description = "Allow TLS inbound traffic"
  vpc_id      = "vpc-<id>"
 
  ingress {
    # TLS (change to whatever ports you need)
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    # Please restrict your ingress to only necessary IPs and ports.
    # Opening to 0.0.0.0/0 can lead to security vulnerabilities.
    cidr_blocks = ["0.0.0.0/0"]
  }
 
  egress {
    from_port       = 0
    to_port         = 0
    protocol        = "tcp"
    cidr_blocks     = ["0.0.0.0/0"]
  }
  
  timeouts {
    delete = "40m"
  }
  depends_on = ["aws_iam_role_policy_attachment.sto-lambda-vpc-role-policy-attach"]  
}
 
resource "aws_subnet" "main" {
  vpc_id     = "vpc-<id>"
  cidr_block = "172.31.68.0/24"

  timeouts {
    delete = "40m"
  }
  depends_on = ["aws_iam_role_policy_attachment.sto-lambda-vpc-role-policy-attach"]
}
 
resource "aws_lambda_function" "demo_lambda" {
    function_name = "demo_lambda"
    handler = "index.handler"
    runtime = "nodejs10.x"
    filename = "function.zip"
    source_code_hash = "${filebase64sha256("function.zip")}"
    role = "${aws_iam_role.lambda_exec_role.arn}"
    vpc_config {
     subnet_ids         = ["${aws_subnet.main.id}"]
     security_group_ids = ["${aws_security_group.allow_tls.id}"]
  }
}

The key block to note here is the following, which can be seen in both the “allow_tls” security group and “main” subnet resources:

timeouts {
  delete = "40m"
}
depends_on = ["aws_iam_role_policy_attachment.sto-lambda-vpc-role-policy-attach"]

These changes should be made to your Terraform configuration files before destroying your environments for the first time.

Can I delete resources remaining after a failed destroy operation?

Destroying environments without upgrading the provider or making the configuration changes outlined above may result in failures. As a result, you may have ENIs in your account that remain due to a failed destroy operation. These ENIs can be manually deleted a few minutes after the Lambda functions that use them have been deleted (typically within 40 minutes). Once the ENIs have been deleted, you can re-run terraform destroy.
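
If you have many ENIs to clean up, scripting the search can help. Here’s a minimal sketch using boto3; the description filter is an assumption based on the description text Lambda typically assigns to its ENIs, so verify it against your account before deleting anything:

import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Lambda-managed ENIs typically carry a description beginning with
# "AWS Lambda VPC ENI" -- confirm this matches your leftover ENIs.
paginator = ec2.get_paginator("describe_network_interfaces")
pages = paginator.paginate(
    Filters=[{"Name": "description", "Values": ["AWS Lambda VPC ENI*"]}]
)
for page in pages:
    for eni in page["NetworkInterfaces"]:
        print(eni["NetworkInterfaceId"], eni["Status"], eni["Description"])
        # Only detached ENIs (status "available") can be deleted.
        if eni["Status"] == "available":
            ec2.delete_network_interface(
                NetworkInterfaceId=eni["NetworkInterfaceId"]
            )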

from AWS Compute Blog

Improving the Getting Started experience with AWS Lambda

A common question from developers is, “How do I get started with creating serverless applications?” Frequently, I point developers to the AWS Lambda console where they can create a new Lambda function and immediately see it working.

While you can learn the basics of a Lambda function this way, it does not encompass the full serverless experience. It does not let you take advantage of best practices like infrastructure as code (IaC) or continuous integration and continuous delivery (CI/CD). A complete serverless application could include a combination of services like Amazon API Gateway, Amazon S3, and Amazon DynamoDB.

To help you start right with serverless, AWS has added a Create application experience to the Lambda console. This enables you to create serverless applications from ready-to-use sample applications, which follow these best practices:

  • Use infrastructure as code (IaC) for defining application resources
  • Provide a continuous integration and continuous deployment (CI/CD) pipeline for deployment
  • Exemplify best practices in serverless application structure and methods

IaC

Using IaC allows you to automate deployment and management of your resources. When you define and deploy your IaC architecture, you can standardize infrastructure components across your organization. You can rebuild your applications quickly and consistently without having to perform manual actions. You can also enforce best practices such as code reviews.

When you’re building serverless applications on AWS, you can use AWS CloudFormation directly, or choose the AWS Serverless Application Model, also known as AWS SAM. AWS SAM is an open source framework for building serverless applications that makes it easier to build applications quickly. AWS SAM provides a shorthand syntax to express APIs, functions, databases, and event source mappings. Because AWS SAM is built on CloudFormation, you can specify any other AWS resources using CloudFormation syntax in the same template.

Through this new experience, AWS provides an AWS SAM template that describes the entire application. You have instant access to modify the resources and security as needed.
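
To make the shorthand concrete, here is a minimal sketch of a SAM template (illustrative only; the resource names and paths are hypothetical, and the template the console generates for you is more complete):

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: Minimal illustration of the SAM shorthand syntax

Resources:
  # A function wired to an API Gateway endpoint in a few lines
  GetItemsFunction:
    Type: AWS::Serverless::Function
    Properties:
      Runtime: nodejs10.x
      Handler: index.handler
      CodeUri: src/
      Events:
        Api:
          Type: Api
          Properties:
            Path: /items
            Method: get

  # A DynamoDB table declared with the SAM SimpleTable shorthand
  ItemsTable:
    Type: AWS::Serverless::SimpleTable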

CI/CD

When editing a Lambda function in the console, it’s live the moment the function is saved. This works when developing against test environments, but it risks introducing untested, faulty code into production environments. That creates a stressful atmosphere for developers, with the unneeded overhead of manually testing code on each change.

Developers say that they are looking for an automated process for consistently testing and deploying reliable code. What they need is a CI/CD pipeline.

CI/CD pipelines are more than just a convenience; they can be critical in helping development teams be successful. CI/CD pipelines provide code integration, testing, multiple environment deployments, notifications, rollbacks, and more. The functionality depends on how you choose to configure it.

When you create a new application through the Lambda console, you create a CI/CD pipeline that provides a framework for automated testing and deployment.

Best practices

Like any other development pattern, there are best practices for serverless applications. These include testing strategies, local development, IaC, and CI/CD. When you create a Lambda function using the console, most of this is abstracted away. A common request from developers learning about serverless is for opinionated examples of best practices.

When you choose Create application, the application uses many best practices, including:

  • Managing IaC architectures
  • Managing deployment with a CI/CD pipeline
  • Runtime-specific test examples
  • Runtime-specific dependency management
  • A Lambda execution role with permissions boundaries
  • Application security with managed policies

Create an application

Now, let’s walk through creating your first application.

  1. Open the Lambda console, and choose Applications, Create application.
  2. Choose Serverless API backend. The next page shows the architecture, services used, and development workflow of the chosen application.
  3. Choose Create and then configure your application settings.
    • For Application name and Application description, enter values.
    • For Runtime, the preview supports Node.js 10.x. Stay tuned for more runtimes.
    • For Source Control Service, I chose CodeCommit for this example, but you can choose either. If you choose GitHub, you are asked to connect to your GitHub account for authorization.
    • For Repository Name, feel free to use whatever you want.
    • Under Permissions, check Create roles and permissions boundary.
  4. Choose Create.

Exploring the application

That’s it! You have just created a new serverless application from the Lambda console. It takes a few moments for all the resources to be created. Take a moment to review what you have done so far.

Across the top of the application, you can see four tabs, as shown in the following screenshot:

  • Overview—Shows the current page, including a Getting started section, and application and toolchain resources of the application
  • Code—Shows the code repository and instructions on how to connect
  • Deployments—Links to the deployment pipeline and a deployment history
  • Monitoring—Reports on the application health and performance

getting started dialog

The Resources section lists all the resources specific to the application. This application includes three Lambda functions, a DynamoDB table, and the API. The following screenshot shows the resources for this sample application.

resources view

Finally, the Infrastructure section lists all the resources for the CI/CD pipeline, including the AWS Identity and Access Management (IAM) roles, the permissions boundary policy, the S3 bucket, and more. The following screenshot shows the resources for this sample application.

application view

About Permissions Boundaries

This new Create application experience uses an IAM permissions boundary to help further secure the function that gets created and to prevent an overly permissive function policy from being created later on. The boundary is a separate policy that acts as a maximum bound on what an IAM policy for your function can be granted permissions for. This model allows developers to build out the security model of their application while still meeting requirements that are often put in place to prevent overly permissive policies, and it is considered a best practice. By default, the permissions boundary that is created limits the application’s access to just the resources that are included in the example template. To expand the permissions of the application, you first need to extend what is defined in the permissions boundary.
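
To give a feel for the mechanism, here is an illustrative sketch of a permissions boundary policy (hypothetical; the boundary the console actually generates scopes permissions to the sample application’s specific resources):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ExampleBoundary",
      "Effect": "Allow",
      "Action": [
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}

Any permission granted to the function’s execution role outside this boundary is ineffective, which is what prevents an overly permissive policy from taking effect later.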

A quick test

Now that you have an application up and running, try a quick test to see if it works.

  1. In the Lambda console, in the left navigation pane, choose Applications.
  2. For Applications, choose Start Right application.
  3. On the Endpoint details card, copy your endpoint.
  4. From a terminal, run the following command:
    curl -d '{"id":"id1", "name":"name1"}' -H "Content-Type: application/json" -X POST <YOUR-ENDPOINT>

You can find tips like this, and other getting started hints in the README.md file of your new serverless application.

Outside of the console

With the introduction of the Create application function, there is now a closer tie between the Lambda console and local development. Before this feature, you would get started in the Lambda console or with a framework like AWS SAM. Now, you can start the project in the console and then move to local development.

You have already walked through the steps of creating an application; now pull it local and make some changes.

  1. In the Lambda console, in the left navigation pane, choose Applications.
  2. Select your application from the list and choose the Code tab.
  3. If you used CodeCommit, choose Connect instructions to configure your local git client. To copy the URL, choose the SSH squares icon.
  4. If you used GitHub, click on the SSH squares icon.
  5. In a terminal window, run the following command:
    git clone <your repo>
  6. Update one of the Lambda function files and save it.
  7. In the terminal window, commit and push the changes:
    git commit -am "simple change"
    git push
  8. In the Lambda console, under Deployments, choose View in CodePipeline.

codepipeline pipeline

The build has started and the application is being deployed.

Caveats

submit feedback

This feature is currently available in US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo). This is a feature beta and as such, it is not a full representation of the final experience. We know this is limited in scope and request your feedback. Let us know your thoughts about any future enhancements you would like to see. The best way to give feedback is to use the feedback button in the console.

Conclusion

With the addition of the Create application feature, you can now start right with full serverless applications from within the Lambda console. This delivers the simplicity and ease of the console while still offering the power of an application built on best practices.

Until next time: Happy coding!

from AWS Compute Blog

Visualizing Sensor Data in Amazon QuickSight

This post is courtesy of Moheeb Zara, Developer Advocate, AWS Serverless

The Internet of Things (IoT) is a term used wherever physical devices are networked in some meaningful, connected way. Often, this takes the form of sensor data collection and analysis. As the number of devices and the size of the data scale, it can become costly and difficult to keep up with demand.

Using the AWS Serverless Application Model (AWS SAM), you can reduce the cost and time to market of an IoT solution. This guide demonstrates how to collect and visualize data from a low-cost, Wi-Fi connected IoT device using a variety of AWS services. Much of this can be accomplished within the AWS Free Usage Tier, which is sufficient for the following instructions.

Services used

The following services are used in this example:

  • AWS IoT Core
  • AWS Lambda
  • Amazon Kinesis Data Firehose
  • Amazon S3
  • Amazon QuickSight

What’s covered in this post?

This post covers:

  • Connecting an Arduino MKR 1010 Wi-Fi device to AWS IoT Core.
  • Forwarding messages from an AWS IoT Core topic stream to a Lambda function.
  • Using a Kinesis Data Firehose delivery stream to store data in S3.
  • Analyzing and visualizing data stored in S3 using Amazon QuickSight.

Connect the device to AWS IoT Core using MQTT

The Arduino MKR 1010 is a low-cost, Wi-Fi enabled, IoT device, shown in the following image.

An Arduino MKR 1010 Wi-Fi microcontroller

Its analog and digital input and output pins can be used to read sensors or to write to actuators. Arduino provides a detailed guide on how to securely connect this device to AWS IoT Core. The following steps build upon it to push arbitrary sensor data to a topic stream and ultimately visualize that data using Amazon QuickSight.

  1. Start by following this comprehensive guide to using an Arduino MKR 1010 with AWS IoT Core. Upon completion, your device is connected to AWS IoT Core using MQTT (Message Queuing Telemetry Transport), a protocol for publishing and subscribing to messages using topics.
  2. In the Arduino IDE, choose File, Sketch, Include Library, and Manage Libraries.
  3. In the window that opens, search for ArduinoJson and select the library by Benoit Blanchon. Choose Install.

4. Add #include <ArduinoJson.h> to the top of your sketch from the Arduino guide.

5. Modify the publishMessage() function with this code. It publishes a JSON message with two keys: time (ms) and the current value read from the first analog pin.

void publishMessage() {  
  Serial.println("Publishing message");

  // send message, the Print interface can be used to set the message contents
  mqttClient.beginMessage("arduino/outgoing");
  
  // create json message to send
  StaticJsonDocument<200> doc;
  doc["time"] = millis();
  doc["sensor_a0"] = analogRead(0);
  serializeJson(doc, mqttClient); // print to client
  
  mqttClient.endMessage();
}

6. Save and upload the sketch to your board.

Create a Kinesis Firehose delivery stream

Amazon Kinesis Data Firehose is a service that reliably loads streaming data into data stores, data lakes, and analytics tools. Amazon QuickSight requires a data store to create visualizations of the sensor data. This simple Kinesis Data Firehose delivery stream continuously uploads data to an S3 storage bucket. The next sections cover how to add records to this stream using a Lambda function.

  1. In the Kinesis Data Firehose console, create a new delivery stream, called SensorDataStream.
  2. Leave the source set to the default, Direct PUT or other sources, and choose Next.
  3. On the next screen, leave all the default values and choose Next.
  4. Select Amazon S3 as the destination and create a new bucket with a unique name. This is where records are continuously uploaded so that they can be used by Amazon QuickSight.
  5. On the next screen, choose Create New IAM Role, Allow. This gives the Firehose delivery stream permission to upload to S3.
  6. Review and then choose Create Delivery Stream.

It can take some time to fully create the stream. In the meantime, continue on to the next section.

Invoking Lambda using AWS IoT Core rules

Using AWS IoT Core rules, you can forward messages from devices to a Lambda function, which can perform actions such as uploading to an Amazon DynamoDB table or an S3 bucket, or running data against various Amazon Machine Learning services. In this case, the function transforms and adds a message to the Kinesis Data Firehose delivery stream, which then adds that data to S3.

AWS IoT Core rules use the MQTT topic stream to trigger interactions with other AWS services. An AWS IoT Core rule is created by using an SQL statement, a topic filter, and a rule action. The Arduino example publishes messages every five seconds on the topic arduino/outgoing. The following instructions show how to consume those messages with a Lambda function.

Create a Lambda function

Before creating an AWS IoT Core rule, you need a Lambda function to consume forwarded messages.

  1. In the AWS Lambda console, choose Create function.
  2. Name the function ArduinoConsumeMessage.
  3. Choose Author from scratch. For Runtime, choose Node.js 10.x. For Execution role, choose Create a new role with basic Lambda permissions. Choose Create function.
  4. On the Execution role card, choose View the ArduinoConsumeMessage-role-xxxx on the IAM console.
  5. Choose Attach Policies. Then, search for and select AmazonKinesisFirehoseFullAccess.
  6. Choose Attach Policy. This applies the necessary permissions to add records to the Firehose delivery stream.
  7. In the Lambda console, in the Designer card, select the function name.
  8. Paste the following in the code editor, replacing SensorDataStream with the name of your own Firehose delivery stream. Choose Save.
const AWS = require('aws-sdk')

const firehose = new AWS.Firehose()
const StreamName = "SensorDataStream"

exports.handler = async (event) => {
    
    console.log('Received IoT event:', JSON.stringify(event, null, 2))
    
    let payload = {
        time: new Date(event.time),
        sensor_value: event.sensor_a0
    }
    
    let params = {
            DeliveryStreamName: StreamName,
            Record: { 
                Data: JSON.stringify(payload)
            }
        }
        
    return await firehose.putRecord(params).promise()

}

Create an AWS IoT Core rule

To create an AWS IoT Core rule, follow these steps.

  1. In the AWS IoT console, choose Act.
  2. Choose Create.
  3. For Rule query statement, copy and paste SELECT * FROM 'arduino/outgoing'. This subscribes to the outgoing message topic used in the Arduino example.
  4. Choose Add action, Send a message to a Lambda function, Configure action.
  5. Select the function created in the last set of instructions.
  6. Choose Create rule.

At this stage, any message published to the arduino/outgoing topic forwards to the ArduinoConsumeMessage Lambda function, which transforms and puts the payload on the Kinesis Data Firehose stream and also logs the message to Amazon CloudWatch. If you’ve connected an Arduino device to AWS IoT Core, it publishes to that topic every five seconds.

The following steps show how to test functionality using the AWS IoT console.

  1. In the AWS IoT console, choose Test.
  2. For Publish, enter the topic arduino/outgoing.
  3. Enter the following test payload:

    {
      "time": 1567023375013,
      "sensor_a0": 456
    }

  4. Choose Publish to topic.
  5. Navigate back to your Lambda function.
  6. Choose Monitoring, View logs in CloudWatch.
  7. Select a log item to view the message contents, as shown in the following screenshot.
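
If you’d rather script the test than use the console, here’s a minimal sketch with boto3 (assuming your AWS credentials and Region are configured locally):

import json
import boto3

# Publish the same test payload to the topic the rule subscribes to.
iot = boto3.client("iot-data")
iot.publish(
    topic="arduino/outgoing",
    qos=0,
    payload=json.dumps({"time": 1567023375013, "sensor_a0": 456}),
)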

Visualizing data with Amazon QuickSight

To visualize data with Amazon QuickSight, follow these steps.

  1. In the Amazon QuickSight console, sign up if you haven’t already.
  2. Choose Manage Data, New Data Set. Select S3 as the data source.
  3. A manifest file is necessary for Amazon QuickSight to be able to fetch data from your S3 bucket. Copy the following into a file named manifest.json. Replace YOUR-BUCKET-NAME with the name of the bucket created for the Firehose delivery stream.
    {
       "fileLocations":[
          {
             "URIPrefixes":[
                "s3://YOUR-BUCKET-NAME/"
             ]
          }
       ],
       "globalUploadSettings":{
          "format":"JSON"
       }
    }
  4. Upload the manifest.json file.
  5. Choose Connect, then Visualize. You may have to give Amazon QuickSight explicit permissions to your S3 bucket.
  6. Finally, design the Amazon QuickSight visualizations in the drag-and-drop editor. Drag the two available fields into the center card to generate a Sum of Sensor_value by Time visual.

Conclusion

This post demonstrated visualizing data from a securely connected remote IoT device. This was achieved by connecting an Arduino to AWS IoT Core using MQTT, forwarding messages from the topic stream to Lambda using IoT Core rules, putting records on an Amazon Kinesis Data Firehose delivery stream, and using Amazon QuickSight to visualize the data stored within an S3 bucket.

With these building blocks, it is possible to implement highly scalable and customizable IoT data collection, analysis, and visualization. With the use of other AWS services, you can build a full end-to-end platform for an IoT product that can reliably handle volume. To further explore how hardware and AWS Serverless can work together, visit the Amazon Web Services page on Hackster.

from AWS Compute Blog

Automating notifications when AMI permissions change

This post is courtesy of Ernes Taljic, Solutions Architect and Sudhanshu Malhotra, Solutions Architect

This post demonstrates how to automate alert notifications when users modify the permissions of an Amazon Machine Image (AMI). You can use it as a blueprint for a wide variety of alert notifications by making simple modifications to the events that you want to receive alerts about. For example, updating the specific operation in Amazon CloudWatch allows you to receive alerts on any activity that AWS CloudTrail captures.

This post walks you through how to configure an event rule in CloudWatch that triggers an AWS Lambda function. The Lambda function uses Amazon SNS to send an email when an AMI is made public or private, or shared or unshared with one or more AWS accounts.

Solution overview

The following diagram describes the solution at a high level:

  1. A user changes an attribute of an AMI.
  2. CloudTrail logs the change as a ModifyImageAttribute API event.
  3. A CloudWatch Events rule captures this event.
  4. The CloudWatch Events rule triggers a Lambda function.
  5. The Lambda function publishes a message to the defined SNS topic.
  6. SNS sends an email alert to the topic’s subscribers.

Deployment walkthrough

To implement this solution, you must create:

  • An SNS topic
  • An IAM role
  • A Lambda function
  • A CloudWatch Events rule

Step 1: Creating an SNS topic

To create an SNS topic, complete the following steps:

  1. Open the SNS console.
  2. Under Create topic, for Topic name, enter a name (this example uses MySNSTopic) and choose Create topic. You can now see the MySNSTopic page. The Details section displays the topic’s Name, ARN, Display name (optional), and the AWS account ID of the topic owner.
  3. In the Details section, copy the topic ARN to the clipboard, for example:
    arn:aws:sns:us-east-1:123456789012:MySNSTopic
  4. On the left navigation pane, choose Subscriptions, Create subscription.
  5. On the Create subscription page, do the following:
    1. Enter the topic ARN of the topic you created earlier:
      arn:aws:sns:us-east-1:123456789012:MySNSTopic
    2. For Protocol, select Email.
    3. For Endpoint, enter an email address that can receive notifications.
    4. Choose Create subscription.
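
If you prefer to script this step, here’s an equivalent sketch using boto3 (the topic name and email address are placeholders to replace with your own):

import boto3

sns = boto3.client("sns")

# Create the topic and subscribe an email endpoint to it; the
# subscription must be confirmed from the email that SNS sends.
topic = sns.create_topic(Name="MySNSTopic")
sns.subscribe(
    TopicArn=topic["TopicArn"],
    Protocol="email",
    Endpoint="you@example.com",
)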

Step 2: Creating an IAM role

To create an IAM role, complete the following steps. For more information, see Creating an IAM Role.

  1. In the IAM console, choose Policies, Create Policy.
  2. On the JSON tab, enter the following IAM policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:*:*:*"
            ],
            "Effect": "Allow",
            "Sid": "LogStreamAccess"
        },
        {
            "Action": [
                "sns:Publish"
            ],
            "Resource": [
                "arn:aws:sns:*:*:*"
            ],
            "Effect": "Allow",
            "Sid": "SNSPublishAllow"
        },
        {
            "Action": [
                "iam:ListAccountAliases"
            ],
            "Resource": "*",
            "Effect": "Allow",
            "Sid": "ListAccountAlias"
        }
    ]
}

3. Choose Review policy.

4. Enter a name (MyCloudWatchRole) for this policy and choose Create policy. Note the name of this policy for later steps.

5. In the left navigation pane, choose Roles, Create role.

6. On the Select role type page, choose Lambda and the Lambda use case.

7. Choose Next: Permissions.

8. Filter policies by the policy name that you just created, and select the check box.

9. Choose Next: Tags, and give it an appropriate tag.

10. Choose Next: Review. Give this IAM role an appropriate name, and note it for future use.

11. Choose Create role.

Step 3: Creating a Lambda function

To create a Lambda function, complete the following steps. For more information, see Create a Lambda Function with the Console.

  1. In the Lambda console, choose Author from scratch.
  2. For Function Name, enter the name of your function.
  3. For Runtime, choose Python 3.7.
  4. For Execution role, select Use an existing role, then select the IAM role created in the previous step.
  5. Choose Create Function, remove the default function, and copy the following code into the Function Code window:
# Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License").
# You may not use this file except in compliance with the License.
# A copy of the License is located at
#
#     http://aws.amazon.com/apache2.0/
#
# or in the "license" file accompanying this file.
# This file is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
# either express or implied. See the License for the specific language governing permissions
# and limitations under the License.
#
# Description: This Lambda function sends an SNS notification to a given AWS SNS topic when an API event of \"Modify Image Attribute\" is detected.
#              The SNS subject is- "API call-<insert call event> by < insert user name> detected in Account-<insert account alias>, see message for further details". 
#              The JSON message body of the SNS notification contains the full event details.
# 
#
# Author: Sudhanshu Malhotra


import json
import boto3
import logging
import os
import botocore.session
from botocore.exceptions import ClientError
session = botocore.session.get_session()

logging.basicConfig(level=logging.DEBUG)
logger=logging.getLogger(__name__)

import ipaddress
import traceback

def lambda_handler(event, context):
	logger.setLevel(logging.DEBUG)
	eventname = event['detail']['eventName']
	snsARN = os.environ['snsARN']          #Getting the SNS Topic ARN passed in by the environment variables.
	user = event['detail']['userIdentity']['type']
	srcIP = event['detail']['sourceIPAddress']
	imageId = event['detail']['requestParameters']['imageId']
	launchPermission = event['detail']['requestParameters']['launchPermission']
	imageAction = list(launchPermission.keys())[0]
	accnt_num =[]
	
	if imageAction == "add":
		if "userId" in launchPermission['add']['items'][0].keys():
			accnt_num = [li['userId'] for li in launchPermission['add']['items']]      # Get the AWS account numbers that the image was shared with
			imageAction = "Image shared with AWS account: " + str(accnt_num)[1:-1]
		else:
			imageAction = "Image made Public"
	
	elif imageAction == "remove":
		if "userId" in launchPermission['remove']['items'][0].keys():
			accnt_num = [li['userId'] for li in launchPermission['remove']['items']]
			imageAction = "Image Unshared with AWS account: " + str(accnt_num)[1:-1]    # Get the AWS account numbers that the image was unshared with
		else:
			imageAction = "Image made Private"
	
	
	logger.debug("Event is --- %s" %event)
	logger.debug("Event Name is--- %s" %eventname)
	logger.debug("SNSARN is-- %s" %snsARN)
	logger.debug("User Name is -- %s" %user)
	logger.debug("Source IP Address is -- %s" %srcIP)
	
	client = boto3.client('iam')
	snsclient = boto3.client('sns')
	response = client.list_account_aliases()
	logger.debug("List Account Alias response --- %s" %response)
	
	# Check if the source IP is a valid IP or AWS service DNS name.
	# If DNS name then we ignore the API activity as this is internal AWS operation
	# For more information check - https://aws.amazon.com/premiumsupport/knowledge-center/cloudtrail-root-action-logs/
	try:
	    validIP = ipaddress.ip_address(srcIP)
	    logger.debug("IP addr is-- %s" %validIP)
	except Exception as e:
	    logger.error("Catching the traceback error: %s" %traceback.format_exc())
	    logger.debug("Seems like the root API activity was caused by an internal operation. IP address is internal service DNS name")
	    return
	try:
		if not response['AccountAliases']:
			accntAliase = (boto3.client('sts').get_caller_identity()['Account'])
			logger.info("Account Aliase is not defined. Account ID is %s" %accntAliase)
		else:
			accntAliase = response['AccountAliases'][0]
			logger.info("Account Aliase is : %s" %accntAliase)
	
	except ClientError as e:
		logger.error("Client Error occured")
	
	try: 
		publish_message = ""
		publish_message += "\nImage Attribute change summary" + "\n\n"
		publish_message += "##########################################################\n"
		publish_message += "# Event Name- " +str(eventname) + "\n" 
		publish_message += "# Account- " +str(accntAliase) +  "\n"
		publish_message += "# AMI ID- " +str(imageId) +  "\n"
		publish_message += "# Image Action- " +str(imageAction) +  "\n"
		publish_message += "# Source IP- " +str(srcIP) +   "\n"
		publish_message += "##########################################################\n"
		publish_message += "\n\n\nThe full event is as below:- \n\n" +str(event) +"\n"
		logger.debug("MESSAGE- %s" %publish_message)
		
		#Sending the notification...
		snspublish = snsclient.publish(
						TargetArn= snsARN,
						Subject=(("Image Attribute change API call-\"%s\" detected in Account-\"%s\"" %(eventname,accntAliase))[:100]),
						Message=publish_message
						)
	except ClientError as e:
		logger.error("An error occured: %s" %e)

6. In the Environment variables section, enter the following key-value pair:

  • Key= snsARN
  • Value= the ARN of the MySNSTopic created earlier

7. Choose Save.

Step 4: Creating a CloudWatch Events rule

To create a CloudWatch Events rule, complete the following steps. This rule catches when a user performs a ModifyImageAttribute API event and triggers the Lambda function (set as a target).

1. In the CloudWatch console, choose Rules, Create rule.

  • On the Step 1: Create rule page, under Event Source, select Event Pattern.
  • Copy the following event into the preview pane:
{
  "source": [
    "aws.ec2"
  ],
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "detail": {
    "eventSource": [
      "ec2.amazonaws.com"
    ],
    "eventName": [
      "ModifyImageAttribute"
    ]
  }
}
  • For Targets, select Lambda function, and select the Lambda function created in Step 3.

  • Choose Configure details.

2. On the Step 2: Configure rule details page, enter a name and description for the rule.

3. For State, select Enabled.

4. Choose Create rule.

Solution validation

Confirm that the solution works by changing an AMI:

  1. Open the Amazon EC2 console. From the menu, select AMIs under the Images heading.
  2. Select one of the AMIs (to create an AMI, see How do I create an AMI that is based on my EBS-backed EC2 instance?), and choose Actions, Modify Image Permissions.
  3. Choose Private.
  4. For AWS Account Number, choose an account with which to share the AMI.
  5. Choose Add Permission, Save.
  6. Check your inbox to verify that you received an email from SNS. The email contains a summary message with the following information, followed by the full event:
  • Event Name
  • Account
  • AMI ID
  • Image Action
  • Source IP

For Image Action, the email lists one of the following events:

  • Image made Public
  • Image made Private
  • Image shared with AWS account
  • Image Unshared with AWS account

The message also includes the account ID of any shared or unshared accounts.

Creating other alert notifications

This post shows you how to automate notifications when AMI permissions change, but the solution is a blueprint for a wide variety of use cases that require alert notifications. You can use the CloudTrail event history to find the API associated with the event that you want to receive notifications about, and create a new CloudWatch Events rule for that event.

  1. Open the CloudTrail console and choose Event history.
  2. Explore the CloudTrail event history and locate the event name associated with the actions that you performed in your AWS account.
  3. Make a note of the event name and modify the eventName parameter in the event pattern from Step 4 to configure alerts for that particular event (see the example pattern below).

Automating Notifications - CloudTrail console
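
For example, to be notified when someone changes the permissions of an EBS snapshot instead, the only change needed is the eventName (shown here with ModifySnapshotAttribute; substitute whatever event you found in the CloudTrail event history):

{
  "source": [
    "aws.ec2"
  ],
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "detail": {
    "eventSource": [
      "ec2.amazonaws.com"
    ],
    "eventName": [
      "ModifySnapshotAttribute"
    ]
  }
}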

Conclusion

This post demonstrated how you can use CloudWatch to create automated notifications when AMI permissions change. Additionally, you can use the CloudTrail event history to find the API for other events, and use the preceding walkthrough to create other event alerts.


from AWS Compute Blog

Integrating an Inferencing Pipeline with NVIDIA DeepStream and the G4 Instance Family

Contributed by: Amr Ragab, Business Development Manager, Accelerated Computing, AWS and Kong Zhao, Solution Architect, NVIDIA Corporation

AWS continually evolves its GPU offerings, striving to showcase how new technical improvements created by AWS partners improve the platform’s performance.

One result of AWS’s collaboration with NVIDIA is the recent release of the G4 instance type, a technology update to the G2 and G3. The G4 features a Turing T4 GPU with 16 GB of GPU memory, offered under the Nitro hypervisor with one to four GPUs per node. A bare-metal option will be released in the coming months. It also includes up to 1.8 TB of local non-volatile memory express (NVMe) storage and up to 100 Gbps of network bandwidth.

The Turing T4 is the latest offering from NVIDIA, accelerating machine learning (ML) training and inferencing, video transcoding, and other compute-intensive workloads. With such a diverse array of optimized directives, you can now perform a wide range of accelerated compute workloads on a single instance family.

NVIDIA has also taken the lead in providing a robust and performant software layer in the form of SDKs and container solutions through the NVIDIA GPU Cloud (NGC) container registry. These accelerated components, combined with AWS elasticity and scale, provide a powerful combination for performant pipelines on AWS.

NVIDIA DeepStream SDK

This post focuses on one such NVIDIA SDK: DeepStream.

The DeepStream SDK is built to provide an end-to-end video processing and ML inferencing analytics solution. It uses the Video Codec API and TensorRT as key components.

DeepStream also supports an edge-cloud strategy to stream perception on the edge and other sensor metadata into AWS for further processing. An example includes wide-area consumption of multiple camera streams and metadata through the Amazon Kinesis platform.

Another classic workload that can take advantage of DeepStream is compiling the model artifacts resulting from distributed training in AWS with Amazon SageMaker Neo. Use this model on the edge or on an Amazon S3 video data lake.

If you are interested in exploring these solutions, contact your AWS account team.

Deployment

Set up programmatic access to AWS to instantiate a g4dn.2xlarge instance type with Ubuntu 18.04 in a subnet that routes SSH access. If you are interested in the full stack details, the following are required to set up the instance to execute DeepStream SDK workflows.

  • An Ubuntu 18.04 Instance with:
    • NVIDIA Turing T4 Driver (418.67 or latest)
    • CUDA 10.1
    • nvidia-docker2

Alternatively, you can launch the NVIDIA Deep Learning AMI available in AWS Marketplace, which includes the latest drivers and SDKs.

aws ec2 run-instances --region us-east-1 --image-id ami-026c8acd92718196b --instance-type g4dn.2xlarge --key-name <key-name> --subnet-id <subnet-id> --security-group-ids <security-group-ids> --block-device-mappings 'DeviceName=/dev/sda1,Ebs={VolumeSize=75}'

When the instance is up, connect with SSH and pull the latest DeepStream SDK Docker image from the NGC container registry.

docker pull nvcr.io/nvidia/deepstream:4.0-19.07

nvidia-docker run -it --rm -v /usr/lib/x86_64-linux-gnu/libnvidia-encode.so:/usr/lib/x86_64-linux-gnu/libnvidia-encode.so -v /tmp/.X11-unix:/tmp/.X11-unix -p 8554:8554 -e DISPLAY=$DISPLAY nvcr.io/nvidia/deepstream:4.0-19.07

If your instance is running a full X environment, you can pass the authentication and display to the container to view the results in real time. However, for the purposes of this post, just execute the workload on the shell.

Go to the /root/deepstream_sdk_v4.0_x86_64/samples/configs/deepstream-app/ folder.

The following configuration files are included in the package:

  • source30_1080p_resnet_dec_infer_tiled_display_int8.txt: This configuration file demonstrates 30 stream decodes with primary inferencing.
  • source4_1080p_resnet_dec_infer_tiled_display_int8.txt: This configuration file demonstrates four stream decodes with primary inferencing, object tracking, and three different secondary classifiers.
  • source4_1080p_resnet_dec_infer_tracker_sgie_tiled_display_int8_gpu1.txt: This configuration file demonstrates four stream decodes with primary inferencing, object tracking, and three different secondary classifiers on GPU 1.
  • config_infer_primary.txt: This configuration file configures an nvinfer element as the primary detector.
  • config_infer_secondary_carcolor.txt, config_infer_secondary_carmake.txt, config_infer_secondary_vehicletypes.txt: These configuration files configure an nvinfer element as the secondary classifier.
  • iou_config.txt: This configuration file configures a low-level Intersection over Union (IOU) tracker.
  • source1_usb_dec_infer_resnet_int8.txt: This configuration file demonstrates one USB camera as input.

The following sample models are provided with the SDK.

Model                               Model type   Number of classes   Resolution
Primary Detector                    Resnet10     4                   640 x 368
Secondary Car Color Classifier      Resnet18     12                  224 x 224
Secondary Car Make Classifier       Resnet18     6                   224 x 224
Secondary Vehicle Type Classifier   Resnet18     20                  224 x 224

Edit the configuration file source30_1080p_dec_infer-resnet_tiled_display_int8.txt to disable [sink0] and enable [sink1] for file output. Save the file, then run the DeepStream sample code.

[sink0]
enable=0
#Type - 1=FakeSink 2=EglSink 3=File
type=2
 
[sink1]
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265
codec=1
sync=0
#iframeinterval=10
bitrate=2000000
output-file=out.mp4
source-id=0
 
deepstream-app -c source30_1080p_dec_infer-resnet_tiled_display_int8.txt

You get performance data on the inferencing workflow.

 (deepstream-app:1059): GLib-GObject-WARNING **: 20:38:25.991: g_object_set_is_valid_property: object class 'nvv4l2h264enc' has no property named 'bufapi-version'
Creating LL OSD context new
 
Runtime commands:
        h: Print this help
        q: Quit
 
        p: Pause
        r: Resume
 
NOTE: To expand a source in the 2D tiled display and view object details, left-click on the source.
      To go back to the tiled display, right-click anywhere on the window.
 
** INFO: <bus_callback:163>: Pipeline ready
 
Creating LL OSD context new
** INFO: <bus_callback:149>: Pipeline running
 
 
**PERF: FPS 0 (Avg)     FPS 1 (Avg)     FPS 2 (Avg)     FPS 3 (Avg)     FPS 4 (Avg)     FPS 5 (Avg)     FPS 6 (Avg)     FPS 7 (Avg)     FPS 8 (Avg)     FPS 9 (Avg)     FPS 10 (Avg)   FPS 11 (Avg)     FPS 12 (Avg)    FPS 13 (Avg)    FPS 14 (Avg)    FPS 15 (Avg)    FPS 16 (Avg)    FPS 17 (Avg)    FPS 18 (Avg)    FPS 19 (Avg)    FPS 20 (Avg)    FPS 21 (Avg)    FPS 22 (Avg)    FPS 23 (Avg)    FPS 24 (Avg)    FPS 25 (Avg)    FPS 26 (Avg)    FPS 27 (Avg)    FPS 28 (Avg)    FPS 29 (Avg)
**PERF: 35.02 (35.02)   37.92 (37.92)   37.93 (37.93)   35.85 (35.85)   36.39 (36.39)   38.40 (38.40)   35.85 (35.85)   35.18 (35.18)   35.60 (35.60)   35.02 (35.02)   38.77 (38.77)  37.71 (37.71)    35.18 (35.18)   38.60 (38.60)   38.40 (38.40)   38.60 (38.60)   34.77 (34.77)   37.70 (37.70)   35.97 (35.97)   37.00 (37.00)   35.51 (35.51)   38.40 (38.40)   38.60 (38.60)   38.40 (38.40)   38.13 (38.13)   37.70 (37.70)   35.85 (35.85)   35.97 (35.97)   37.92 (37.92)   37.92 (37.92)
**PERF: 39.10 (37.76)   38.90 (38.60)   38.70 (38.47)   38.70 (37.78)   38.90 (38.10)   38.90 (38.75)   39.10 (38.05)   38.70 (37.55)   39.10 (37.96)   39.10 (37.76)   39.10 (39.00)  39.10 (38.68)    39.10 (37.83)   39.10 (38.95)   39.10 (38.89)   39.10 (38.95)   38.90 (37.55)   38.70 (38.39)   38.90 (37.96)   38.50 (38.03)   39.10 (37.98)   38.90 (38.75)   38.30 (38.39)   38.70 (38.61)   38.90 (38.67)   39.10 (38.66)   39.10 (38.05)   39.10 (38.10)   39.10 (38.74)   38.90 (38.60)
**PERF: 38.91 (38.22)   38.71 (38.65)   39.31 (38.82)   39.31 (38.40)   39.11 (38.51)   38.91 (38.82)   39.31 (38.56)   39.31 (38.26)   39.11 (38.42)   38.51 (38.06)   38.51 (38.80)  39.31 (38.94)    39.31 (38.42)   39.11 (39.02)   37.71 (38.41)   39.31 (39.10)   39.31 (38.26)   39.31 (38.77)   39.31 (38.51)   39.31 (38.55)   39.11 (38.44)   39.31 (38.98)   39.11 (38.69)   39.31 (38.90)   39.11 (38.85)   39.31 (38.93)   39.31 (38.56)   39.31 (38.59)   39.31 (38.97)   39.31 (38.89)
**PERF: 37.56 (38.03)   38.15 (38.50)   38.35 (38.68)   38.35 (38.38)   37.76 (38.29)   38.15 (38.62)   38.35 (38.50)   37.56 (38.06)   38.15 (38.35)   37.76 (37.97)   37.96 (38.55)  38.35 (38.77)    38.35 (38.40)   37.56 (38.59)   38.35 (38.39)   37.96 (38.77)   36.96 (37.88)   38.35 (38.65)   38.15 (38.41)   38.35 (38.49)   38.35 (38.41)   38.35 (38.80)   37.96 (38.47)   37.96 (38.62)   37.56 (38.47)   37.56 (38.53)   38.15 (38.44)   38.35 (38.52)   38.35 (38.79)   38.35 (38.73)
**PERF: 40.71 (38.63)   40.31 (38.91)   40.51 (39.09)   40.90 (38.95)   39.91 (38.65)   40.90 (39.14)   40.90 (39.04)   40.51 (38.60)   40.71 (38.87)   40.51 (38.54)   40.71 (39.04)  40.90 (39.25)    40.71 (38.92)   40.90 (39.11)   40.90 (38.96)   40.90 (39.25)   40.90 (38.56)   40.90 (39.15)   40.11 (38.79)   40.90 (39.03)   40.90 (38.97)   40.90 (39.27)   40.90 (39.02)   40.90 (39.14)   39.51 (38.71)   40.90 (39.06)   40.51 (38.90)   40.71 (39.01)   40.90 (39.27)   40.90 (39.22)
**PERF: 39.46 (38.78)   39.26 (38.97)   39.46 (39.16)   39.26 (39.00)   39.26 (38.76)   39.26 (39.16)   39.06 (39.04)   39.46 (38.76)   39.46 (38.98)   39.26 (38.67)   39.46 (39.12)  39.46 (39.29)    39.26 (38.98)   39.26 (39.14)   39.26 (39.01)   38.65 (39.14)   38.45 (38.54)   39.46 (39.21)   39.46 (38.91)   39.46 (39.11)   39.26 (39.03)   39.26 (39.27)   39.46 (39.10)   39.26 (39.16)   39.26 (38.81)   39.26 (39.10)   39.06 (38.93)   39.46 (39.09)   39.06 (39.23)   39.26 (39.23)
**PERF: 39.04 (38.82)   38.84 (38.95)   38.84 (39.11)   38.84 (38.98)   38.84 (38.77)   38.64 (39.08)   39.04 (39.04)   39.04 (38.80)   39.04 (38.99)   39.04 (38.73)   38.64 (39.04)  39.04 (39.25)    38.44 (38.90)   39.04 (39.13)   38.84 (38.99)   38.44 (39.03)   39.04 (38.62)   39.04 (39.18)   38.84 (38.90)   38.84 (39.07)   37.84 (38.84)   39.04 (39.24)   39.04 (39.09)   39.04 (39.14)   38.64 (38.78)   38.64 (39.03)   39.04 (38.95)   38.84 (39.05)   38.64 (39.14)   38.24 (39.08)
** INFO: <bus_callback:186>: Received EOS. Exiting ...
 
Quitting
App run successful

The output video file, out.mp4, is under the current folder and can be played after download.
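
Note that the container was started with --rm, so copy the file out while it is still running. The following is a minimal sketch with placeholder names, run from a second shell on the instance and then from your workstation.

# On the instance: copy the output out of the running container
# (container ID and SDK path are placeholders).
docker cp <container-id>:/root/deepstream_sdk_v4.0_x86_64/samples/configs/deepstream-app/out.mp4 ~/out.mp4

# From your workstation: download the file for local playback.
scp -i <key-name>.pem ubuntu@<instance-public-ip>:~/out.mp4 .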

Extending the architecture further, you can make use of AWS Batch to execute an event-driven pipeline.

Here, an input file arriving in S3 triggers an Amazon CloudWatch event, which stands up a G4 instance with a DeepStream Docker image, sourced from Amazon ECR, to process the pipeline. The video and ML analytics results can be pushed back to S3 for further processing.
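
As a hedged sketch of that wiring (the rule, queue, role, and job definition names are all illustrative, and the bucket must have CloudTrail data-event logging enabled), a CloudWatch Events rule can match S3 PutObject calls and submit an AWS Batch job:

# Fire on new objects in the ingest bucket (bucket name is a placeholder).
aws events put-rule \
    --name deepstream-s3-trigger \
    --event-pattern '{
      "source": ["aws.s3"],
      "detail-type": ["AWS API Call via CloudTrail"],
      "detail": {
        "eventSource": ["s3.amazonaws.com"],
        "eventName": ["PutObject"],
        "requestParameters": {"bucketName": ["video-ingest-bucket"]}
      }
    }'

# Submit a Batch job on a G4 job queue running the DeepStream image
# from ECR (ARNs and job definition are placeholders).
aws events put-targets \
    --rule deepstream-s3-trigger \
    --targets '[{
      "Id": "deepstream-batch",
      "Arn": "arn:aws:batch:us-east-1:123456789012:job-queue/g4-queue",
      "RoleArn": "arn:aws:iam::123456789012:role/events-invoke-batch",
      "BatchParameters": {
        "JobDefinition": "deepstream-pipeline",
        "JobName": "deepstream-video-job"
      }
    }]'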

Conclusion

With this basic architecture in place, you can execute a video analytics and ML inferencing pipeline. Future work can also include integration with Kinesis and cataloging DeepStream results. Let us know how it goes working with DeepStream and the NVIDIA container stack on AWS.

 

from AWS Compute Blog

Automating your lift-and-shift migration at no cost with CloudEndure Migration


This post is courtesy of Gonen Stein, Head of Product Strategy, CloudEndure 

Acquired by AWS in January 2019, CloudEndure offers a highly automated migration tool to simplify and expedite rehost (lift-and-shift) migrations. AWS recently announced that CloudEndure Migration is now available to all customers and partners at no charge.

Each free CloudEndure Migration license provides 90 days of use following agent installation. During this period, you can perform all migration steps: replicate your source machines, conduct tests, and perform a scheduled cutover to complete the migration.

Overview

In this post, I show you how to obtain free CloudEndure Migration licenses and how to use CloudEndure Migration to rehost a machine from an on-premises environment to AWS. Although I’ve chosen to focus on an on-premises-to-AWS use case, CloudEndure also supports migration from cloud-based environments. For those of you who are interested in advanced automation, I include information on how to further automate large-scale migration projects.

Understanding CloudEndure Migration

CloudEndure Migration works through an agent that you install on your source machines. No reboot is required, and there is no performance impact on your source environment. The CloudEndure Agent connects to the CloudEndure User Console, which issues an API call to your target AWS Region. The API call creates a Staging Area in your AWS account that is designated to receive replicated data.

CloudEndure Migration automated rehosting consists of three main steps:

  1. Installing the Agent: The CloudEndure Agent replicates entire machines to a Staging Area in your target AWS Region.
  2. Configuration and testing: You configure your target machine settings and launch non-disruptive tests.
  3. Performing cutover: CloudEndure automatically converts machines to run natively in AWS.

The Staging Area comprises both lightweight Amazon EC2 instances that act as replication servers and staging Amazon EBS volumes. Each source disk maps to an identically sized EBS volume in the Staging Area. The replication servers receive data from the CloudEndure Agent running on the source machines and write this data onto staging EBS volumes. One replication server can handle multiple source machines replicating concurrently.

After all source disks copy to the Staging Area, the CloudEndure Agent continues to track and replicate any changes made to the source disks. Continuous replication occurs at the block level, enabling CloudEndure to replicate any application that runs on supported x86-based Windows or Linux operating systems via an installed agent.

When the target machines launch for testing or cutover, CloudEndure automatically converts the target machines so that they boot and run natively on AWS. This conversion includes injecting the appropriate AWS drivers, making appropriate bootloader changes, modifying network adapters, and activating operating systems using the AWS KMS. This machine conversion process generally takes less than a minute, irrespective of machine size, and runs on all launched machines in parallel.

CloudEndure Migration Architecture


Installing CloudEndure Agent

To install the Agent:

  1. Start your migration project by registering for a free CloudEndure Migration license. The registration process is quick: use your email address to create a username and password for the CloudEndure User Console. You then use this console to create and manage your migration projects.

    After you log in to the CloudEndure User Console, you need an AWS access key ID and secret access key to connect your CloudEndure Migration project to your AWS account. To obtain these credentials, sign in to the AWS Management Console and create an IAM user (see the CLI sketch after this list), then enter the user’s credentials in the CloudEndure User Console.

  2. Configure and save your migration replication settings, including your migration project’s Migration Source, Migration Target, and Staging Area. For example, I selected the Migration Source: Other Infrastructure, because I am migrating on-premises machines. Also, I selected the Migration Target AWS Region: AWS US East (N. Virginia). The CloudEndure User Console also enables you to configure a variety of other settings after you select your source and target, such as subnet, security group, VPN or Direct Connect usage, and encryption.

    After you save your replication settings, CloudEndure prompts you to install the CloudEndure Agent on your source machines. In my example, the source machines consist of an application server, a web server, and a database server, and all three are running Debian GNU/Linux 9.

  3. Download the CloudEndure Agent Installer for Linux by running the following command:
    wget -O ./installer_linux.py https://console.cloudendure.com/installer_linux.py
  4. Run the Installer:
    sudo python ./installer_linux.py -t <INSTALLATION TOKEN> --no-prompt

    You can install the CloudEndure Agent locally on each machine. For large-scale migrations, use the unattended installation parameters with any standard deployment tool to remotely install the CloudEndure Agent on your machines.

    After the Agent installation completes, CloudEndure adds your source machines to the CloudEndure User Console. From there, your source machines undergo several initial replication steps. To obtain a detailed breakdown of these steps, in the CloudEndure User Console, choose Machines, and select a specific machine to open the Machine Details View page.


    Data replication consists of two stages: Initial Sync and Continuous Data Replication. During Initial Sync, CloudEndure copies all of the source disks’ existing content into EBS volumes in the Staging Area. After Initial Sync completes, Continuous Data Replication begins, tracking your source machines and replicating new data writes to the staging EBS volumes. Continuous Data Replication makes sure that your Staging Area always has the most up-to-date copy of your source machines.

  5. To track your source machines’ data replication progress, in the CloudEndure User Console, choose Machines, and see the Details view.
    When the Data Replication Progress status column reads Continuous Data Replication, and the Migration Lifecycle status column reads Ready for Testing, Initial Sync is complete. These statuses indicate that the machines are functioning correctly and are ready for testing and migration.
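
As referenced in step 1, here is a minimal AWS CLI sketch of creating the IAM user. The user name and policy file are placeholders; attach the permissions policy that the CloudEndure documentation specifies.

# Create a dedicated IAM user for CloudEndure Migration.
aws iam create-user --user-name cloudendure-migration

# Attach the policy required by CloudEndure (saved locally under a
# placeholder file name here).
aws iam put-user-policy --user-name cloudendure-migration \
    --policy-name CloudEndureMigrationPolicy \
    --policy-document file://cloudendure-policy.json

# Generate the access key ID and secret access key to enter in the
# CloudEndure User Console.
aws iam create-access-key --user-name cloudendure-migration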

Configuration and testing

To test how your machine runs on AWS, you must configure the Target Machine Blueprint. This Blueprint is a set of configurations that define where and how the target machines are launched and provisioned, such as target subnet, security groups, instance type, volume type, and tags.

For large-scale migration projects, APIs can be used to configure the Blueprint for all of your machines within seconds.
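
As a rough, hedged sketch of that automation (the endpoint paths follow our reading of the CloudEndure API documentation, subsequent calls may also require the XSRF token returned at login, and the token and IDs are placeholders), you authenticate once and then read or update Blueprints over HTTPS:

# Authenticate with your CloudEndure API token; the session cookie is
# reused for subsequent calls.
curl -c cookies.txt -X POST \
    -H "Content-Type: application/json" \
    -d '{"userApiToken": "<API-TOKEN>"}' \
    https://console.cloudendure.com/api/latest/login

# List the Blueprints in a project (project ID is a placeholder); a
# PATCH to an individual Blueprint updates its settings.
curl -b cookies.txt \
    https://console.cloudendure.com/api/latest/projects/<project-id>/blueprints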

I recommend performing a test at least two weeks before migrating your source machines, to give you enough time to identify potential problems and resolve them before you perform the actual cutover. For more information, see Migration Best Practices.

To launch machines in Test Mode:

  1. In the CloudEndure User Console, choose Machines.
  2. Select the Name box corresponding to each machine to test.
  3. Choose Launch Target Machines, Test Mode.

After the target machines launch in Test Mode, the CloudEndure User Console reports those machines as Tested and records the date and time of the test.

Performing cutover

After you have completed testing, your machines continue to be in Continuous Data Replication mode until the scheduled cutover window.

When you are ready to perform a cutover:

  1. In the CloudEndure User Console, choose Machines.
  2. Select the Name box corresponding to each machine to migrate.
  3. Choose Launch Target Machines, Cutover Mode.
    To confirm that your target machines successfully launch, see the Launch Target Machines menu. As your data replicates, verify that the target machines are running correctly, make any necessary configuration adjustments, perform user acceptance testing (UAT) on your applications and databases, and redirect your users.
  4. After the cutover completes, remove the CloudEndure Agent from the source machines and the CloudEndure User Console.
  5. At this point, you can also decommission your source machines.

Conclusion

In this post, I showed how to rehost an on-premises workload to AWS using CloudEndure Migration. CloudEndure automatically converts your machines from any source infrastructure to AWS infrastructure. That means they can boot and run natively in AWS, and run as expected after migration to the cloud.

If you have further questions, see CloudEndure Migration or Registering to CloudEndure Migration.

Get started now with free CloudEndure Migration licenses.

from AWS Compute Blog

Architecting multiple microservices behind a single domain with Amazon API Gateway


This post is courtesy of Roberto Iturralde, Solutions Architect.

Today’s modern architectures are increasingly microservices-based, with separate engineering teams working independently on services with their own feature requirements and deployment pipelines. The benefits of this approach include increased agility and release velocity.

Microservice architectures also come with some challenges, particularly when they make up parts of a public service or API. These include enforcing engineering and security standards and collating application logs and metrics for a cross-service operational view.

It’s also important to have the microservices feel like a cohesive product to external customers, for authentication and metering in particular:

  • The engineering teams want autonomy.
  • The security team wants a cross-service view and to make it easy for the teams to adhere to the organization’s guidelines.
  • Customers want to feel like they’re using a unified product.

The AWS toolbox

AWS offers many services that you can weave together to meet these needs.

Amazon API Gateway is a fully managed service for deploying and managing a unified front door to your applications. It has features for routing your domain’s traffic to different backing microservices, enforcing consistent authentication and authorization with fine-grained permissions across them, and implementing consistent API throttling and usage metering. The microservice that backs a given API can live in another AWS account. You don’t have to expose it to the internet.

Amazon Cognito is a user management service with rich support for authentication and authorization of users. You can manage those users within Amazon Cognito or federate them from external identity providers (IdPs). Amazon Cognito can vend JSON Web Tokens and integrates natively with API Gateway to support OAuth scopes for fine-grained API access.

Amazon CloudWatch is a monitoring and management service that collects and visualizes data across AWS services. CloudWatch dashboards are customizable home pages that can contain graphs showing metrics and alarms. You can customize these to represent a specific microservice, a collection of microservices that comprise a product, or any other meaningful view with fine-grained access control to the dashboard.

AWS X-Ray is an analysis and debugging tool designed for distributed applications. It has tools to help gain insight into the performance of your microservices, and the APIs that front them, to measure and debug any potential customer impact.

AWS Service Catalog allows the central management and self-service creation of AWS resources that meet your organization’s guidelines and best practices. You can require separate permissions for managing catalog entries from deploying catalog entries, allowing a central team to define and publish templates for resources across the company.

Architectural options

There are many options for how you can combine these AWS services to meet your requirements. Your decisions may also depend on your expertise with AWS. The following features are common to all the designs below:

  • Amazon Route 53 registers the custom domains and hosts their DNS. You could also use an external registrar and DNS service.
  • AWS Certificate Manager (ACM) manages Transport Layer Security (TLS) certificates for the custom domains that route traffic to API Gateway APIs in a given account.
  • Amazon Cognito manages the users who access the APIs in API Gateway.
  • Service Catalog holds catalog products for API Gateway APIs that adhere to the organizational guidelines and best practices, such as security configuration and default API throttling. Microservice teams have permission to create an API pointed to their service and configure specific parameters, with approvals required for production environments. For more information, see Standardizing infrastructure delivery in distributed environments using AWS Service Catalog.

The following shows common design patterns and their high-level benefits and challenges.

Single AWS account

Microservices, their fronting API Gateway APIs, and supporting services are in the same AWS account. This account also includes core AWS services such as the following:

  • Route 53 for domain name registration and DNS
  • ACM for managing server certificates for your domain
  • Amazon Cognito for user management
  • Service Catalog for the catalog of best-practice product templates to use across the organization

Single AWS account example

Use this approach if you do not yet have a multi-account strategy or if you use AWS native tools for observability. With a single AWS account, the microservices can share the same networking topology, and so more easily communicate with each other when needed. With all the API Gateway APIs in the same AWS account, you can configure API throttling, metering, authentication, and authorization features for a unified experience for customers. You can also route traffic to a given API using subdomains or base path mapping in API Gateway.
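
For example, here is a minimal sketch (the domain, API IDs, and stage name are placeholders) that maps two microservices’ APIs under one custom domain using base paths, so that api.example.com/orders and api.example.com/payments route to different APIs:

# Map each microservice's API under its own base path on the shared domain.
aws apigateway create-base-path-mapping \
    --domain-name api.example.com \
    --base-path orders \
    --rest-api-id abc123 \
    --stage prod

aws apigateway create-base-path-mapping \
    --domain-name api.example.com \
    --base-path payments \
    --rest-api-id def456 \
    --stage prod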

A single AWS account can manage the TLS certificates for your domains in one place, making them available to all API Gateway APIs in the account. Having the microservices and their API Gateway APIs in the same AWS account gives more complete X-Ray service maps, given that X-Ray currently can’t analyze traces across AWS accounts. Similarly, you have a complete view of the metrics that all AWS services publish to CloudWatch. This allows you to create CloudWatch dashboards that span the API Gateway APIs and their backing microservices.

There is an increased blast radius with this architecture, because the microservices share the same account. The microservices can impact each other through shared AWS service limits or mistakes by team members on other microservice teams. Most AWS services support tagging for cost allocation and granular access control, but there are some features of AWS services that do not. Because of this, it’s more difficult to separate the costs of each microservice completely.

Separate AWS accounts

When using separate AWS accounts, each API Gateway API lives in the same AWS account as its backing microservice. Separate AWS accounts hold the Service Catalog portfolio, domain registration (using Route 53), and aggregated logs from the microservices. The organization account, security account, and other core accounts are discussed further in the AWS Landing Zone Solution.

Separate AWS accounts

Use this architecture if you have a mature multi-account strategy and existing tooling for cross-account observability. In this approach, an AWS account encapsulates a microservice completely, for cost isolation and reduced blast radius. With the API Gateway API in the same account as the backing microservice, you have a complete view of the microservice in CloudWatch and X-Ray.

You can only meter API usage by microservice because API Gateway usage plans can’t track activity across accounts. Implement a process to ensure each customer’s API Gateway API key is the same across accounts for a smooth customer experience.
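
One way to do this is to create the key with an explicit value in each account. The following is a hedged sketch; the key name, value, and CLI profiles are placeholders, and API key values must be at least 20 characters.

# Run once per microservice account (selected via --profile) so the
# same key value exists everywhere.
aws apigateway create-api-key \
    --name customer-acme \
    --value "a1B2c3D4e5F6g7H8i9J0k1L2" \
    --enabled \
    --profile orders-account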

API Gateway base path mappings are local to an AWS account, so you must use subdomains to separate the microservices that comprise a product under a single domain. You still have a complete view of each microservice in the CloudWatch dashboards and X-Ray console of its own AWS account, but a view across microservices requires aggregation in a central AWS account or an external tool.

Central API account

Using a central API account is similar to the separate account architecture, except the API Gateway APIs are in a central account.

Central API account

This architecture is the best approach for most users. It offers a balance of the benefits of microservice separation with the unification of particular services for a better end-user experience. Each microservice has an AWS account, which isolates it from the other services and reduces the risk of AWS service limit contention or accidents due to sharing the account with other engineering teams.

Because each microservice lives in a separate account, that account’s bill captures all the costs for that microservice. You can track the API costs, which are in the shared API account, using tags on API Gateway resources.
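
As a hedged sketch of that tagging (the ARN and tag values are placeholders), you can tag an API’s stage so its costs can be attributed to the owning microservice in cost reports:

# Tag the API stage for cost allocation.
aws apigateway tag-resource \
    --resource-arn arn:aws:apigateway:us-east-1::/restapis/abc123/stages/prod \
    --tags microservice=orders,costcenter=retail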

While the microservices are isolated in separate AWS accounts, the API Gateway throttling, metering, authentication, and authorization features are centralized for a consistent experience for customers. You can use subdomains or API Gateway base path mappings to route traffic to different API Gateway APIs. Also, the TLS certificates for your domains are centrally managed and available to all API Gateway APIs.

CloudWatch metrics, X-Ray traces, and application logs are now split across accounts for a given microservice and its fronting API Gateway API. Unify these in a central AWS account or a third-party tool.

Conclusion

The breadth of the AWS Cloud presents many architectural options to customers. When designing your systems, it’s essential to understand the benefits and challenges of design decisions before implementing a solution.

This post walked you through three common architectural patterns for allowing independent microservice teams to operate behind a unified domain presented to your customers. The best approach for your organization depends on your priorities, experience, and familiarity with AWS.

from AWS Compute Blog

Using new vCPU-based On-Demand Instance limits with Amazon EC2


This post is contributed by Saloni Sonpal, Senior Product Manager, Amazon EC2

As an Amazon EC2 customer running On-Demand Instances, you can increase or decrease your compute capacity depending on your application’s needs, and only pay for what you use. EC2 implements instance limits, which give you a highly elastic experience while protecting you from unintentional spending or abuse.

To streamline launching On-Demand Instances, AWS is introducing simplified EC2 limits on September 24, 2019, based on the number of virtual central processing units (vCPUs) attached to your running instances. In addition to now measuring usage in number of vCPUs, there will only be five different On-Demand Instance limits: one limit that governs the usage of standard instance families such as A, C, D, H, I, M, R, T, and Z, and one limit per accelerated instance family for FPGA (F), graphic-intensive (G), general purpose GPU (P), and special memory optimized (X) instances.

Use your limits in terms of the number of vCPUs for the EC2 instance types to launch any combination of instance types that meet your changing application needs. For example, with a Standard Instance Family limit of 256 vCPUs, you could launch 32 m5.2xlarge instances (32 x 8 vCPUs) or 16 c5.4xlarge instances (16 x 16 vCPUs), or any combination of applicable instance types and sizes that total 256 vCPUs. For more information on vCPU mapping for instance types, see Amazon EC2 Instance Types.
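
You can also read your current vCPU-based limits programmatically. The following is a minimal sketch; the quota code shown is assumed to be the Running On-Demand Standard instances limit (confirm it in the Service Quotas console), and the Region is illustrative.

# Fetch the vCPU limit for Running On-Demand Standard (A, C, D, H, I,
# M, R, T, Z) instances; quota code assumed from the Service Quotas console.
aws service-quotas get-service-quota \
    --service-code ec2 \
    --quota-code L-1216C47A \
    --region us-east-1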

During a transition period from September 24, 2019, through October 24, 2019, you can opt in to receive vCPU-based instance limits. When you opt in, EC2 automatically computes your new limits, giving you access to launch at least the same number of instances (if not more) than you do currently. Beginning October 24, 2019, all accounts will switch to vCPU-based instance limits, and the current count-based instance limits will no longer be supported. Although the switchover will not impact your ability to launch EC2 instances, you should familiarize yourself with the new On-Demand Instance limits experience and opt in to vCPU limits at a time of your choosing.

You can always view and manage your limits from the Limits page in the Amazon EC2 console and from the EC2 Service page in the Service Quotas console. With Amazon CloudWatch metrics integration, you can monitor EC2 usage against limits as well as configure alarms to warn about approaching limits. For more information, see EC2 On-Demand Instance limits. If you have any questions, contact the AWS support team on the community forums and via AWS Premium Support.
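
As a hedged sketch of the CloudWatch integration mentioned above (the threshold and SNS topic ARN are placeholders), an alarm that warns as Standard-family vCPU usage approaches a 256-vCPU limit might look like the following:

# Alarm when running Standard-family On-Demand vCPUs exceed 230
# (roughly 90% of a 256-vCPU limit).
aws cloudwatch put-metric-alarm \
    --alarm-name standard-vcpu-limit-approaching \
    --namespace AWS/Usage \
    --metric-name ResourceCount \
    --dimensions Name=Service,Value=EC2 Name=Type,Value=Resource Name=Resource,Value=vCPU Name=Class,Value=Standard/OnDemand \
    --statistic Maximum \
    --period 300 \
    --evaluation-periods 1 \
    --threshold 230 \
    --comparison-operator GreaterThanThreshold \
    --alarm-actions arn:aws:sns:us-east-1:123456789012:limit-alerts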

from AWS Compute Blog