Tag: Tutorial

Are You Well-Architected? – AWS Virtual Workshop

Most businesses depend on a portfolio of technology solutions to operate and be successful every day. How do you know if you and your team are following best practice, or what the risks in your architectures might be? In this virtual hands-on workshop, we will show how the AWS Well-Architected framework provides prescriptive architectural advice, and how the AWS Well-Architected Tool allows you to measure and improve your technology portfolio. We will explain how other customers are using AWS Well-Architected in their business and share insights into what we have learned from reviewing tens of thousands of architectures across Operational Excellence, Security, Reliability, Performance Efficiency, and Cost Optimization. We will also demonstrate and walk you through step-by-step how to use the Well-Architected Tool.

Learning Objectives:
– Learn the value proposition and components of Well-Architected
– Learn how to use the Well-Architected Tool to measure workloads
– Learn insights from reviews of tens of thousands of architectures

View on YouTube

Running Java applications on Amazon EC2 A1 instances with Amazon Corretto

This post is contributed by Jeff Underhill | EC2 Principal Business Development Manager

Amazon Corretto is a no-cost, multiplatform, production-ready distribution of the Open Java Development Kit (OpenJDK). Production-ready Linux builds of JDK 8 and JDK 11 for the 64-bit Arm architecture were released on September 17, 2019. Scale-out Java applications can realize significant cost savings on Arm-based Amazon EC2 A1 instances. Read on to learn more about Amazon EC2 A1 instances, Amazon Corretto, and why support for 64-bit Arm in Amazon Corretto is a significant development for Java developers building cloud-native applications.

What are Amazon EC2 A1 instances?

Last year at re:Invent, Amazon Web Services (AWS) introduced Amazon EC2 A1 instances powered by AWS Graviton processors, which feature 64-bit Arm Neoverse cores and custom silicon designed by AWS. A1 instances deliver up to 45% cost savings for scale-out and Arm-based applications such as web servers, containerized microservices, caching fleets, distributed data stores, and Arm-native software development workflows, all supported by the extensive and growing Arm software ecosystem.
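
If you want to try A1 for yourself, a minimal sketch with the AWS CLI might look like the following; the AMI ID and key pair name are placeholders, and you would substitute a 64-bit Arm (arm64) AMI from your Region.

# Hypothetical example: launch an a1.large instance with the AWS CLI.
# AMI_ID and my-key-pair are placeholders; use an arm64 AMI from your
# Region, such as an Amazon Linux 2 or Ubuntu arm64 image.
aws ec2 run-instances \
    --image-id AMI_ID \
    --instance-type a1.large \
    --key-name my-key-pair \
    --count 1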

Why Java and Amazon EC2 A1 instances?
The majority of customers we’ve spoken to have experienced a seamless transition and are realizing cost benefits with A1 instances, especially those moving architecture-agnostic applications, which often run unmodified on Arm-based platforms. Today, there are many examples of architecture-agnostic programming languages, such as PHP, Python, Node.js, and Go, and all of these are well supported on Arm-based A1 instances. The Java programming language has been around for almost 25 years and is one of the most broadly adopted programming languages. The TIOBE Programming Community Index (as of September 2019) shows Java ranked #1 or #2 from 2004 through 2019, and the annual GitHub Octoverse report shows Java consistently ranked #2 from 2014 through 2018. Java was designed to have as few implementation dependencies as possible, which makes Java applications portable across processor architectures. This portability gives you a choice of how and where to run your Java-based workloads.

What is Amazon Corretto?
Java is one of the most popular languages in use by AWS customers, and AWS is committed to supporting Java and keeping it free. That’s why AWS introduced Amazon Corretto, a no-cost, multiplatform, production-ready distribution of OpenJDK from Amazon. AWS runs Corretto internally on thousands of production services and has committed to distributing security updates to Corretto 8 at no cost until at least June 2023. Amazon Corretto is available for both JDK 8 and JDK 11. You can learn more in the documentation, and if you’re curious about what goes into building Java in the open, and specifically the Amazon Corretto OpenJDK distribution, check out this OSCON video.

What’s new?
Java provides you with the choice of how and where to run your applications, and Amazon EC2 provides you with the broadest and deepest portfolio of compute instances available in the cloud. AWS wants its customers to be able to run their workloads in the most efficient way for their specific use case and business requirements, and that includes providing a consistent platform experience across the Amazon EC2 instance portfolio. We’re fortunate to have James Gosling, the designer of Java, as a member of the Amazon team, and he recently took to Twitter to announce the General Availability (GA) of Amazon Corretto for the Arm architecture:

For those of you that like playing with Linux on ARM, the Corretto build for ARM64 is now GA. Fully production ready. Both JDK8 and JDK11

 

Open Source – “It takes a village”
It’s important to recognize the significance of Open Source Software (OSS) and the community of people involved to develop, build and test a production ready piece of software as complex as Java. So, because I couldn’t have said it better myself, here’s a tweet from my colleague Matt Wilson who took a moment to recognize all the hard work of the Red Hat and Java community developers:

Many thanks to all the hard work from @redhatopen developers, and all the #OpenSource Java community that played a part in making 64 bit Arm support in OpenJDK distributions a reality!

Ready to cost optimize your scale-out Java applications?
With the 8.222.10.4 and 11.0.4.11.1 releases of Amazon Corretto that became generally available on September 17, 2019, new and existing Corretto users can now deploy their production Java applications on Arm-based EC2 A1 instances. If you have scale-out applications and are interested in optimizing cost, take the A1 instances for a test drive to see if they’re a fit for your specific use case. You can learn more at the Amazon Corretto website, and the downloads are all available for Amazon Corretto 8 and Amazon Corretto 11; if you’re using containers, here’s the Docker Official image. If you have any questions about your own workloads running on Amazon EC2 A1 instances, contact us at [email protected].
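
If you use containers, one quick way to try Corretto (assuming Docker is installed) is the Docker Official image mentioned above. A minimal sketch:

# Pull the Amazon Corretto Docker Official image and print the JDK version.
# (On an Arm instance, Docker needs an arm64 build of the image; check the
# image's supported architectures for your platform.)
docker run --rm amazoncorretto:11 java -version
docker run --rm amazoncorretto:8 java -version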

from AWS Compute Blog

Building a pocket platform-as-a-service with Amazon Lightsail

This post was written by Robert Zhu, a principal technical evangelist at AWS and a member of the GraphQL Working Group. 

When you start a new web-based project, you figure out what kind of infrastructure you need. For my projects, I prioritize simplicity, flexibility, value, and on-demand capacity, and I quickly find myself needing the following features:

  • DNS configuration
  • SSL support
  • Subdomain to a service
  • SSL reverse proxy to localhost (similar to ngrok and serveo)
  • Automatic deployment after a commit to the source repo (nice to have)

new projects have different requirements compared to mature projects

Amazon Lightsail is perfect for building a simple “pocket platform” that provides all these features. It’s cheap and easy for beginners and provides a friendly interface for managing virtual machines and DNS. This post shows step-by-step how to assemble a pocket platform on Amazon Lightsail.

Walkthrough

The following steps describe the process. If you prefer to learn by watching videos instead, view the steps by watching the following: Part 1, Part 2, Part 3.

Prerequisites

You should be familiar with: Linux, SSH, SSL, Docker, Nginx, HTTP, and DNS.

Steps

Use the following steps to assemble a pocket platform.

Creating a domain name and static IP

First, you need a domain name for your project. You can register your domain with any domain name registration service, such as Amazon Route53.

  1. After your domain is registered, open the Lightsail console, choose the Networking tab, and choose Create static IP.

Lightsail console networking tab

  2. On the Create static IP page, give the static IP a name you can remember and don’t worry about attaching it to an instance just yet. Choose Create DNS zone.

 

  3. On the Create a DNS zone page, enter your domain name and then choose Create DNS zone. For this post, I use the domain “raccoon.news”.

DNS zone in Lightsail with two A records

  4. Choose Add Record and create two A records, “@.raccoon.news” and “raccoon.news”, both resolving to the static IP address you created earlier. Then, copy the values for the Lightsail name servers at the bottom of the page. Go back to your domain name provider, and edit the name servers to point to the Lightsail name servers. Since I registered my domain with Route53, here’s what it looks like:

Changing name servers in Route53

Note: If you registered your domain with Route53, make sure you change the name server values under “domain registration,” not “hosted zones,” and delete the hosted zone that Route53 automatically creates for your domain.
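
Before moving on, you can check whether the name server change has propagated. A quick sketch with dig (assuming it is installed, and using my example domain):

# Verify that the domain now resolves through the Lightsail name servers
# and that the A record points at your static IP ("raccoon.news" is my
# example domain; substitute your own).
dig NS raccoon.news +short
dig A raccoon.news +short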

Setting up your Lightsail instance

While you wait for your DNS changes to propagate, set up your Lightsail instance.

  1. In the Lightsail console, create a new instance and select Ubuntu 18.04.

Choose OS Only and Ubuntu 18.04 LTS

For this post, you can use the cheapest instance. However, when you run anything in production, make sure you choose an instance with enough capacity for your workload.

  2. After the instance launches, select it, then click on the Networking tab and open two additional TCP ports: 443 and 2222. Then, attach the static IP allocated earlier.
  3. To connect to the Lightsail instance using SSH, download the SSH key, and save it to a friendly path, for example: ~/ls_ssh_key.pem.

Click the download link to download the SSH key

  • Restrict permissions for your SSH key:

chmod 400 ~/ls_ssh_key.pem

  • Connect to the instance using SSH:

ssh -i ~/ls_ssh_key.pem ubuntu@STATIC_IP

  4. After you connect to the instance, install Docker to help manage deployment and configuration:

sudo apt-get update && sudo apt-get install docker.io
sudo systemctl start docker
sudo systemctl enable docker
sudo docker run hello-world

  5. After Docker is installed, set up a gateway using the nginx-proxy container. This container lets you route traffic to other containers by providing the “VIRTUAL_HOST” environment variable. Conveniently, nginx-proxy comes with an SSL companion, nginx-proxy-letsencrypt, which uses Let’s Encrypt.

# start the reverse proxy container
sudo docker run --detach \
    --name nginx-proxy \
    --publish 80:80 \
    --publish 443:443 \
    --volume /etc/nginx/certs \
    --volume /etc/nginx/vhost.d \
    --volume /usr/share/nginx/html \
    --volume /var/run/docker.sock:/tmp/docker.sock:ro \
    jwilder/nginx-proxy

# start the letsencrypt companion
sudo docker run --detach \
    --name nginx-proxy-letsencrypt \
    --volumes-from nginx-proxy \
    --volume /var/run/docker.sock:/var/run/docker.sock:ro \
    --env "DEFAULT_EMAIL=YOUREMAILHERE" \
    jrcs/letsencrypt-nginx-proxy-companion

# start a demo web server under a subdomain
sudo docker run --detach \
    --name nginx \
    --env "VIRTUAL_HOST=test.EXAMPLE.COM" \
    --env "LETSENCRYPT_HOST=test.EXAMPLE.COM" \
    nginx

Pay special attention to setting a valid email for the DEFAULT_EMAIL environment variable on the proxy companion; otherwise, you’ll need to specify the email whenever you start a new container. If everything went well, you should be able to navigate to https://test.EXAMPLE.COM and see the nginx default content with a valid SSL certificate that has been auto-generated by Let’s Encrypt.

A publicly accessible URL served from our Lightsail instance with SSL support

Troubleshooting:

  • In the Lightsail console, make sure that Port 443 is open.
  • Let’s Encrypt rate limiting (for reference if you encounter issues with SSL certificate issuance): https://letsencrypt.org/docs/rate-limits/.
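
To inspect the certificate that was issued, one option (assuming openssl is available) is a quick check from the command line:

# Print the issuer, subject, and validity dates of the certificate served
# for the test subdomain (replace test.EXAMPLE.COM with your subdomain):
openssl s_client -connect test.EXAMPLE.COM:443 -servername test.EXAMPLE.COM </dev/null 2>/dev/null \
    | openssl x509 -noout -issuer -subject -dates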

Deploying a localhost proxy with SSL

Most developers prefer to code on a dev machine (laptop or desktop) because they can access the file system, use their favorite IDE, recompile, debug, and more. Unfortunately, developing on a dev machine can introduce bugs due to differences from the production environment. Also, certain services (for example, Alexa Skills or GitHub Webhooks) require SSL to work, which can be annoying to configure on your local machine.

For this post, you can use an SSL reverse proxy to make your local dev environment resemble production from the browser’s point of view. This technique also allows your test application to make API requests to production endpoints that enforce Cross-Origin Resource Sharing (CORS) restrictions. While it’s not a perfect solution, it takes you one step closer to a frictionless dev/test feedback loop. You may have used services like ngrok and serveo for this purpose; by running your own reverse proxy, you don’t need to spread your domain and SSL settings across multiple services.

To run a reverse proxy, create an SSH reverse tunnel. After the reverse tunnel SSH session is initiated, all network requests to the specified port on the host are proxied to your dev machine. However, since your Lightsail instance is already using port 22 for VPS management, you need a different SSH port (use 2222 from earlier). To keep everything organized, run the SSH server for port 2222 inside a special proxy container. The following diagram shows this solution.

Diagram of how an SSL reverse proxy works with SSH tunneling

Using Dockerize an SSH service as a starting point, I created a repository with a working Dockerfile and nginx config for reference. Here are the summary steps:

git clone https://github.com/robzhu/nginx-local-tunnel
cd nginx-local-tunnel

docker build -t {DOCKERUSER}/dev-proxy . --build-arg ROOTPW={PASSWORD}

# start the proxy container
# Note, 2222 is the port we opened on the instance earlier.
docker run --detach -p 2222:22 \
    --name dev-proxy \
    --env "VIRTUAL_HOST=dev.EXAMPLE.com" \
    --env "LETSENCRYPT_HOST=dev.EXAMPLE.com" \
    {DOCKERUSER}/dev-proxy

# Ports explained:
# 3000 refers to the port that your app is running on localhost.
# 2222 is the forwarded port on the host that we use to directly SSH into the container.
# 80 is the default HTTP port, forwarded from the host
ssh -R :80:localhost:3000 -p 2222 root@dev.EXAMPLE.com

# Start sample app on localhost
cd node-hello && npm i
nodemon main.js

# Point browser to https://dev.EXAMPLE.com

The reverse proxy subdomain works only as long as the reverse proxy SSH connection remains open. If there is no SSH connection, you should see an nginx gateway error:

Nginx will return 502 if you try to access the reverse proxy without running the SSH tunnel
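
You can confirm this state from the command line as well. A quick check with curl, which should report the gateway error while the tunnel is down:

# With the SSH tunnel closed, the proxy has no upstream for the subdomain,
# so the status line should show a 502 gateway error:
curl -sI https://dev.EXAMPLE.com | head -n 1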

While this solution is handy, be extremely careful, as it could expose your work-in-progress to the internet. Consider adding additional authorization logic and settings for allowing/denying specific IPs.

Setting up automatic deployment

Finally, build an automation workflow that watches for commits on a source repository, builds an updated container image, and re-deploys the container on your host. There are many ways to do this, but here’s the combination I’ve selected for simplicity:

  1. First, create a GitHub repository to host your application source code. For demo purposes, you can clone my express hello-world example. On the Docker Hub page, create a new repository, click the GitHub icon, and select your repository from the dropdown list.

Create GitHub repo to host your application source code

  2. Docker Hub now watches for commits to the repo and builds a new image with the “latest” tag in response. After the image is available, start the container as follows:

docker run --detach \
    --name app \
    --env "VIRTUAL_HOST=app.raccoon.news" \
    --env "LETSENCRYPT_HOST=app.raccoon.news" \
    robzhu/express-hello

  3. Finally, use Watchtower to poll Docker Hub and update the “app” container whenever a new image is detected. Replace APPCONTAINERNAME below with the name of the container to watch (“app” in this example):

docker run -d \
    --name watchtower \
    -v /var/run/docker.sock:/var/run/docker.sock \
    containrrr/watchtower \
    --interval 10 \
    APPCONTAINERNAME

 

Summary

Your Pocket PaaS is now complete! As long as you deploy new containers and add the VIRTUAL_HOST and LETSENCRYPT_HOST environment variables, you get automatic subdomain routing and SSL termination. With SSH reverse tunneling, you can develop on your local dev machine using your favorite IDE and test/share your app at https://dev.EXAMPLE.COM.

Because this is a public URL with SSL, you can test Alexa Skills, GitHub Webhooks, CORS settings, PWAs, and anything else that requires SSL. Once you’re happy with your changes, a git commit triggers an automated rebuild of your Docker image, which is automatically redeployed by Watchtower.

I hope this information was useful. Thoughts? Leave a comment or direct-message me on Twitter: @rbzhu.

from AWS Compute Blog

Update: Issue affecting HashiCorp Terraform resource deletions after the VPC Improvements to AWS Lambda

On September 3, 2019, we announced an exciting update that improves the performance, scale, and efficiency of AWS Lambda functions when working with Amazon VPC networks. You can learn more about the improvements in the original blog post. These improvements represent a significant change in how elastic network interfaces (ENIs) are configured to connect to your VPCs. With this new model, we identified an issue where VPC resources, such as subnets, security groups, and VPCs, can fail to be destroyed via HashiCorp Terraform. More information about the issue can be found here. In this post, we help you identify whether this issue affects you and walk through the steps to resolve it.

How do I know if I’m affected by this issue?

This issue only affects you if you use HashiCorp Terraform to destroy environments. Terraform AWS Provider versions v2.30.0 and older are affected. With these versions, you may encounter errors when destroying environments that contain AWS Lambda functions, VPC subnets, security groups, and Amazon VPCs. Typically, terraform destroy fails with errors similar to the following:

Error deleting subnet: timeout while waiting for state to become 'destroyed' (last state: 'pending', timeout: 20m0s)

Error deleting security group: DependencyViolation: resource sg-<id> has a dependent object
        	status code: 400, request id: <guid>

Depending on which AWS Regions the VPC improvements have been rolled out to, you may encounter these errors in some Regions and not others.

How do I resolve this issue if I am affected?

You have two options to resolve this issue. The recommended option is to upgrade your Terraform AWS Provider to v2.31.0 or later. To learn more about upgrading the Provider, visit the Terraform AWS Provider Version 2 Upgrade Guide. You can find information and source code for the latest releases of the AWS Provider on this page. The latest version of the Terraform AWS Provider contains a fix for this issue as well as changes that improve the reliability of the environment destruction process. We highly recommend that you upgrade the Provider version as the preferred option to resolve this issue.
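
As a sketch, upgrading usually amounts to raising the version constraint in your configuration and re-initializing; the constraint value below is illustrative:

# In your provider block, raise the constraint, for example:
#   provider "aws" {
#     version = ">= 2.31.0"
#     ...
#   }
# Then re-initialize so Terraform downloads the newer provider, and verify:
terraform init -upgrade
terraform plan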

If you are unable to upgrade the Provider version, you can mitigate the issue by making changes to your Terraform configuration. You need to make the following sets of changes to your configuration:

  1. Add an explicit dependency, using a depends_on argument, to the aws_security_group and aws_subnet resources that you use with your Lambda functions. The dependency has to be added on the aws_security_group or aws_subnet and target the aws_iam_policy resource associated with the IAM role configured on the Lambda function. See the example below for more details.
  2. Override the delete timeout for all aws_security_group and aws_subnet resources. The timeout should be set to 40 minutes.

The following configuration file shows an example where these changes have been made (scroll to see the full code):

provider "aws" {
    region = "eu-central-1"
}
 
resource "aws_iam_role" "lambda_exec_role" {
  name = "lambda_exec_role"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
 
data "aws_iam_policy" "LambdaVPCAccess" {
  arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
}
 
resource "aws_iam_role_policy_attachment" "sto-lambda-vpc-role-policy-attach" {
  role       = "${aws_iam_role.lambda_exec_role.name}"
  policy_arn = "${data.aws_iam_policy.LambdaVPCAccess.arn}"
}
 
resource "aws_security_group" "allow_tls" {
  name        = "allow_tls"
  description = "Allow TLS inbound traffic"
  vpc_id      = "vpc-<id>"
 
  ingress {
    # TLS (change to whatever ports you need)
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    # Please restrict your ingress to only necessary IPs and ports.
    # Opening to 0.0.0.0/0 can lead to security vulnerabilities.
    cidr_blocks = ["0.0.0.0/0"]
  }
 
  egress {
    from_port       = 0
    to_port         = 0
    protocol        = "tcp"
    cidr_blocks     = ["0.0.0.0/0"]
  }
  
  timeouts {
    delete = "40m"
  }
  depends_on = ["aws_iam_role_policy_attachment.sto-lambda-vpc-role-policy-attach"]  
}
 
resource "aws_subnet" "main" {
  vpc_id     = "vpc-<id>"
  cidr_block = "172.31.68.0/24"

  timeouts {
    delete = "40m"
  }
  depends_on = ["aws_iam_role_policy_attachment.sto-lambda-vpc-role-policy-attach"]
}
 
resource "aws_lambda_function" "demo_lambda" {
    function_name = "demo_lambda"
    handler = "index.handler"
    runtime = "nodejs10.x"
    filename = "function.zip"
    source_code_hash = "${filebase64sha256("function.zip")}"
    role = "${aws_iam_role.lambda_exec_role.arn}"
    vpc_config {
     subnet_ids         = ["${aws_subnet.main.id}"]
     security_group_ids = ["${aws_security_group.allow_tls.id}"]
  }
}

The key block to note here is the following, which can be seen in both the “allow_tls” security group and “main” subnet resources:

timeouts {
  delete = "40m"
}
depends_on = ["aws_iam_role_policy_attachment.sto-lambda-vpc-role-policy-attach"]

These changes should be made to your Terraform configuration files before destroying your environments for the first time.

Can I delete resources remaining after a failed destroy operation?

Destroying environments without upgrading the provider or making the configuration changes outlined above may result in failures. As a result, you may have ENIs in your account that remain due to a failed destroy operation. These ENIs can be deleted manually once the Lambda functions that use them have been deleted (typically within 40 minutes). Once the ENIs have been deleted, you can re-run terraform destroy.
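
If you prefer to script the cleanup, here is a hedged sketch with the AWS CLI; it assumes the Lambda-managed ENIs carry descriptions beginning with “AWS Lambda VPC ENI”, and eni-<id> is a placeholder:

# List ENIs created by Lambda (their descriptions typically begin with
# "AWS Lambda VPC ENI"), then delete a leftover one by its ID:
aws ec2 describe-network-interfaces \
    --filters "Name=description,Values=AWS Lambda VPC ENI*" \
    --query "NetworkInterfaces[].NetworkInterfaceId"

aws ec2 delete-network-interface --network-interface-id eni-<id>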

from AWS Compute Blog

AWS Office Hours: Amazon CloudFront – AWS Online Tech Talks

Just getting started with Amazon CloudFront and Lambda@Edge? Get your answers directly from our experts during this AWS Office Hours session. Learn what a content delivery network (CDN) such as Amazon CloudFront is and how it works. Find out about the benefits it provides, common challenges, how it performs, recently released features, and examples of how customers are using CloudFront. You will also learn about customizing content delivery through Lambda@Edge, a serverless compute service that lets you execute functions to customize the content delivered through CloudFront. Lastly, you will learn about best practices.

Learning Objectives:
– How do I accelerate both my static and dynamic content on AWS?
– How can I secure my content on AWS at the Edge?
– What are some best practices for Amazon CloudFront?

View on YouTube

Improving the Getting Started experience with AWS Lambda

A common question from developers is, “How do I get started with creating serverless applications?” Frequently, I point developers to the AWS Lambda console where they can create a new Lambda function and immediately see it working.

While you can learn the basics of a Lambda function this way, it does not encompass the full serverless experience. It does not allow you to take advantage of best practices like infrastructure as code (IaC) or continuous integration and continuous delivery (CI/CD). A full-on serverless application could include a combination of services like Amazon API Gateway, Amazon S3, and Amazon DynamoDB.

To help you start right with serverless, AWS has added a Create application experience to the Lambda console. This enables you to create serverless applications from ready-to-use sample applications, which follow these best practices:

  • Use infrastructure as code (IaC) for defining application resources
  • Provide a continuous integration and continuous deployment (CI/CD) pipeline for deployment
  • Exemplify best practices in serverless application structure and methods

IaC

Using IaC allows you to automate deployment and management of your resources. When you define and deploy your IaC architecture, you can standardize infrastructure components across your organization. You can rebuild your applications quickly and consistently without having to perform manual actions. You can also enforce best practices such as code reviews.

When you’re building serverless applications on AWS, you can use AWS CloudFormation directly, or choose the AWS Serverless Application Model, also known as AWS SAM. AWS SAM is an open-source framework that makes it easier to build serverless applications quickly. It provides a shorthand syntax to express APIs, functions, databases, and event source mappings. Because AWS SAM is built on CloudFormation, you can specify any other AWS resources using CloudFormation syntax in the same template.

Through this new experience, AWS provides an AWS SAM template that describes the entire application. You have instant access to modify the resources and security as needed.
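
To make the shorthand concrete, here is a minimal, hypothetical SAM template and the commands to deploy it with the AWS SAM CLI; the function name, handler, and paths are illustrative, not the ones the console generates.

# Write a minimal (hypothetical) SAM template, then build and deploy it.
cat > template.yaml <<'EOF'
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.handler
      Runtime: nodejs10.x
      CodeUri: src/
      Events:
        HelloApi:
          Type: Api
          Properties:
            Path: /hello
            Method: get
EOF

sam build
sam deploy --guided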

CI/CD

When editing a Lambda function in the console, it’s live the moment the function is saved. This works when developing against test environments, but it risks introducing untested, faulty code into production environments. That’s a stressful atmosphere for developers, with the unneeded overhead of manually testing code on each change.

Developers say that they are looking for an automated process for consistently testing and deploying reliable code. What they need is a CI/CD pipeline.

CI/CD pipelines are more than just a convenience; they can be critical in helping development teams be successful. They provide code integration, testing, deployments to multiple environments, notifications, rollbacks, and more. The exact functionality depends on how you choose to configure the pipeline.

When you create a new application through the Lambda console, you create a CI/CD pipeline to provide a framework for automated testing and deployment. The pipeline includes a source repository (AWS CodeCommit or GitHub), a build project, and an AWS CodePipeline pipeline that deploys each committed change.

Best practices

Like any other development pattern, there are best practices for serverless applications. These include testing strategies, local development, IaC, and CI/CD. When you create a Lambda function using the console, most of this is abstracted away. A common request from developers learning about serverless is for opinionated examples of best practices.

When you choose Create application, the application uses many best practices, including:

  • Managing IaC architectures
  • Managing deployment with a CI/CD pipeline
  • Runtime-specific test examples
  • Runtime-specific dependency management
  • A Lambda execution role with permissions boundaries
  • Application security with managed policies

Create an application

Now, let’s walk through creating your first application.

  1. Open the Lambda console, and choose Applications, Create application.
  2. Choose Serverless API backend. The next page shows the architecture, services used, and development workflow of the chosen application.
  3. Choose Create and then configure your application settings.
    • For Application name and Application description, enter values.
    • For Runtime, the preview supports Node.js 10.x. Stay tuned for more runtimes.
    • For Source Control Service, I chose CodeCommit for this example, but you can choose either. If you choose GitHub, you are asked to connect to your GitHub account for authorization.
    • For Repository Name, feel free to use whatever you want.
    • Under Permissions, check Create roles and permissions boundary.
  4. Choose Create.

Exploring the application

That’s it! You have just created a new serverless application from the Lambda console. It takes a few moments for all the resources to be created. Take a moment to review what you have done so far.

Across the top of the application, you can see four tabs, as shown in the following screenshot:

  • Overview—Shows the current page, including a Getting started section, and application and toolchain resources of the application
  • Code—Shows the code repository and instructions on how to connect
  • Deployments—Links to the deployment pipeline and a deployment history
  • Monitoring—Reports on the application health and performance

getting started dialog

The Resources section lists all the resources specific to the application. This application includes three Lambda functions, a DynamoDB table, and the API. The following screenshot shows the resources for this sample application.

Resources view

Finally, the Infrastructure section lists all the resources for the CI/CD pipeline, including the AWS Identity and Access Management (IAM) roles, the permissions boundary policy, the S3 bucket, and more. The following screenshot shows the infrastructure resources for this sample application.

Application view

About Permissions Boundaries

This new Create application experience uses an IAM permissions boundary to help secure the function that gets created and to prevent an overly permissive function policy from being created later on. The boundary is a separate policy that acts as a maximum bound on the permissions that an IAM policy for your function can grant. This model allows developers to build out the security model of their application while still meeting requirements that are often put in place to prevent overly permissive policies, and it is considered a best practice. By default, the permissions boundary that is created limits the application’s access to just the resources that are included in the example template. To expand the permissions of the application, you first need to extend what is defined in the permissions boundary.
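
As an illustration of the model (not the exact policy the console generates), creating a role capped by a boundary might look like the following; the role name, trust policy file, and boundary ARN are placeholders.

# Create an execution role whose effective permissions can never exceed
# the permissions boundary policy, regardless of what is attached later:
aws iam create-role \
    --role-name my-function-role \
    --assume-role-policy-document file://trust-policy.json \
    --permissions-boundary arn:aws:iam::123456789012:policy/my-boundary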

A quick test

Now that you have an application up and running, try a quick test to see if it works.

  1. In the Lambda console, in the left navigation pane, choose Applications.
  2. For Applications, choose Start Right application.
  3. On the Endpoint details card, copy your endpoint.
  4. From a terminal, run the following command:
    curl -d '{"id":"id1", "name":"name1"}' -H "Content-Type: application/json" -X POST <YOUR-ENDPOINT>

You can find tips like this, and other getting started hints in the README.md file of your new serverless application.

Outside of the console

With the introduction of the Create application experience, there is now a closer tie between the Lambda console and local development. Before this feature, you would get started in the Lambda console or with a framework like AWS SAM. Now, you can start the project in the console and then move to local development.

You have already walked through the steps of creating an application; now, pull it down locally and make some changes.

  1. In the Lambda console, in the left navigation pane, choose Applications.
  2. Select your application from the list and choose the Code tab.
  3. If you used CodeCommit, choose Connect instructions to configure your local git client. To copy the URL, choose the SSH squares icon.
  4. If you used GitHub, click on the SSH squares icon.
  5. In a terminal window, run the following command:
    git clone <your repo>
  6. Update one of the Lambda function files and save it.
  7. In the terminal window, commit and push the changes:
    git commit -am "simple change"
    git push
  8. In the Lambda console, under Deployments, choose View in CodePipeline.

CodePipeline pipeline

The build has started and the application is being deployed.

Caveats

submit feedback

This feature is currently available in US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo). This is a feature beta and, as such, is not a full representation of the final experience. We know this is limited in scope, and we request your feedback. Let us know your thoughts about any future enhancements you would like to see. The best way to give feedback is to use the feedback button in the console.

Conclusion

With the addition of the Create application feature, you can now start right with full serverless applications from within the Lambda console. This delivers the simplicity and ease of the console while still offering the power of an application built on best practices.

Until next time: Happy coding!

from AWS Compute Blog

Getting Hands-On with Machine Learning and Ready to Race in the AWS DeepRacer League

Developers, start your engines! This virtual workshop will provide developers of all skill levels an opportunity to get hands-on with AWS DeepRacer. Learn about the basics of machine learning and reinforcement learning (a machine learning technique, ideal for autonomous driving). During the workshop, you will build a reinforcement learning model and submit that model to the AWS DeepRacer League for a chance to win a trip to re:Invent 2019.

Learning Objectives:
– Learn the basics of machine learning
– Build a reinforcement learning model
– Learn how to get started with AWS DeepRacer and participate in the league

View on YouTube

Lower Costs by Right Sizing Your Instance with Amazon EC2 T3 General Purpose Burstable Instances

EC2 T3 instances are the next generation of low-cost, burstable, general-purpose instance types that provide a baseline level of CPU performance with the ability to burst CPU usage at any time for as long as required. T3 instances are designed for applications with moderate CPU usage that experience temporary spikes in use. T3 instances also offer a balance of compute, memory, and network resources for a broad spectrum of general-purpose workloads. These include microservices, low-latency interactive applications, small and medium databases, virtual desktops, development environments, code repositories, and business-critical applications. In this tech talk, we will provide an overview of T3 instances, help you understand what workloads are ideal for them, and show how the T3 credit system works so that you can lower your EC2 instance costs today.
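
As a hedged illustration of watching the credit system in practice, you can query the CPUCreditBalance CloudWatch metric for a T3 instance; the instance ID and time range below are placeholders.

# Retrieve a T3 instance's hourly average CPU credit balance for one day:
aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name CPUCreditBalance \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --start-time 2019-09-16T00:00:00Z \
    --end-time 2019-09-17T00:00:00Z \
    --period 3600 \
    --statistics Average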

Learning Objectives:
– Understand what T3 instances are
– Understand what workloads are appropriate for T3 instances
– Understand how the credit system works

View on YouTube

Enterprise-Class Security, High-Availability, & Scalability with Amazon ElastiCache

Modern-day enterprises face large-scale challenges such as security, global access, data availability, and scalability to meet their business needs. In this tech talk, you will learn about new enterprise-friendly enhancements you can leverage for your mission-critical workloads. From reader endpoints and customer-managed keys for encryption to online scaling up or down, we continue to deliver the scalability, availability, compliance, and security that you care about.

Learning Objectives:
– Understand that Amazon ElastiCache is a fully-managed, enterprise-friendly in-memory database compatible with Redis and Memcached
– Learn how the newly released features work
– Explore these new features on your own in the console to build your enterprise apps

View on YouTube

Accelerate Cloud Adoption and Reduce Operational Risk with AWS Managed Services

Enterprises want to adopt AWS at scale, but their teams often lack the skills and experience needed to make the move. AWS Managed Services (AMS) operates AWS on your behalf, providing a secure and compliant Landing Zone, an enterprise operating model “out-of-the-box,” day-to-day infrastructure management, security and compliance control, and cost optimization. In this tech talk, we’ll show you how AMS accelerates your migration to AWS, reduces your operating costs, improves security and compliance, and enables you to focus on your differentiating business priorities.

Learning Objectives:
– Understand how to properly plan for operating your cloud infrastructure post migration
– Learn how AMS extends your security perimeter to the cloud, including Active Directory integration and compliance certification mapping
– Learn how AMS lowers the cost of operating AWS

View on YouTube