Tag: EC2

Automatically Remediate Noncompliant AWS Resources using Lambda

Automatically Remediate Noncompliant AWS Resources using Lambda

While enterprises are capable of rapidly scaling their infrastructure in the cloud, there’s a corresponding increase in the demand for scalable mechanisms to meet security and compliance requirements based on corporate policies, auditors, security teams, and others.

For example, we can easily and rapidly launch hundreds of resources – such as EC2 instances – in the cloud, but we also need to have approaches for managing the security and compliance of these resources along with the surrounding infrastructure. It’s not good enough to simply passively monitor noncompliant resources; you need to automatically fix the configuration that led to the noncompliant resources.

Using a collection of AWS services, you can detect non-compliant resources and automatically remediate these resources to maintain compliance without human intervention.

In this post, you’ll learn how to automatically remediate non-compliant AWS resources as code using AWS services such as AWS Config Rules, Amazon CloudWatch Event Rules, and AWS Lambda. You’ll walk through step-by-step instructions for configuring automated remediation using the AWS Console.

The diagram below shows the key AWS resources and relationships you’ll be creating.

Let’s get started!

Create an S3 Bucket for CloudTrail

In this section, you’ll create an Amazon S3 bucket for use with CloudTrail. If you’ve already established CloudTrail, this section is optional. Here are the steps:

  1. Go to the S3 console
  2. Click the Create bucket button
  3. Enter ccoa-cloudtrail-ACCOUNTID in the Bucket name field (replacing ACCOUNTID with your account id)
  4. Click Next on the Configure Options screen
  5. Click Next on the Set Permissions screen
  6. Click Create bucket on the Review screen

Create a CloudTrail Trail

In this section, you’ll create a trail for AWS CloudTrail. If you’ve already established CloudTrail, this section is optional. Here are the steps:

  1. Go to the CloudTrail console
  2. Click the Create trail button
  3. Enter ccoa-cloudtrail in the Trail name field
  4. Choose the checkbox next to Select all S3 buckets in your account in the Data events section
  5. Choose the No radio button for the Create a new S3 bucket field in the Storage location section.
  6. Choose the S3 bucket you just created from the S3 bucket dropdown.
  7. Click the Create button
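
If you prefer to script these setup steps rather than click through the console, here is a minimal sketch using the AWS SDK for JavaScript (the same aws-sdk module the Lambda function uses later in this post). It assumes the ccoa-cloudtrail-ACCOUNTID bucket from the previous section already exists and already carries the bucket policy that CloudTrail requires (the console adds that policy for you; the SDK does not), and it skips the S3 data events configuration from step 4.

// Sketch: create and start a CloudTrail trail with the AWS SDK for JavaScript.
// Assumes the target bucket exists and has the required CloudTrail bucket policy attached.
var AWS = require('aws-sdk');
var cloudtrail = new AWS.CloudTrail({region: 'us-east-1'}); // use your Region

var bucketName = 'ccoa-cloudtrail-ACCOUNTID'; // replace ACCOUNTID with your account id

cloudtrail.createTrail({Name: 'ccoa-cloudtrail', S3BucketName: bucketName}, function(err, data) {
  if (err) return console.log(err, err.stack);
  console.log('Created trail:', data.TrailARN);

  // Trails do not log until you explicitly start them.
  cloudtrail.startLogging({Name: 'ccoa-cloudtrail'}, function(err) {
    if (err) console.log(err, err.stack);
    else console.log('CloudTrail logging started');
  });
});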

Create an AWS Config Recorder

In this section, you’ll configure the settings for AWS Config, which includes turning on the Config recorder along with a delivery channel. If you’ve already configured AWS Config, this section is optional. Here are the steps:

  1. Go to the AWS Config console
  2. If it’s your first time using Config, click the Get Started button
  3. Select the Include global resources (e.g., AWS IAM resources) checkbox
  4. In the Amazon SNS topic section, select the Stream configuration changes and notifications to an Amazon SNS topic. checkbox
  5. Choose the Create a topic radio button in the Amazon SNS topic section
  6. In the Amazon S3 bucket section, select the Create a bucket radio button
  7. In the AWS Config role section, select the Use an existing AWS Config service-linked role radio button
  8. Click the Next button
  9. Click the Skip button on the AWS Config rules page
  10. Click the Confirm button on the Review page
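
If you would rather enable AWS Config from a script, the sketch below shows the equivalent SDK calls. The role ARN, bucket name, and SNS topic ARN are placeholders for resources you already have; this is a sketch of the API calls, not a drop-in replacement for the console wizard.

// Sketch: turn on the AWS Config recorder and delivery channel with the AWS SDK for JavaScript.
var AWS = require('aws-sdk');
var config = new AWS.ConfigService({region: 'us-east-1'}); // use your Region

var recorder = {
  name: 'default',
  // Placeholder: the AWS Config service-linked role in your account
  roleARN: 'arn:aws:iam::ACCOUNTID:role/aws-service-role/config.amazonaws.com/AWSServiceRoleForConfig',
  recordingGroup: {allSupported: true, includeGlobalResourceTypes: true}
};

var deliveryChannel = {
  name: 'default',
  s3BucketName: 'YOUR-CONFIG-BUCKET',                              // placeholder
  snsTopicARN: 'arn:aws:sns:us-east-1:ACCOUNTID:YOUR-CONFIG-TOPIC' // placeholder
};

config.putConfigurationRecorder({ConfigurationRecorder: recorder}, function(err) {
  if (err) return console.log(err, err.stack);
  config.putDeliveryChannel({DeliveryChannel: deliveryChannel}, function(err) {
    if (err) return console.log(err, err.stack);
    config.startConfigurationRecorder({ConfigurationRecorderName: recorder.name}, function(err) {
      if (err) console.log(err, err.stack);
      else console.log('AWS Config recorder started');
    });
  });
});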

Create an S3 Bucket in Violation of Compliance Rules

In this section, you’ll create an S3 bucket with a policy that allows anyone to write objects to the bucket. We’re doing this purely for demonstration purposes, since you should not grant any kind of public access to your S3 bucket. Here are the steps:

  1. Go to the S3 console
  2. Click the Create bucket button
  3. Enter ccoa-s3-write-violation-ACCOUNTID in the Bucket name field (replacing ACCOUNTID with your account id)
  4. Click Next on the Configure Options screen
  5. Unselect the Block all public access checkbox and click Next on the Set Permissions screen
  6. Click Create bucket on the Review screen
  7. Select the ccoa-s3-write-violation-ACCOUNTID bucket and choose the Permissions tab
  8. Click on Bucket Policy and paste the contents from below into the Bucket policy editor text area (replace both MYBUCKETNAME values with the ccoa-s3-write-violation-ACCOUNTID bucket you just created)
  9. Click the Save button

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:Abort*",
        "s3:DeleteObject",
        "s3:GetBucket*",
        "s3:GetObject",
        "s3:List*",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::MYBUCKETNAME",
        "arn:aws:s3:::MYBUCKETNAME/*"
      ]
    }
  ]
}

You’ll receive this message: You have provided public access to this bucket. We highly recommend that you never grant any kind of public access to your S3 bucket.

Create an IAM Policy and Role for Lambda

In this section, you’ll create an IAM Policy and Role that establish the permissions the Lambda function will use. Here are the steps:

  1. Go to the IAM console
  2. Click on Policies
  3. Click Create policy
  4. Click the JSON tab
  5. Copy and replace the contents below into the JSON text area
  6. Click the Review policy button
  7. Enter ccoa-s3-write-policy in the Name field
  8. Click the Create policy button
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:DeleteBucketPolicy",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}
  1. Click on Roles
  2. Click the Create role button
  3. Click Lambda from the Choose the service that will use this role section
  4. Click the Next: Permissions button
  5. Type ccoa-s3-write-policy in the Filter policies search field
  6. Select the checkbox next to ccoa-s3-write-policy and click on the Next: Tags button
  7. Click the Next: Review button
  8. Enter ccoa-s3-write-role in the Role name field
  9. Click the Create role button
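
The same policy and role can be created with the AWS SDK for JavaScript if you prefer scripting over the console. This is a sketch only: the policy document mirrors the JSON above, and the trust policy simply allows Lambda to assume the role.

// Sketch: create ccoa-s3-write-policy and ccoa-s3-write-role programmatically.
var AWS = require('aws-sdk');
var iam = new AWS.IAM();

var policyDocument = JSON.stringify({
  Version: '2012-10-17',
  Statement: [{
    Effect: 'Allow',
    Action: ['s3:DeleteBucketPolicy', 'logs:CreateLogGroup', 'logs:CreateLogStream', 'logs:PutLogEvents'],
    Resource: '*'
  }]
});

var trustPolicy = JSON.stringify({
  Version: '2012-10-17',
  Statement: [{Effect: 'Allow', Principal: {Service: 'lambda.amazonaws.com'}, Action: 'sts:AssumeRole'}]
});

iam.createPolicy({PolicyName: 'ccoa-s3-write-policy', PolicyDocument: policyDocument}, function(err, data) {
  if (err) return console.log(err, err.stack);
  iam.createRole({RoleName: 'ccoa-s3-write-role', AssumeRolePolicyDocument: trustPolicy}, function(err) {
    if (err) return console.log(err, err.stack);
    iam.attachRolePolicy({RoleName: 'ccoa-s3-write-role', PolicyArn: data.Policy.Arn}, function(err) {
      if (err) console.log(err, err.stack);
      else console.log('Role and policy created');
    });
  });
});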

Create a Lambda Function to Auto-remediate S3 Buckets

In this section, you’ll create a Lambda function that is written in Node.js and performs the automatic remediation by deleting the S3 Bucket Policy associated with the bucket. Here are the steps:

  1. Go to the Lambda console
  2. Click the Create function button
  3. Keep the Author from scratch radio button selected and enter ccoa-s3-write-remediation in the Function name field
  4. Choose Node.js 10.x for the Runtime
  5. Under Permissions, choose Choose or create an execution role
  6. Under Execution role, choose Use an existing role
  7. In the Existing role dropdown, choose ccoa-s3-write-role
  8. Click the Create function button
  9. Scroll to the Function code section and within the index.js pane, copy and replace the code from below
var AWS = require('aws-sdk');

exports.handler = function(event) {
  console.log("request:", JSON.stringify(event, undefined, 2));

  var s3 = new AWS.S3({apiVersion: '2006-03-01'});
  var resource = event['detail']['requestParameters']['evaluations'];
  console.log("evaluations:", JSON.stringify(resource, null, 2));

  // Delete the bucket policy on every bucket reported as NON_COMPLIANT
  for (var i = 0, len = resource.length; i < len; i++) {
    if (resource[i]["complianceType"] == "NON_COMPLIANT") {
      console.log(resource[i]["complianceResourceId"]);
      var params = {
        Bucket: resource[i]["complianceResourceId"]
      };

      s3.deleteBucketPolicy(params, function(err, data) {
        if (err) console.log(err, err.stack); // an error occurred
        else     console.log(data);           // successful response
      });
    }
  }
};
  10. Click the Save button
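
Before wiring the function to Config and CloudWatch Events, you can smoke-test it locally with an event shaped like the one the rule will deliver. The snippet below is a sketch: it assumes you saved the function code as index.js in the same directory and have AWS credentials in your environment, and the evaluation values are illustrative, not a captured event.

// Local smoke test for the remediation handler (for example: node test-event.js).
// The event mirrors the shape forwarded by the CloudWatch Event rule later in this post:
// detail.requestParameters.evaluations[] with complianceType and complianceResourceId.
var handler = require('./index').handler;

var sampleEvent = {
  detail: {
    requestParameters: {
      evaluations: [
        {
          complianceType: 'NON_COMPLIANT',
          complianceResourceId: 'ccoa-s3-write-violation-ACCOUNTID' // bucket whose policy gets deleted
        }
      ]
    }
  }
};

handler(sampleEvent); // calls s3.deleteBucketPolicy against the bucket above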

Create an AWS Config Rule

In this section, you’ll create an AWS Config Rule that uses a Managed Config Rule to detect when there are S3 buckets that allow public writes. The Managed Config Rule runs a Lambda function to detect when S3 buckets are not in compliance. Here are the steps:

  1. Go to the Config console
  2. Click Rules
  3. Click the Add rule button
  4. In the filter box, type s3-bucket-public-write-prohibited
  5. Choose the s3-bucket-public-write-prohibited rule
  6. Click on the Remediation action dropdown within the Choose remediation action section
  7. Choose the AWS-PublishSNSNotification remediation in the dropdown
  8. Click Yes in the Auto remediation field
  9. In the Parameters field, enter arn:aws:iam::ACCOUNTID:role/aws-service-role/ssm.amazonaws.com/AWSServiceRoleForAmazonSSM in the AutomationAssumeRole field (replacing ACCOUNTID with your AWS account id)
  10. In the Parameters field, enter s3-bucket-public-write-prohibited violated in the Message field
  11. In the Parameters field, enter arn:aws:sns:us-east-1:ACCOUNTID:ccoa-awsconfig-ACCOUNTID in the TopicArn field (replacing ACCOUNTID with your AWS account id)
  12. Click the Save button
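
If you script this step instead of using the console, the managed rule itself can be enabled with a single putConfigRule call, as sketched below. The remediation action you configured in steps 6 through 11 is a separate feature and is not included in this sketch.

// Sketch: enable the S3_BUCKET_PUBLIC_WRITE_PROHIBITED managed rule with the AWS SDK for JavaScript.
var AWS = require('aws-sdk');
var config = new AWS.ConfigService({region: 'us-east-1'}); // use your Region

var rule = {
  ConfigRuleName: 's3-bucket-public-write-prohibited',
  Source: {
    Owner: 'AWS',                                        // managed rule
    SourceIdentifier: 'S3_BUCKET_PUBLIC_WRITE_PROHIBITED'
  },
  Scope: {ComplianceResourceTypes: ['AWS::S3::Bucket']}  // evaluate S3 buckets only
};

config.putConfigRule({ConfigRule: rule}, function(err) {
  if (err) console.log(err, err.stack);
  else console.log('Config rule created');
});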

Create a CloudWatch Event Rule

In this section, you’ll create an Amazon CloudWatch Event Rule which monitors when the S3_BUCKET_PUBLIC_WRITE_PROHIBITED Config Rule is deemed noncompliant. Here are the steps:

  1. Go to the CloudWatch console
  2. Click on Rules
  3. Click the Create rule button
  4. Choose Event pattern in the Event Source section
  5. In the Event Pattern Preview section, click Edit
  6. Copy the contents from below and replace in the Event pattern text area
  7. Click the Save button
  8. Click the Add target button
  9. Choose Lambda function
  10. Select the ccoa-s3-write-remediation function you’d previously created.
  11. Click the Configure details button
  12. Enter ccoa-s3-write-cwe in the Name field
  13. Click the Create rule button

 

{
  "source":[
    "aws.config"
  ],
  "detail":{
    "requestParameters":{
      "evaluations":{
        "complianceType":[
          "NON_COMPLIANT"
        ]
      }
    },
    "additionalEventData":{
      "managedRuleIdentifier":[
        "S3_BUCKET_PUBLIC_WRITE_PROHIBITED"
      ]
    }
  }
}
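
The console steps above can also be expressed as SDK calls. The sketch below reuses the same event pattern, creates the rule, grants CloudWatch Events permission to invoke the function, and then adds the function as the rule target. The function ARN is a placeholder; replace the Region and ACCOUNTID to match your account.

// Sketch: create the CloudWatch Event rule and target with the AWS SDK for JavaScript.
var AWS = require('aws-sdk');
var events = new AWS.CloudWatchEvents({region: 'us-east-1'});
var lambda = new AWS.Lambda({region: 'us-east-1'});

var functionArn = 'arn:aws:lambda:us-east-1:ACCOUNTID:function:ccoa-s3-write-remediation'; // placeholder

var eventPattern = JSON.stringify({
  source: ['aws.config'],
  detail: {
    requestParameters: {evaluations: {complianceType: ['NON_COMPLIANT']}},
    additionalEventData: {managedRuleIdentifier: ['S3_BUCKET_PUBLIC_WRITE_PROHIBITED']}
  }
});

events.putRule({Name: 'ccoa-s3-write-cwe', EventPattern: eventPattern}, function(err, rule) {
  if (err) return console.log(err, err.stack);

  // Allow CloudWatch Events to invoke the function, then register it as the target.
  lambda.addPermission({
    FunctionName: 'ccoa-s3-write-remediation',
    StatementId: 'ccoa-s3-write-cwe-invoke',
    Action: 'lambda:InvokeFunction',
    Principal: 'events.amazonaws.com',
    SourceArn: rule.RuleArn
  }, function(err) {
    if (err) return console.log(err, err.stack);
    events.putTargets({
      Rule: 'ccoa-s3-write-cwe',
      Targets: [{Id: 'remediation-lambda', Arn: functionArn}]
    }, function(err) {
      if (err) console.log(err, err.stack);
      else console.log('Event rule and target created');
    });
  });
});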

View Config Rules

In this section, you’ll verify that the Config Rule has been triggered and that the S3 bucket resource has been automatically remediated:

  1. Go to the Config console
  2. Click on Rules
  3. Select the s3-bucket-public-write-prohibited rule
  4. Click the Re-evaluate button
  5. Go back to Rules in the Config console
  6. Go to the S3 console, choose the ccoa-s3-write-violation-ACCOUNTID bucket, and verify that the bucket policy has been removed
  7. Go back to Rules in the Config console and confirm that the s3-bucket-public-write-prohibited rule is Compliant

Summary

In this post, you learned how to set up a robust automated compliance and remediation infrastructure for non-compliant AWS resources using services such as S3, AWS Config & Config Rules, Amazon CloudWatch Event Rules, AWS Lambda, IAM, and others. By leveraging this approach, your AWS infrastructure is capable of rapidly scaling resources while ensuring these resources are always in compliance without humans needing to manually intervene.

This general approach can be replicated for many other types of security and compliance checks using managed and custom config rules along with custom remediations. This way your compliance remains in lockstep with the rest of your AWS infrastructure.


from Blog – Stelligent

Running Java applications on Amazon EC2 A1 instances with Amazon Corretto

Running Java applications on Amazon EC2 A1 instances with Amazon Corretto

This post is contributed by Jeff Underhill | EC2 Principal Business Development Manager

Amazon Corretto is a no-cost, multiplatform, production-ready distribution of the Open Java Development Kit (OpenJDK). Production-ready Linux builds of JDK 8 and JDK 11 for the 64-bit Arm architecture were released on September 17, 2019. Scale-out Java applications can get significant cost savings using the Arm-based Amazon EC2 A1 instances. Read on to learn more about Amazon EC2 A1 instances, Amazon Corretto, and how support for 64-bit Arm in Amazon Corretto is a significant development for Java developers building cloud native applications.

What are Amazon EC2 A1 instances?

Last year at re:Invent Amazon Web Services (AWS) introduced Amazon EC2 A1 instances powered by AWS Graviton Processors that feature 64-bit Arm Neoverse cores and custom silicon designed by AWS. The A1 instances deliver up to 45% cost savings for scale-out and Arm-based applications such as web servers, containerized microservices, caching fleets, distributed data stores, and Arm-native software development workflows that are supported by the extensive, and growing Arm software ecosystem.

Why Java and Amazon EC2 A1 instances?
The majority of customers we’ve spoken to have experienced a seamless transition and are realizing cost benefits with A1 instances, especially customers that are transitioning architecture-agnostic applications that often run seamlessly on Arm-based platforms. Today, there are many examples of architecture-agnostic programming languages, such as PHP, Python, node.js, and GoLang, and all of these are well supported on Arm-based A1 instances. The Java programming language has been around for almost 25 years and is one of the most broadly adopted programming languages. The TIOBE Programming Community Index (as of Sept’19) shows Java ranked as the #1 or #2 programming language between 2004-2019, and the annual GitHub Octoverse report shows Java has consistently ranked #2 between 2014 and 2018. Java was designed to have as few implementation dependencies as possible, which enables portability of Java applications regardless of processor architectures. This portability enables choice of how and where to run your Java-based workloads.

What is Amazon Corretto?
Java is one of the most popular languages in use by AWS customers, and AWS is committed to supporting Java and keeping it free. That’s why AWS introduced Amazon Corretto, a no-cost, multi-platform, production-ready distribution of OpenJDK from Amazon. AWS runs Corretto internally on thousands of production services, and has committed to distributing security updates to Corretto 8 at no cost until at least June, 2023. Amazon Corretto is available for both JDK 8 and JDK 11 – you can learn more in the documentation and if you’re curious about what goes into building Java in the open and specifically the Amazon Corretto OpenJDK distribution then check out this OSCON video.

What’s new?
Java provides you with the choice of how and where to run your applications, and Amazon EC2 provides you with the broadest and deepest portfolio of compute instances available in the cloud. AWS wants its customers to be able to run their workloads in the most efficient way as defined by their specific use case and business requirements and that includes providing a consistent platform experience across the Amazon EC2 instance portfolio. We’re fortunate to have James Gosling, the designer of Java, as a member of the Amazon team, and he recently took to Twitter to announce the General Availability (GA) of Amazon Corretto for the Arm architecture:

For those of you that like playing with Linux on ARM, the Corretto build for ARM64 is now GA. Fully production ready. Both JDK8 and JDK11

 

Open Source – “It takes a village”
It’s important to recognize the significance of Open Source Software (OSS) and the community of people involved to develop, build and test a production ready piece of software as complex as Java. So, because I couldn’t have said it better myself, here’s a tweet from my colleague Matt Wilson who took a moment to recognize all the hard work of the Red Hat and Java community developers:

Many thanks to all the hard work from @redhatopen developers, and all the #OpenSource Java community that played a part in making 64 bit Arm support in OpenJDK distributions a reality!

Ready to cost optimize your scale-out Java applications?
With the 8.222.10.4 and 11.0.4.11.1 releases of Amazon Corretto that became generally available on September 17, 2019, new and existing Amazon Corretto users can now deploy their production Java applications on Arm-based EC2 A1 instances. If you have scale-out applications and are interested in optimizing cost, then you’ll want to take the A1 instances for a test drive to see if they’re a fit for your specific use case. You can learn more at the Amazon Corretto website, and the downloads are all available for Amazon Corretto 8 and Amazon Corretto 11; if you’re using containers, here’s the Docker Official image. If you have any questions about your own workloads running on Amazon EC2 A1 instances, contact us at [email protected].

from AWS Compute Blog

Building a pocket platform-as-a-service with Amazon Lightsail

Building a pocket platform-as-a-service with Amazon Lightsail

This post was written by Robert Zhu, a principal technical evangelist at AWS and a member of the GraphQL Working Group. 

When you start a new web-based project, you figure out what kind of infrastructure you need. For my projects, I prioritize simplicity, flexibility, value, and on-demand capacity, and find myself quickly needing the following features:

  • DNS configuration
  • SSL support
  • Subdomain to a service
  • SSL reverse proxy to localhost (similar to ngrok and serveo)
  • Automatic deployment after a commit to the source repo (nice to have)

New projects have different requirements compared to mature projects

Amazon Lightsail is perfect for building a simple “pocket platform” that provides all these features. It’s cheap and easy for beginners and provides a friendly interface for managing virtual machines and DNS. This post shows step-by-step how to assemble a pocket platform on Amazon Lightsail.

Walkthrough

The following steps describe the process. If you prefer to learn by watching videos instead, view the steps by watching the following: Part 1, Part 2, Part 3.

Prerequisites

You should be familiar with: Linux, SSH, SSL, Docker, Nginx, HTTP, and DNS.

Steps

Use the following steps to assemble a pocket platform.

Creating a domain name and static IP

First, you need a domain name for your project. You can register your domain with any domain name registration service, such as Amazon Route53.

  1. After your domain is registered, open the Lightsail console, choose the Networking tab, and choose Create static IP.

Lightsail console networking tab

  2. On the Create static IP page, give the static IP a name you can remember and don’t worry about attaching it to an instance just yet. Choose Create DNS zone.

 

  3. On the Create a DNS zone page, enter your domain name and then choose Create DNS zone. For this post, I use the domain “raccoon.news.”

DNS zone in Lightsail with two A records

  4. Choose Add Record and create two A records—“@.raccoon.news” and “raccoon.news”—both resolving to the static IP address you created earlier. Then, copy the values for the Lightsail name servers at the bottom of the page. Go back to your domain name provider, and edit the name servers to point to the Lightsail name servers. Since I registered my domain with Route53, here’s what it looks like:

Changing name servers in Route53

Note: If you registered your domain with Route53, make sure you change the name server values under “domain registration,” not “hosted zones.” You also need to delete the hosted zone that Route53 automatically creates for your domain.

Setting up your Lightsail instance

While you wait for your DNS changes to propagate, set up your Lightsail instance.

  1. In the Lightsail console, create a new instance and select Ubuntu 18.04.

Choose OS Only and Ubuntu 18.04 LTS

For this post, you can use the cheapest instance. However, when you run anything in production, make sure you choose an instance with enough capacity for your workload.

  2. After the instance launches, select it, then click on the Networking tab and open two additional TCP ports: 443 and 2222. Then, attach the static IP allocated earlier.
  3. To connect to the Lightsail instance using SSH, download the SSH key, and save it to a friendly path, for example: ~/ls_ssh_key.pem.

Click the download link to download the SSH key

  • Restrict permissions for your SSH key:

chmod 400 ~/ls_ssh_key.pem

  • Connect to the instance using SSH:

ssh -i ~/ls_ssh_key.pem ubuntu@STATIC_IP

  4. After you connect to the instance, install Docker to help manage deployment and configuration:

sudo apt-get update && sudo apt-get install docker.io
sudo systemctl start docker
sudo systemctl enable docker
docker run hello-world

  5. After Docker is installed, set up a gateway using the nginx-proxy container. This container lets you route traffic to other containers by providing the “VIRTUAL_HOST” environment variable. Conveniently, nginx-proxy comes with an SSL companion, nginx-proxy-letsencrypt, which uses Let’s Encrypt.

# start the reverse proxy container
sudo docker run --detach \
    --name nginx-proxy \
    --publish 80:80 \
    --publish 443:443 \
    --volume /etc/nginx/certs \
    --volume /etc/nginx/vhost.d \
    --volume /usr/share/nginx/html \
    --volume /var/run/docker.sock:/tmp/docker.sock:ro \
    jwilder/nginx-proxy

# start the letsencrypt companion
sudo docker run --detach \
    --name nginx-proxy-letsencrypt \
    --volumes-from nginx-proxy \
    --volume /var/run/docker.sock:/var/run/docker.sock:ro \
    --env "DEFAULT_EMAIL=YOUREMAILHERE" \
    jrcs/letsencrypt-nginx-proxy-companion

# start a demo web server under a subdomain
sudo docker run --detach \
    --name nginx \
    --env "VIRTUAL_HOST=test.EXAMPLE.COM" \
    --env "LETSENCRYPT_HOST=test.EXAMPLE.COM" \
    nginx

Pay special attention to setting a valid email for the DEFAULT_EMAIL environment variable on the proxy companion; otherwise, you’ll need to specify the email whenever you start a new container. If everything went well, you should be able to navigate to https://test.EXAMPLE.COM and see the nginx default content with a valid SSL certificate that has been auto-generated by Let’s Encrypt.

A publicly accessible URL served from our Lightsail instance with SSL support

Troubleshooting:

  • In the Lightsail console, make sure that Port 443 is open.
  • Let’s Encrypt rate limiting (for reference if you encounter issues with SSL certificate issuance): https://letsencrypt.org/docs/rate-limits/.

Deploying a localhost proxy with SSL

Most developers prefer to code on a dev machine (laptop or desktop) because they can access the file system, use their favorite IDE, recompile, debug, and more. Unfortunately, developing on a dev machine can introduce bugs due to differences from the production environment. Also, certain services (for example, Alexa Skills or GitHub Webhooks) require SSL to work, which can be annoying to configure on your local machine.

For this post, you can use an SSL reverse proxy to make your local dev environment resemble production from the browser’s point of view. This technique also helps your test application make API requests to production endpoints that enforce Cross-Origin Resource Sharing (CORS) restrictions. While it’s not a perfect solution, it takes you one step closer toward a frictionless dev/test feedback loop. You may have used services like ngrok and serveo for this purpose. By running a reverse proxy, you won’t need to spread your domain and SSL settings across multiple services.

To run a reverse proxy, create an SSH reverse tunnel. After the reverse tunnel SSH session is initiated, all network requests to the specified port on the host are proxied to your dev machine. However, since your Lightsail instance is already using port 22 for VPS management, you need a different SSH port (use 2222 from earlier). To keep everything organized, run the SSH server for port 2222 inside a special proxy container. The following diagram shows this solution.

Diagram of how an SSL reverse proxy works with SSH tunneling

Using Dockerize an SSH service as a starting point, I created a repository with a working Dockerfile and nginx config for reference. Here are the summary steps:

git clone https://github.com/robzhu/nginx-local-tunnel
cd nginx-local-tunnel

docker build -t {DOCKERUSER}/dev-proxy . --build-arg ROOTPW={PASSWORD}

# start the proxy container
# Note, 2222 is the port we opened on the instance earlier.
docker run --detach -p 2222:22 \
    --name dev-proxy \
    --env "VIRTUAL_HOST=dev.EXAMPLE.com" \
    --env "LETSENCRYPT_HOST=dev.EXAMPLE.com" \
    {DOCKERUSER}/dev-proxy

# Ports explained:
# 3000 refers to the port that your app is running on localhost.
# 2222 is the forwarded port on the host that we use to directly SSH into the container.
# 80 is the default HTTP port, forwarded from the host
ssh -R :80:localhost:3000 -p 2222 [email protected]

# Start sample app on localhost
cd node-hello && npm i
nodemon main.js

# Point browser to https://dev.EXAMPLE.com

The reverse proxy subdomain works only as long as the reverse proxy SSH connection remains open. If there is no SSH connection, you should see an nginx gateway error:

Nginx will return 502 if you try to access the reverse proxy without running the SSH tunnel

While this solution is handy, be extremely careful, as it could expose your work-in-progress to the internet. Consider adding additional authorization logic and settings for allowing/denying specific IPs.

Setting up automatic deployment

Finally, build an automation workflow that watches for commits on a source repository, builds an updated container image, and re-deploys the container on your host. There are many ways to do this, but here’s the combination I’ve selected for simplicity:

  1. First, create a GitHub repository to host your application source code. For demo purposes, you can clone my express hello-world example. On the Docker hub page, create a new repository, click the GitHub icon, and select your repository from the dropdown list.

Create GitHub repo to host your application source code

  2. Now Docker Hub watches for commits to the repo and builds a new image with the “latest” tag in response. After the image is available, start the container as follows:

docker run --detach \
    --name app \
    --env "VIRTUAL_HOST=app.raccoon.news" \
    --env "LETSENCRYPT_HOST=app.raccoon.news" \
    robzhu/express-hello

  3. Finally, use Watchtower to poll Docker Hub and update the “app” container whenever a new image is detected:

docker run -d \
    --name watchtower \
    -v /var/run/docker.sock:/var/run/docker.sock \
    containrrr/watchtower \
    --interval 10 \
    APPCONTAINERNAME

 

Summary

Your Pocket PaaS is now complete! As long as you deploy new containers and add the VIRTUAL_HOST and LETSENCRYPT_HOST environment variables, you get automatic subdomain routing and SSL termination. With SSH reverse tunneling, you can develop on your local dev machine using your favorite IDE and test/share your app at https://dev.EXAMPLE.COM.

Because this is a public URL with SSL, you can test Alexa Skills, GitHub Webhooks, CORS settings, PWAs, and anything else that requires SSL. Once you’re happy with your changes, a git commit triggers an automated rebuild of your Docker image, which is automatically redeployed by Watchtower.

I hope this information was useful. Thoughts? Leave a comment or direct-message me on Twitter: @rbzhu.

from AWS Compute Blog

Update: Issue affecting HashiCorp Terraform resource deletions after the VPC Improvements to AWS Lambda

Update: Issue affecting HashiCorp Terraform resource deletions after the VPC Improvements to AWS Lambda

On September 3, 2019, we announced an exciting update that improves the performance, scale, and efficiency of AWS Lambda functions when working with Amazon VPC networks. You can learn more about the improvements in the original blog post. These improvements represent a significant change in how elastic network interfaces (ENIs) are configured to connect to your VPCs. With this new model, we identified an issue where VPC resources, such as subnets, security groups, and VPCs, can fail to be destroyed via HashiCorp Terraform. More information about the issue can be found here. In this post, we help you identify whether this issue affects you and walk through the steps to resolve it.

How do I know if I’m affected by this issue?

This issue only affects you if you use HashiCorp Terraform to destroy environments. Versions of Terraform AWS Provider that are v2.30.0 or older are impacted by this issue. With these versions you may encounter errors when destroying environments that contain AWS Lambda functions, VPC subnets, security groups, and Amazon VPCs. Typically, terraform destroy fails with errors similar to the following:

Error deleting subnet: timeout while waiting for state to become 'destroyed' (last state: 'pending', timeout: 20m0s)

Error deleting security group: DependencyViolation: resource sg-<id> has a dependent object
        	status code: 400, request id: <guid>

Depending on which AWS Regions the VPC improvements have been rolled out to, you may encounter these errors in some Regions and not others.

How do I resolve this issue if I am affected?

You have two options to resolve this issue. The recommended option is to upgrade your Terraform AWS Provider to v2.31.0 or later. To learn more about upgrading the Provider, visit the Terraform AWS Provider Version 2 Upgrade Guide. You can find information and source code for the latest releases of the AWS Provider on this page. The latest version of the Terraform AWS Provider contains a fix for this issue as well as changes that improve the reliability of the environment destruction process. We highly recommend that you upgrade the Provider version as the preferred option to resolve this issue.

If you are unable to upgrade the Provider version, you can mitigate the issue by making changes to your Terraform configuration. You need to make the following sets of changes to your configuration:

  1. Add an explicit dependency, using a depends_on argument, to the aws_security_group and aws_subnet resources that you use with your Lambda functions. The dependency has to be added on the aws_security_group or aws_subnet and target the aws_iam_policy resource associated with the IAM role configured on the Lambda function. See the example below for more details.
  2. Override the delete timeout for all aws_security_group and aws_subnet resources. The timeout should be set to 40 minutes.

The following configuration file shows an example where these changes have been made (scroll to see the full code):

provider "aws" {
    region = "eu-central-1"
}
 
resource "aws_iam_role" "lambda_exec_role" {
  name = "lambda_exec_role"
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
 
data "aws_iam_policy" "LambdaVPCAccess" {
  arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole"
}
 
resource "aws_iam_role_policy_attachment" "sto-lambda-vpc-role-policy-attach" {
  role       = "${aws_iam_role.lambda_exec_role.name}"
  policy_arn = "${data.aws_iam_policy.LambdaVPCAccess.arn}"
}
 
resource "aws_security_group" "allow_tls" {
  name        = "allow_tls"
  description = "Allow TLS inbound traffic"
  vpc_id      = "vpc-<id>"
 
  ingress {
    # TLS (change to whatever ports you need)
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    # Please restrict your ingress to only necessary IPs and ports.
    # Opening to 0.0.0.0/0 can lead to security vulnerabilities.
    cidr_blocks = ["0.0.0.0/0"]
  }
 
  egress {
    from_port       = 0
    to_port         = 0
    protocol        = "tcp"
    cidr_blocks     = ["0.0.0.0/0"]
  }
  
  timeouts {
    delete = "40m"
  }
  depends_on = ["aws_iam_role_policy_attachment.sto-lambda-vpc-role-policy-attach"]  
}
 
resource "aws_subnet" "main" {
  vpc_id     = "vpc-<id>"
  cidr_block = "172.31.68.0/24"

  timeouts {
    delete = "40m"
  }
  depends_on = ["aws_iam_role_policy_attachment.sto-lambda-vpc-role-policy-attach"]
}
 
resource "aws_lambda_function" "demo_lambda" {
    function_name = "demo_lambda"
    handler = "index.handler"
    runtime = "nodejs10.x"
    filename = "function.zip"
    source_code_hash = "${filebase64sha256("function.zip")}"
    role = "${aws_iam_role.lambda_exec_role.arn}"
    vpc_config {
     subnet_ids         = ["${aws_subnet.main.id}"]
     security_group_ids = ["${aws_security_group.allow_tls.id}"]
  }
}

The key block to note here is the following, which can be seen in both the “allow_tls” security group and “main” subnet resources:

timeouts {
  delete = "40m"
}
depends_on = ["aws_iam_role_policy_attachment.sto-lambda-vpc-role-policy-attach"]

These changes should be made to your Terraform configuration files before destroying your environments for the first time.

Can I delete resources remaining after a failed destroy operation?

Destroying environments without upgrading the provider or making the configuration changes outlined above may result in failures. As a result, you may have ENIs in your account that remain due to a failed destroy operation. These ENIs can be manually deleted a few minutes after the Lambda functions that use them have been deleted (typically within 40 minutes). Once the ENIs have been deleted, you can re-run terraform destroy.
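
If you want to confirm whether leftover ENIs are still present before re-running the destroy, the sketch below lists network interfaces whose description looks like a Lambda-managed VPC ENI. The description filter is an assumption about how Lambda names these interfaces; adjust it (or simply review all ENIs in the VPC) if it doesn’t match what you see in your account.

// Sketch: list ENIs that appear to be Lambda-managed, using the AWS SDK for JavaScript.
var AWS = require('aws-sdk');
var ec2 = new AWS.EC2({region: 'eu-central-1'}); // match the region used by your Terraform provider

ec2.describeNetworkInterfaces({
  Filters: [{Name: 'description', Values: ['AWS Lambda VPC ENI*']}] // assumed description prefix
}, function(err, data) {
  if (err) return console.log(err, err.stack);
  data.NetworkInterfaces.forEach(function(eni) {
    console.log(eni.NetworkInterfaceId, eni.Status, eni.Description);
  });
});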

from AWS Compute Blog

Improving the Getting Started experience with AWS Lambda

Improving the Getting Started experience with AWS Lambda

A common question from developers is, “How do I get started with creating serverless applications?” Frequently, I point developers to the AWS Lambda console where they can create a new Lambda function and immediately see it working.

While you can learn the basics of a Lambda function this way, it does not encompass the full serverless experience. It does not allow you to take advantage of best practices like infrastructure as code (IaC) or continuous integration and continuous delivery (CI/CD). A full-on serverless application could include a combination of services like Amazon API Gateway, Amazon S3, and Amazon DynamoDB.

To help you start right with serverless, AWS has added a Create application experience to the Lambda console. This enables you to create serverless applications from ready-to-use sample applications, which follow these best practices:

  • Use infrastructure as code (IaC) for defining application resources
  • Provide a continuous integration and continuous deployment (CI/CD) pipeline for deployment
  • Exemplify best practices in serverless application structure and methods

IaC

Using IaC allows you to automate deployment and management of your resources. When you define and deploy your IaC architecture, you can standardize infrastructure components across your organization. You can rebuild your applications quickly and consistently without having to perform manual actions. You can also enforce best practices such as code reviews.

When you’re building serverless applications on AWS, you can use AWS CloudFormation directly, or choose the AWS Serverless Application Model, also known as AWS SAM. AWS SAM is an open source framework for building serverless applications that makes it easier to build applications quickly. AWS SAM provides a shorthand syntax to express APIs, functions, databases, and event source mappings. Because AWS SAM is built on CloudFormation, you can specify any other AWS resources using CloudFormation syntax in the same template.

Through this new experience, AWS provides an AWS SAM template that describes the entire application. You have instant access to modify the resources and security as needed.

CI/CD

When editing a Lambda function in the console, it’s live the moment that the function is saved. This works when developing against test environments, but risks introducing untested, faulty code in production environments. That’s a stressful atmosphere for developers with the unneeded overhead of manually testing code on each change.

Developers say that they are looking for an automated process for consistently testing and deploying reliable code. What they need is a CI/CD pipeline.

CI/CD pipelines are more than just a convenience; they can be critical in helping development teams to be successful. CI/CD pipelines provide code integration, testing, multiple environment deployments, notifications, rollbacks, and more. The functionality depends on how you choose to configure it.

When you create a new application through the Lambda console, you create a CI/CD pipeline to provide a framework for automated testing and deployment. The pipeline includes the following resources:

Best practices

Like any other development pattern, there are best practices for serverless applications. These include testing strategies, local development, IaC, and CI/CD. When you create a Lambda function using the console, most of this is abstracted away. A common request from developers learning about serverless is for opinionated examples of best practices.

When you choose Create application, the application uses many best practices, including:

  • Managing IaC architectures
  • Managing deployment with a CI/CD pipeline
  • Runtime-specific test examples
  • Runtime-specific dependency management
  • A Lambda execution role with permissions boundaries
  • Application security with managed policies

Create an application

Now, let’s walk through creating your first application.

  1. Open the Lambda console, and choose Applications, Create application.
  2. Choose Serverless API backend. The next page shows the architecture, services used, and development workflow of the chosen application.
  3. Choose Create and then configure your application settings.
    • For Application name and Application description, enter values.
    • For Runtime, the preview supports Node.js 10.x. Stay tuned for more runtimes.
    • For Source Control Service, I chose CodeCommit for this example, but you can choose either. If you choose GitHub, you are asked to connect to your GitHub account for authorization.
    • For Repository Name, feel free to use whatever you want.
    • Under Permissions, check Create roles and permissions boundary.
  4. Choose Create.

Exploring the application

That’s it! You have just created a new serverless application from the Lambda console. It takes a few moments for all the resources to be created. Take a moment to review what you have done so far.

Across the top of the application, you can see four tabs, as shown in the following screenshot:

  • Overview—Shows the current page, including a Getting started section, and application and toolchain resources of the application
  • Code—Shows the code repository and instructions on how to connect
  • Deployments—Links to the deployment pipeline and a deployment history.
  • Monitoring—Reports on the application health and performance

getting started dialog

The Resources section lists all the resources specific to the application. This application includes three Lambda functions, a DynamoDB table, and the API. The following screenshot shows the resources for this sample application.

resources view

Finally, the Infrastructure section lists all the resources for the CI/CD pipeline including the AWS Identity and Access Management (IAM) roles, the permissions boundary policy, the S3 bucket, and more. The following screenshot shows the resources for this sample application.

application view

About Permissions Boundaries

This new Create application experience utilizes an IAM permissions boundary to help further secure the function that gets created and to prevent an overly permissive function policy from being created later on. The boundary is a separate policy that acts as a maximum bound on the permissions that an IAM policy for your function can be granted. This model allows developers to build out the security model of their application while still meeting requirements that are often put in place to prevent overly permissive policies, and it is considered a best practice. By default, the permissions boundary that is created limits the application access to just the resources that are included in the example template. In order to expand the permissions of the application, you’ll first need to extend what is defined in the permissions boundary to allow it.

A quick test

Now that you have an application up and running, try a quick test to see if it works.

  1. In the Lambda console, in the left navigation pane, choose Applications.
  2. For Applications, choose Start Right application.
  3. On the Endpoint details card, copy your endpoint.
  4. From a terminal, run the following command:
    curl -d '{"id":"id1", "name":"name1"}' -H "Content-Type: application/json" -X POST <YOUR-ENDPOINT>

You can find tips like this, and other getting started hints in the README.md file of your new serverless application.

Outside of the console

With the introduction of the Create application function, there is now a closer tie between the Lambda console and local development. Before this feature, you would get started in the Lambda console or with a framework like AWS SAM. Now, you can start the project in the console and then move to local development.

You have already walked through the steps of creating an application, now pull it local and make some changes.

  1. In the Lambda console, in the left navigation pane, choose Applications.
  2. Select your application from the list and choose the Code tab.
  3. If you used CodeCommit, choose Connect instructions to configure your local git client. To copy the URL, choose the SSH squares icon.
  4. If you used GitHub, click on the SSH squares icon.
  5. In a terminal window, run the following command:
    git clone <your repo>
  6. Update one of the Lambda function files and save it.
  7. In the terminal window, commit and push the changes:
    git commit -am "simple change"
    git push
  8. In the Lambda console, under Deployments, choose View in CodePipeline.

codepipeline pipeline

The build has started and the application is being deployed.

Caveats

submit feedback

This feature is currently available in US East (Ohio), US East (N. Virginia), US West (N. California), US West (Oregon), EU (Ireland), and Asia Pacific (Tokyo). This is a feature beta and as such, it is not a full representation of the final experience. We know this is limited in scope and request your feedback. Let us know your thoughts about any future enhancements you would like to see. The best way to give feedback is to use the feedback button in the console.

Conclusion

With the addition of the Create application feature, you can now start right with full serverless applications from within the Lambda console. This delivers the simplicity and ease of the console while still offering the power of an application built on best practices.

Until next time: Happy coding!

from AWS Compute Blog

IT Modernization and DevOps News Week in Review

IT Modernization and DevOps News Week in Review

2019 Week in Review 4

GitLab Commit, held in New York last week, brought us news that GitLab completed a $268 million Series E round of fundraising. The company reports that it is now valued at $2.75 billion and that it plans to invest the cash infusion in its DevOps platform offerings — including monitoring, security, and planning.

In addition, the firm announced GitLab 12.3, which seeks to underscore that point and includes a WAF built into the GitLab SDLC platform for monitoring and reporting of security concerns related to Kubernetes clusters. It also includes new analytics features and enhanced compliance capabilities.

To stay up-to-date on DevOps best practices, cloud security, and IT Modernization, subscribe to our blog here:
Subscribe to the Flux7 Blog

DevOps News

  • GitHub announced that they have integrated the Checks API with GitHub Pages, allowing operators to easily understand why a GitHub Pages build failed. And, as Pages is now a GitHub App, users are able to see build status via the Checks interface.
  • And in other Git news, Semmle revealed that it is joining GitHub. According to the companies, security researchers use Semmle to find vulnerabilities in code with simple declarative queries, which they then share with the Semmle community to improve the safety of code in other codebases.
  • At FutureStack in New York last week, New Relic announced the “industry’s first observability platform that is open, connected and programmable, enabling companies to create more perfect software.” The new capabilities include New Relic Logs, New Relic Traces, New Relic Metrics, and New Relic AI. In addition, the company unveiled the ability for customers and partners to build new applications via programming on the New Relic One Platform.
  • The Kubernetes project has delivered Kubernetes 1.16, which it reports consists of 31 enhancements including custom resources, a metrics registry, and significant changes to the Kubernetes API.

AWS News

  • Amazon has unveiled a new Step Functions feature in all Regions where Step Functions is offered: support for dynamic parallelism. According to AWS, this was probably the most requested feature for Step Functions, as it unblocks the implementation of new use cases and can help optimize existing ones. Specifically, Step Functions now supports a new Map state type for dynamic parallelism.
  • Heavy CloudFormation users will be happy to see that Amazon has expanded its capabilities; now operators can use CloudFormation templates to configure and provision additional features for Amazon EC2, Amazon ECS, Amazon ElastiCache, Amazon ES, and more. You can see the full list of new capabilities here.
  • AWS has brought to market a new Amazon WorkSpaces feature that will now restore a WorkSpace to a last known healthy state, allowing you to easily recover from inaccessible WorkSpaces caused by incompatible third-party updates.
  • AWS continues to evolve its IoT solution set with AWS IoT Greengrass 1.9.3. Now available, the release adds support for ARMv6 and new machine learning inference capabilities.
  • AWS introduced in preview the NoSQL Workbench for Amazon DynamoDB. The free, client-side application helps operators design and visualize data models, run queries on data, and generate code for applications; it is available for Windows and macOS.

Flux7 News
Flux7 has several upcoming educational opportunities. Please join us at:

  • Our October 9, 2019 Webinar, DevOps as a Foundation for Digital Transformation. This free 1-hour webinar from GigaOm Research brings together experts in DevOps, featuring GigaOm analyst Jon Collins and a special guest from Flux7, CEO and co-founder Aater Suleman. The discussion will focus on how to scale DevOps efforts beyond the pilot and deliver a real foundation for innovation and digital transformation.
  • The High-Performance Computing Immersion Day on October 11, 2019, in Houston, TX where attendees will gain in-depth, hands-on training with services such as Batch, Parallel Cluster, Elastic Fabric Adapter (EFA), FSX for Lustre, and more in an introductory session. Register Here Today.
  • The AWS Container Services workshop October 17, 2019 in San Antonio, TX. Designed for infrastructure administrators, developers, and architects, this workshop is designed as an introductory session that provides a mix of classroom training and hands-on labs. Register Here.

Subscribe to the Flux7 Blog

Written by Flux7 Labs

Flux7 is the only Sherpa on the DevOps journey that assesses, designs, and teaches while implementing a holistic solution for its enterprise customers, thus giving its clients the skills needed to manage and expand on the technology moving forward. Not a reseller or an MSP, Flux7 recommendations are 100% focused on customer requirements and creating the most efficient infrastructure possible that automates operations, streamlines and enhances development, and supports specific business goals.

from Flux7 DevOps Blog

DevOps Foundation for Digital Transformation: Live GigaOm Webinar

DevOps Foundation for Digital Transformation: Live GigaOm Webinar

Gigaom Webinar DevOps Foundations

Join us on October 9th at Noon from the comfort of your desk as we bring you a free 1-hour webinar on how to scale DevOps efforts beyond the pilot and deliver a real foundation for innovation and digital transformation. Hosted by GigaOm Research analyst Jon Collins and Aater Suleman, CEO and co-founder of DevOps consulting firm Flux7, the discussion will share how to effectively create a DevOps foundation and scale for success.

Specifically, attendees to the Webinar will learn:

  • Causes underlying some of the key challenges to scaling DevOps today
  • A starting baseline for achieving the benefits of an enterprise DevOps implementation
  • How to link DevOps improvements with digital transformation goals
  • Trade-offs between technical, process automation and skills improvements
  • Steps to delivering on the potential of DevOps and enterprise agility
  • How to make a real difference to their organizations, drawing from first-hand, in-the-field experience across multiple transformation projects

Register now to join GigaOm and Flux7 for this free expert webinar.

We all know the strategy — transform the enterprise to use digital technologies and deliver significantly increased levels of customer engagement and new business value through innovation. Key to this is DevOps effectiveness, that is, how fast an organization can take new ideas, translate them into software and deploy them into a live environment.

But many organizations struggle to get beyond the starting blocks, coming up against a legion of challenges from skills to existing systems and platforms. Innovation speed and efficiency suffer, costs rise and the potential value does not materialize. So, what to do? Join our Webinar and learn new skills for scaling DevOps efforts beyond the pilot to deliver a real foundation for innovation and digital transformation.

Join us and GigaOm as we explore how to scale a strong DevOps foundation across the enterprise to achieve key business benefits. Interested in additional reading before the presentation? Enjoy these resources on AWS DevOps, DevOps automation and Agile DevOps and be sure to subscribe to our DevOps blog below to stay on top of the latest trends and industry news.

from Flux7 DevOps Blog

Visualizing Sensor Data in Amazon QuickSight

Visualizing Sensor Data in Amazon QuickSight

This post is courtesy of Moheeb Zara, Developer Advocate, AWS Serverless

The Internet of Things (IoT) is a term used wherever physical devices are networked in some meaningful connected way. Often, this takes the form of sensor data collection and analysis. As the number of devices and size of data scales, it can become costly and difficult to keep up with demand.

Using AWS Serverless Application Model (AWS SAM), you can reduce the cost and time to market of an IoT solution. This guide demonstrates how to collect and visualize data from a low-cost, Wi-Fi connected IoT device using a variety of AWS services. Much of this can be accomplished within the AWS Free Usage Tier, which is necessary for the following instructions.

Services used

The following services are used in this example:

What’s covered in this post?

This post covers:

  • Connecting an Arduino MKR 1010 Wi-Fi device to AWS IoT Core.
  • Forwarding messages from an AWS IoT Core topic stream to a Lambda function.
  • Using a Kinesis Data Firehose delivery stream to store data in S3.
  • Analyzing and visualizing data stored in S3 using Amazon QuickSight.

Connect the device to AWS IoT Core using MQTT

The Arduino MKR 1010 is a low-cost, Wi-Fi enabled, IoT device, shown in the following image.

An Arduino MKR 1010 Wi-Fi microcontroller

Its analog and digital input and output pins can be used to read sensors or to write to actuators. Arduino provides a detailed guide on how to securely connect this device to AWS IoT Core. The following steps build upon it to push arbitrary sensor data to a topic stream and ultimately visualize that data using Amazon QuickSight.

  1. Start by following this comprehensive guide to using an Arduino MKR 1010 with AWS IoT Core. Upon completion, your device is connected to AWS IoT Core using MQTT (Message Queuing Telemetry Transport), a protocol for publishing and subscribing to messages using topics.
  2. In the Arduino IDE, choose File, Sketch, Include Library, and Manage Libraries.
  3. In the window that opens, search for ArduinoJson and select the library by Benoit Blanchon. Choose install.

4. Add #include <ArduinoJson.h> to the top of your sketch from the Arduino guide.

5. Modify the publishMessage() function with this code. It publishes a JSON message with two keys: time (ms) and the current value read from the first analog pin.

void publishMessage() {  
  Serial.println("Publishing message");

  // send message, the Print interface can be used to set the message contents
  mqttClient.beginMessage("arduino/outgoing");
  
  // create json message to send
  StaticJsonDocument<200> doc;
  doc["time"] = millis();
  doc["sensor_a0"] = analogRead(0);
  serializeJson(doc, mqttClient); // print to client
  
  mqttClient.endMessage();
}

6. Save and upload the sketch to your board.

Create a Kinesis Firehose delivery stream

Amazon Kinesis Data Firehose is a service that reliably loads streaming data into data stores, data lakes, and analytics tools. Amazon QuickSight requires a data store to create visualizations of the sensor data. This simple Kinesis Data Firehose delivery stream continuously uploads data to an S3 storage bucket. The next sections cover how to add records to this stream using a Lambda function.

  1. In the Kinesis Data Firehose console, create a new delivery stream, called SensorDataStream.
  2. Leave the default source as a Direct PUT or other sources and choose Next.
  3. On the next screen, leave all the default values and choose Next.
  4. Select Amazon S3 as the destination and create a new bucket with a unique name. This is where records are continuously uploaded so that they can be used by Amazon QuickSight.
  5. On the next screen, choose Create New IAM Role, Allow. This gives the Firehose delivery stream permission to upload to S3.
  6. Review and then choose Create Delivery Stream.

It can take some time to fully create the stream. In the meantime, continue on to the next section.

Invoking Lambda using AWS IoT Core rules

Using AWS IoT Core rules, you can forward messages from devices to a Lambda function, which can perform actions such as uploading to an Amazon DynamoDB table or an S3 bucket, or running data against various Amazon Machine Learning services. In this case, the function transforms and adds a message to the Kinesis Data Firehose delivery stream, which then adds that data to S3.

AWS IoT Core rules use the MQTT topic stream to trigger interactions with other AWS services. An AWS IoT Core rule is created by using an SQL statement, a topic filter, and a rule action. The Arduino example publishes messages every five seconds on the topic arduino/outgoing. The following instructions show how to consume those messages with a Lambda function.

Create a Lambda function

Before creating an AWS IoT Core rule, you need a Lambda function to consume forwarded messages.

  1. In the AWS Lambda console, choose Create function.
  2. Name the function ArduinoConsumeMessage.
  3. Keep Author from scratch selected, and for Runtime, choose Node.js 10.x. For Execution role, choose Create a new role with basic Lambda permissions. Choose Create.
  4. On the Execution role card, choose View the ArduinoConsumeMessage-role-xxxx on the IAM console.
  5. Choose Attach Policies. Then, search for and select AmazonKinesisFirehoseFullAccess.
  6. Choose Attach Policy. This applies the necessary permissions to add records to the Firehose delivery stream.
  7. In the Lambda console, in the Designer card, select the function name.
  8. Paste the following in the code editor, replacing SensorDataStream with the name of your own Firehose delivery stream. Choose Save.
// Forward messages from AWS IoT Core to a Kinesis Data Firehose delivery stream.
const AWS = require('aws-sdk')

const firehose = new AWS.Firehose()
const StreamName = "SensorDataStream"

exports.handler = async (event) => {

    // Log the raw IoT message so it can be inspected in CloudWatch Logs.
    console.log('Received IoT event:', JSON.stringify(event, null, 2))

    // Reshape the device payload: convert the timestamp into a Date and
    // give the analog reading a friendlier field name.
    let payload = {
        time: new Date(event.time),
        sensor_value: event.sensor_a0
    }

    // Firehose records carry opaque data, so serialize the payload as JSON.
    let params = {
        DeliveryStreamName: StreamName,
        Record: {
            Data: JSON.stringify(payload)
        }
    }

    // Put a single record on the delivery stream and return the response.
    return await firehose.putRecord(params).promise()
}

Create an AWS IoT Core rule

To create an AWS IoT Core rule, follow these steps.

  1. In the AWS IoT console, choose Act.
  2. Choose Create.
  3. For Rule query statement, copy and paste SELECT * FROM 'arduino/outgoing'. This subscribes to the outgoing message topic used in the Arduino example.
  4. Choose Add action, Send a message to a Lambda function, Configure action.
  5. Select the function created in the last set of instructions.
  6. Choose Create rule.
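
As an alternative to the console, the rule can also be created programmatically. Below is a minimal sketch using the AWS SDK for JavaScript; the rule name, account ID, region, and ARNs are placeholders, and the sketch also grants AWS IoT permission to invoke the function, which the console action normally configures for you.

const AWS = require('aws-sdk')

const iot = new AWS.Iot({ region: 'us-east-1' })
const lambda = new AWS.Lambda({ region: 'us-east-1' })

// Placeholder ARN; use the ARN of the ArduinoConsumeMessage function in your account.
const functionArn = 'arn:aws:lambda:us-east-1:123456789012:function:ArduinoConsumeMessage'

async function createRule() {
  // Forward every message published on arduino/outgoing to the Lambda function.
  await iot.createTopicRule({
    ruleName: 'ArduinoOutgoingToLambda',
    topicRulePayload: {
      sql: "SELECT * FROM 'arduino/outgoing'",
      actions: [{ lambda: { functionArn } }],
      ruleDisabled: false
    }
  }).promise()

  // Allow AWS IoT to invoke the function on behalf of this rule.
  await lambda.addPermission({
    FunctionName: 'ArduinoConsumeMessage',
    StatementId: 'AllowIoTRuleInvoke',
    Action: 'lambda:InvokeFunction',
    Principal: 'iot.amazonaws.com',
    SourceArn: 'arn:aws:iot:us-east-1:123456789012:rule/ArduinoOutgoingToLambda'
  }).promise()
}

createRule().catch(console.error)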

At this stage, any message published to the arduino/outgoing topic forwards to the ArduinoConsumeMessage Lambda function, which transforms and puts the payload on the Kinesis Data Firehose stream and also logs the message to Amazon CloudWatch. If you’ve connected an Arduino device to AWS IoT Core, it publishes to that topic every five seconds.

The following steps show how to test functionality using the AWS IoT console.

  1. In the AWS IoT console, choose Test.
  2. For Publish, enter the topic arduino/outgoing.
  3. Enter the following test payload:
    {
      "time": 1567023375013,
      "sensor_a0": 456
    }

  4. Choose Publish to topic.
  5. Navigate back to your Lambda function.
  6. Choose Monitoring, View logs in CloudWatch.
  7. Select a log item to view the message contents, as shown in the following screenshot.
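
If you would rather publish the test message from code than from the Test page, the following minimal sketch uses the AWS SDK for JavaScript; the device data endpoint shown is a placeholder, which you can look up under Settings in the AWS IoT console.

const AWS = require('aws-sdk')

// Placeholder endpoint; use your account's AWS IoT device data endpoint.
const iotData = new AWS.IotData({
  endpoint: 'xxxxxxxxxxxxxx-ats.iot.us-east-1.amazonaws.com',
  region: 'us-east-1'
})

const params = {
  topic: 'arduino/outgoing',
  qos: 0,
  // Same shape as the payload published by the Arduino sketch.
  payload: JSON.stringify({ time: 1567023375013, sensor_a0: 456 })
}

iotData.publish(params).promise()
  .then(() => console.log('Test message published'))
  .catch(console.error)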

Visualizing data with Amazon QuickSight

To visualize data with Amazon QuickSight, follow these steps.

  1. In the Amazon QuickSight console, sign up.
  2. Choose Manage Data, New Data Set. Select S3 as the data source.
  3. A manifest file is necessary for Amazon QuickSight to be able to fetch data from your S3 bucket. Copy the following into a file named manifest.json. Replace YOUR-BUCKET-NAME with the name of the bucket created for the Firehose delivery stream.
    {
       "fileLocations":[
          {
             "URIPrefixes":[
                "s3://YOUR-BUCKET-NAME/"
             ]
          }
       ],
       "globalUploadSettings":{
          "format":"JSON"
       }
    }
  4. Upload the manifest.json file.
  5. Choose Connect, then Visualize. You may have to give Amazon QuickSight explicit permissions to your S3 bucket.
  6. Finally, design the Amazon QuickSight visualizations in the drag and drop editor. Drag the two available fields into the center card to generate a Sum of Sensor_value by Time visual.

Conclusion

This post demonstrated visualizing data from a securely connected remote IoT device. This was achieved by connecting an Arduino to AWS IoT Core using MQTT, forwarding messages from the topic stream to Lambda using IoT Core rules, putting records on an Amazon Kinesis Data Firehose delivery stream, and using Amazon QuickSight to visualize the data stored within an S3 bucket.

With these building blocks, it is possible to implement highly scalable and customizable IoT data collection, analysis, and visualization. With the use of other AWS services, you can build a full end-to-end platform for an IoT product that can reliably handle volume. To further explore how hardware and AWS Serverless can work together, visit the Amazon Web Services page on Hackster.

from AWS Compute Blog

IT Modernization and DevOps News Week in Review

GitLab Commit, held in New York last week, brought us news that GitLab completed a $268 million Series E round of fundraising. The company reports that it is now valued at $2.75 billion and that it plans to invest the cash infusion in its DevOps platform offerings — including monitoring, security, and planning.

In addition, the firm announced GitLab 12.3, which underscores that point by including a WAF built into the GitLab SDLC platform for monitoring and reporting of security concerns related to Kubernetes clusters. The release also includes new analytics features and enhanced compliance capabilities.

To stay up-to-date on DevOps best practices, cloud security, and IT Modernization, subscribe to our blog here:
Subscribe to the Flux7 Blog

DevOps News

  • GitHub announced that it has integrated the Checks API with GitHub Pages, allowing operators to easily understand why a GitHub Pages build failed. And, as Pages is now a GitHub App, users are able to see build status via the Checks interface.
  • And in other Git news, Semmle revealed that it is joining GitHub. According to the companies, security researchers use Semmle to find vulnerabilities in code with simple declarative queries, which they then share with the Semmle community to improve the safety of code in other codebases.
  • At FutureStack in New York last week, New Relic announced the “industry’s first observability platform that is open, connected and programmable, enabling companies to create more perfect software.” The new capabilities include New Relic Logs, New Relic Traces, New Relic Metrics, and New Relic AI. In addition, the company unveiled the ability for customers and partners to build new applications via programming on the New Relic One Platform.
  • The Kubernetes project has delivered Kubernetes 1.16, which it reports consists of 31 enhancements, including custom resources, a metrics registry, and significant changes to the Kubernetes API.

AWS News

  • Amazon has unveiled a new Step Functions feature, support for dynamic parallelism, in all regions where Step Functions is offered. According to AWS, this was probably the most requested feature for Step Functions, as it unblocks the implementation of new use cases and can help optimize existing ones. Specifically, Step Functions now supports a new Map state type for dynamic parallelism.
  • Heavy CloudFormation users will be happy to see that Amazon has expanded its capabilities; now operators can use CloudFormation templates to configure and provision additional features for Amazon EC2, Amazon ECS, Amazon ElastiCache, Amazon ES, and more. You can see the full list of new capabilities here.
  • AWS has brought to market a new Amazon WorkSpaces feature that restores a WorkSpace to its last known healthy state, allowing you to easily recover from inaccessible WorkSpaces caused by incompatible third-party updates.
  • AWS continues to evolve its IoT solution set with AWS IoT Greengrass 1.9.3, now available, which adds support for ARMv6 and new machine learning inference capabilities.
  • AWS introduced in preview the NoSQL Workbench for Amazon DynamoDB. The free, client-side application helps operators design and visualize data models, run queries on data, and generate code for applications, and it is available for Windows and macOS.

Flux7 News

Flux7 has several upcoming educational opportunities. Please join us at:

  • Our October 9, 2019 Webinar, DevOps as a Foundation for Digital Transformation. This free 1-hour webinar from GigaOm Research brings together experts in DevOps, featuring GigaOm analyst Jon Collins and a special guest from Flux7, CEO and co-founder Aater Suleman. The discussion will focus on how to scale DevOps efforts beyond the pilot and deliver a real foundation for innovation and digital transformation.
  • The High-Performance Computing Immersion Day on October 11, 2019, in Houston, TX, where attendees will gain in-depth, hands-on training with services such as Batch, Parallel Cluster, Elastic Fabric Adapter (EFA), FSx for Lustre, and more in an introductory session. Register Here Today.
  • The AWS Container Services workshop on October 17, 2019, in San Antonio, TX. Designed for infrastructure administrators, developers, and architects, this introductory session provides a mix of classroom training and hands-on labs. Register Here.

Subscribe to the Flux7 Blog

Written by Flux7 Labs

Flux7 is the only Sherpa on the DevOps journey that assesses, designs, and teaches while implementing a holistic solution for its enterprise customers, thus giving its clients the skills needed to manage and expand on the technology moving forward. Not a reseller or an MSP, Flux7 recommendations are 100% focused on customer requirements and creating the most efficient infrastructure possible that automates operations, streamlines and enhances development, and supports specific business goals.

from Flux7 DevOps Blog

IT Modernization and DevOps News Week in Review

With the HPC User Forum this past week, we saw several High-Performance Computing (HPC) related news announcements. Starting off, Hyperion, which established the Forum in 1999, shared that HPC in the cloud is gaining traction, with major new growth areas coming from AI/ML/DL, big data analytics, and non-traditional HPC users from the enterprise space.

Univa, in turn, introduced Navops Launch 2.0. The newest version of its platform is focused on simplifying the migration of enterprise HPC workloads to the cloud. The company also announced the expansion of its Navops Launch HPC cloud-automation platform to support the Slurm workload scheduler. And HPE announced ML Ops, a container-based solution that supports ML workflows and lifecycles across on-premises, public cloud, and hybrid cloud environments.

To stay up-to-date on DevOps best practices, cloud use cases like HPC, and IT Modernization, subscribe to our blog here:
Subscribe to the Flux7 Blog

DevOps News

  • HashiCorp announced the beta version of Clustering for HashiCorp Terraform Enterprise. According to a blog announcement, the new Clustering functionality enables users to easily install and manage a scalable cluster that can meet their performance and availability requirements. The clustering capability in Terraform Enterprise includes the ability to scale to meet workload demand, enhanced availability and an easier installation and management process.
  • HashiCorp is partnering with VMware to support the Service Mesh Federation Specification. A new service mesh integration between Consul Enterprise and NSX-SM will allow traffic to flow securely beyond the boundary of each individual mesh, enabling flexibility and interoperability.
  • While we’re discussing service mesh, Kong announced a new open source project called Kuma. In a press release, Kuma is described as a universal control plane that addresses the limitations of first-generation service mesh technologies by enabling seamless management of any service on the network. Kuma runs on any platform – including Kubernetes, containers, virtual machines, bare metal, and other legacy environments.
  • In other news, ScyllaDB announced a new project — Alternator. The firm describes the open-source software in a press release as enabling application- and API-level compatibility between Scylla and Amazon’s NoSQL cloud database, Amazon DynamoDB, allowing DynamoDB users to migrate to an open-source database that runs anywhere — on any cloud platform, on-premises, on bare-metal, virtual machines or Kubernetes.

AWS News

  • First introduced at re:Invent last year, AWS just announced GA of Amazon Quantum Ledger Database (QLDB). QLDB is a ledger database that is intended as a system of record for stored data. According to Amazon, it maintains a complete, immutable history of all committed changes to the data that cannot be updated, altered, or deleted. The QLDB API allows you to cryptographically verify that the history is accurate and legitimate, making it ideal for finance, ecommerce, manufacturing, and more.
  • To give operators a better understanding of network flows and avoid the legwork typically associated with it, Amazon has introduced additional metadata that can now be included in Flow Log records. Amazon notes that enriched Flow Logs allow operators to simplify their scripts or remove the need for post-processing altogether by reducing the number of computations or look-ups required to extract meaningful information from the log data. For example, operators can choose to add metadata such as vpc-id, subnet-id, instance-id, or tcp-flags.
  • AWS Service Catalog introduced the ability to gain visibility into portfolio and product budgets through integration with AWS Budgets. With the newly added feature, users can now create budgets, associate them with portfolios and products, and track spend against them.
  • Having worked recently on a QuickSight project for a customer, our DevOps consultants enjoyed these two articles: Federate Amazon QuickSight access with Okta, covering single sign-on to QuickSight, and Create advanced insights using Level Aware Aggregations in Amazon QuickSight, which illustrates several examples of how to perform calculations on data to derive advanced and meaningful insights.

Flux7 News

  • Read Flux7's newest article, Flux7 Case Study: Technology's Role in the Agile Enterprise, in which we share our journey to becoming an Agile Enterprise and how we have adopted specific supporting technologies to further our agile goals.
  • Join us as Flux7 and AWS present a High Performance Computing Immersion Day on October 11, 2019, in Houston, TX. Attendees at the hands-on training session will learn about services such as Batch, Parallel Cluster, Elastic Fabric Adapter (EFA), FSx for Lustre, and more in an introductory session. Register Here Today.

Subscribe to the Flux7 Blog

from Flux7 DevOps Blog