Tag: AWS Management Tools Blog

How to self-service manage AWS Auto Scaling groups and Amazon Redshift with AWS Service Catalog Service Actions


Some of the customers I work with provide AWS Service Catalog products to their end-users to enable self-service for launching and managing Amazon Redshift clusters, Amazon EMR clusters, or web applications at scale using AWS Auto Scaling groups. These end-users would like the ability to self-manage those resources, for example, to take a snapshot of an instance or data warehouse. With AWS Service Catalog, end-users can launch a data warehouse product using Redshift, a web farm using EC2, or a Hadoop instance using EMR.

In this blog post, I show you how to enable your end-users by creating self-service actions using AWS Service Catalog Service Actions with AWS Systems Manager. You will also learn how to use the Service Actions feature to manage these products, for example, how to start or stop EC2 instances running under an auto scaling group and how to back up EC2 and Redshift.

This solution uses AWS Service Catalog, AWS Systems Manager, AWS CloudFormation, Amazon EC2 Auto Scaling, Amazon Redshift, and Amazon S3. Most of the resources are set up for you with an AWS CloudFormation stack.

Background

Here are some of the AWS Service Catalog concepts referenced in this post. For more information, see Overview of AWS Service Catalog.

  • A product is a blueprint for building the AWS resources that you make available for deployment on AWS, along with the configuration information. Create a product by importing an AWS CloudFormation template or, in the case of AWS Marketplace-based products, by copying the product to AWS Service Catalog. A product can belong to multiple portfolios.
  • A portfolio is a collection of products, together with configuration information. Use portfolios to manage user access to specific products. You can grant portfolio access at the AWS Identity and Access Management (IAM) user, IAM group, or IAM role level.
  • A provisioned product is an AWS CloudFormation stack; that is, the AWS resources that are created. When an end-user launches a product, AWS Service Catalog provisions the product from an AWS CloudFormation stack.
  • Constraints control the way users can deploy a product. With launch constraints, you can specify a role that AWS Service Catalog can assume to launch a product.

Solution overview

The following diagram maps out the solution architecture.

 

Here’s the process for the administrator:

  1. The administrator creates an AWS CloudFormation template for an auto scaling group.
  2. The administrator then creates an AWS Service Catalog product based on the CloudFormation template.
  3. AWS Systems Manager is then used to create SSM automation documents that manage the EC2 instances under an auto scaling group and the Redshift cluster. AWS Service Catalog self-service actions are then created based on the automation documents and attached to the AWS Service Catalog auto scaling group and Redshift products.

Here’s the process when the end-user launches the auto scaling group product:

  1. The end-user selects and launches the AWS Service Catalog auto scaling group or Redshift product.
  2. The end-user then uses the AWS Service Catalog console to select the provisioned auto scaling group or Redshift product and chooses the self-service action to stop or start the EC2 instances or create a snapshot of Redshift.
  3. Behind the scenes, invisible to the end-user, the SSM automation document stops or starts the EC2 instances or takes a snapshot of Redshift.

Step 1: Configuring an environment

To get the setup material:

  1. Download the sc_ssm_autoscale.zip file with the configuration content.
  2. Unzip the contents and save them to a folder. Note the folder’s location.

Create your AWS Service Catalog auto scaling group and Redshift products:

  1. Log in to your AWS account as an administrator. Ensure that you have an AdministratorAccess IAM policy attached to your login because you’re going to create AWS resources.
  2. In the Amazon S3 console, create a bucket. Leave the default values except as noted.
    •  Bucket name – scssmblog-<accountNumber>, with no dashes in the account number (for example, scssmblog-999999902040)

To upload content to the new bucket:

  1. Select your bucket, and choose Upload, Add files.
  2. Navigate to the folder that contains the configuration content. Select all the files and choose Open. Leave the default values except as noted.
  3. After the upload completes, from the list of files, select the sc_setup_ssm_autoscale.json file.
  4. Right-click the link under Object URL and choose Copy link address.
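If you prefer to script this setup, the bucket creation and upload can also be done with the AWS CLI. This is a minimal sketch, assuming your credentials are configured for us-east-1 and that the unzipped files sit in a local folder named sc_ssm_autoscale (adjust the path to wherever you saved them):

# Derive the account number for the bucket name (no dashes)
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
BUCKET=scssmblog-${ACCOUNT_ID}

# Create the bucket and upload the unzipped configuration content
aws s3 mb s3://${BUCKET}
aws s3 cp ./sc_ssm_autoscale/ s3://${BUCKET}/ --recursive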

To launch the configuration stack:

  1. In the AWS CloudFormation console, choose Create Stack, Amazon S3 URL, paste the URL you just copied, and then choose Next.
  2. On the Specify stack details page, specify the following:
    1. Stack name: scssmblogSetup
    2. S3Bucket: scssmblog-<accountNumber>
    3. SCEndUser: The current user name
  3. Leave the default values except as noted.
  4. On the Review page, check the box next to I acknowledge that AWS CloudFormation might create IAM resources with custom names, and choose Create.
  5. After the status of the stack changes to CREATE COMPLETE, select the stack and choose Outputs to see the output.

Find the ServiceCatalog entry and choose the URL to its right.

Congratulations! You have completed the setup.
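The stack can also be launched from the AWS CLI. The following sketch assumes the bucket and parameter names used above; substitute the Object URL you copied for the template URL if it differs, and replace <your user name> with the IAM user name to use for SCEndUser:

ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)

aws cloudformation create-stack \
    --stack-name scssmblogSetup \
    --template-url https://scssmblog-${ACCOUNT_ID}.s3.amazonaws.com/sc_setup_ssm_autoscale.json \
    --parameters ParameterKey=S3Bucket,ParameterValue=scssmblog-${ACCOUNT_ID} \
                 ParameterKey=SCEndUser,ParameterValue=<your user name> \
    --capabilities CAPABILITY_NAMED_IAM

# Wait for the stack and print its outputs, including the ServiceCatalog URL
aws cloudformation wait stack-create-complete --stack-name scssmblogSetup
aws cloudformation describe-stacks --stack-name scssmblogSetup --query 'Stacks[0].Outputs'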

Step 2: Creating the AWS SSM automation document

You will repeat these steps for the Redshift snapshot document.

  1. Open the file that you downloaded in the previous step: ssmasg_stop.json for the auto scaling group action, or redshift_snapshot.json for the Redshift document.
  2. Copy the contents.
  3. Log into the AWS Systems Manager console as an admin user.
  4. Choose Documents from the menu at the bottom left.
  5. Choose Create document:
    • Name – SCAutoScalingEC2stop for the auto scaling group document, or SCSnapshotstop for Redshift
    • Target Type
      • /AWS::AutoScaling::AutoScalingGroup for the auto scaling group
      • /AWS::Redshift::Cluster for Redshift
    • Document type – Automation document
    • JSON – paste the content you copied from step 2
    • Choose Create document

You will see a green banner saying your document was successfully created.
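The equivalent AWS CLI call for the auto scaling group document is sketched below; it assumes you run it from the folder that contains the downloaded JSON files. The same pattern applies to the Redshift document with SCSnapshotstop, redshift_snapshot.json, and the /AWS::Redshift::Cluster target type:

aws ssm create-document \
    --name SCAutoScalingEC2stop \
    --document-type Automation \
    --target-type "/AWS::AutoScaling::AutoScalingGroup" \
    --content file://ssmasg_stop.json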

 

Step 3: Create an AWS Service Catalog self-service action

  1. Log into the AWS Service Catalog console as an admin user.
  2. On the left navigation pane, choose Service actions.
  3. Choose Create new action.
  4. On the Define page choose Custom documents.
  5. Choose the document you just created for ASG.
  6. Choose Next.
  7. On the Configure page, leave the default values.
  8. Choose Create action.

You will see a banner saying the action has been created and is now ready to use.
Repeat these steps for the Redshift document.

Step 4: Associate action to the product

  1. On the Service actions page, choose the action you created.
  2. Choose Associate action.
  3. Choose the AutoScaling product.
  4. Choose the Version.
  5. Choose Associate action.

Repeat for the Redshift product.

Congratulations! Your new service action has been associated with the product. The next step is to deploy the AutoScaling and Redshift products and use the new self-service actions.
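If you prefer to script Steps 3 and 4, the association can also be done with the AWS CLI. The sketch below uses placeholder IDs; look up your own product, provisioning artifact, and service action IDs first with the list commands shown:

# Look up the IDs you need (product, version, and the action created in Step 3)
aws servicecatalog search-products-as-admin --query 'ProductViewDetails[].ProductViewSummary.[Name,ProductId]'
aws servicecatalog list-provisioning-artifacts --product-id prod-EXAMPLE111111
aws servicecatalog list-service-actions --query 'ServiceActionSummaries[].[Name,Id]'

# Associate the self-service action with a specific product version
aws servicecatalog associate-service-action-with-provisioning-artifact \
    --product-id prod-EXAMPLE111111 \
    --provisioning-artifact-id pa-EXAMPLE222222 \
    --service-action-id act-EXAMPLE333333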

Step 5: Launching the AWS Service Catalog product

Redshift

  1. Log into the AWS Service Catalog console as an admin or end-user.
  2. On the left navigation pane on top, choose Products list.
  3. Choose the Redshift product.
  4. Choose LAUNCH PRODUCT.
  5. Enter a name – myredshift
  6. Choose Next.
  7. On the Parameters page:
    • DBName – mydb001
    • MasterUserPassword – enter a password
  8. Choose Next.
  9. On the TagOptions page choose Next.
  10. On the Notifications page choose Next.
  11. On the Review page choose Launch.

Auto Scaling Group

  1. Log into the AWS Service Catalog console as an admin or end-user.
  2. On the left navigation pane on top, choose Products list.
  3. Choose the AutoScaling product.
  4. Choose LAUNCH PRODUCT.
  5. Enter a name – myscacg
  6. Choose Next.
  7. On the Parameters page:
    • Serverpostfix – default
    • Imageid – enter an Amazon Linux AMI ID for your current Region
  8. Choose Next.
  9. On the TagOptions page choose Next.
  10. On the Notifications page choose Next.
  11. On the Review page choose Launch.

Wait for the status to change to Completed.
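These products can also be provisioned programmatically. A sketch for the Redshift product follows; the product and provisioning artifact IDs are placeholders, and the parameter keys must match those defined in the product's template (DBName and MasterUserPassword here):

aws servicecatalog provision-product \
    --product-id prod-EXAMPLE111111 \
    --provisioning-artifact-id pa-EXAMPLE222222 \
    --provisioned-product-name myredshift \
    --provisioning-parameters Key=DBName,Value=mydb001 Key=MasterUserPassword,Value=<your password>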

 

Step 6: Executing the self-service action

Auto Scaling Group

  1. Choose Actions.
  2. Choose the self-service action you created SCAutoScalingEC2stop.
  3. Choose RUN ACTION to confirm.

Redshift

  1. Choose Actions.
  2. Choose the self-service action you created SCSnapshotstop.
  3. Choose RUN ACTION to confirm.

Congratulations, you have successfully executed the new self-service action.
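If you want to confirm the results outside the console, the following verification sketch checks the underlying resources with the AWS CLI; the exact group, instance, and snapshot names depend on what the products created in your account:

# EC2 instances in the product's auto scaling group should report as stopped
aws autoscaling describe-auto-scaling-groups \
    --query 'AutoScalingGroups[].Instances[].[InstanceId,LifecycleState]'
aws ec2 describe-instances \
    --query 'Reservations[].Instances[].[InstanceId,State.Name]'

# A new manual snapshot should exist for the Redshift cluster
aws redshift describe-cluster-snapshots --snapshot-type manual \
    --query 'Snapshots[].[SnapshotIdentifier,Status]'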

 

Cleanup process

To avoid incurring costs, delete resources that are no longer needed. You can terminate the deployed AWS Service Catalog products by choosing Actions, then Terminate.

 

Conclusion
In this post, you learned an easy way to back up Redshift databases and to manage EC2 instances in an auto scaling group. You also saw how AWS Service Catalog adds an extra layer of governance and control when you use it to deploy resources that support business objectives.

About the Author

Kenneth Walsh is a New York-based Solutions Architect focusing on AWS Marketplace. Kenneth is passionate about cloud computing and loves being a trusted advisor for his customers. When he’s not working with customers on their journey to the cloud, he enjoys cooking, audio books, movies, and spending time with his family and dog.

from AWS Management Tools Blog

Introducing Amazon CloudWatch Container Insights for Amazon ECS


Amazon Elastic Container Service (Amazon ECS) lets you monitor resources using Amazon CloudWatch, a service that provides metrics for CPU and memory reservation and for cluster and service utilization. In the past, you had to enable custom monitoring of services and tasks. Now, you can monitor, troubleshoot, and set alarms for all your Amazon ECS resources using CloudWatch Container Insights. This fully managed service collects, aggregates, and summarizes Amazon ECS metrics and logs.

The CloudWatch Container Insights dashboard gives you access to the following information:

  • CPU and memory utilization
  • Task and service counts
  • Read/write storage
  • Network Rx/Tx
  • Container instance counts for clusters, services, and tasks
  • and more

Direct access to these metrics offers you much fuller insight into and control over your Amazon ECS resources.

With CloudWatch Container Insights, you can:

  • Gain access to CloudWatch Container Insights dashboard metrics
  • Integrate with CloudWatch Logs Insights to dynamically query and analyze container application and performance logs
  • Create CloudWatch alarm notifications to track performance and potential issues
  • Enable Container Insights with one click, with no need for additional configuration or sidecars to monitor your tasks.

CloudWatch Container Insights can also support Amazon Elastic Kubernetes Service (Amazon EKS).

Overview

In this post, I guide you through Container Insights setup and introduce you to the Container Insights dashboard. I demonstrate the Amazon ECS metrics available through this console. I show you how to query Container Insights performance logs to obtain specific metric data. Finally, I walk you through the default, automatically generated dashboard, explaining how to right-size tasks and scale services.

Enable Container Insights on new clusters by default

First, configure your Amazon ECS service to enable Container Insights by default for clusters created with your current IAM user or role.

  1. Open the Amazon ECS console.
  2. In the navigation pane, choose Account Settings.
  3. To enable the Container Insights default opt-in, check the box at the bottom of the page. If this setting is not enabled, Container Insights can be enabled later when creating a cluster.
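You can also manage this setting from the AWS CLI; the cluster name below is a placeholder:

# Opt the current IAM identity in to Container Insights for new clusters
aws ecs put-account-setting --name containerInsights --value enabled

# Or enable it explicitly when creating a cluster
aws ecs create-cluster --cluster-name fargate-demo \
    --settings name=containerInsights,value=enabled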

Next, follow the Amazon ECS Workshop for AWS Fargate instructions to create an Amazon ECS cluster with Container Insights enabled. You are creating a cluster with a web frontend and multiple backend services.

When completed, access the frontend application using the URL of the inbound Application Load Balancer, as in the following diagram.

You can also access it by using the output of the following command:

alb_url=$(aws cloudformation describe-stacks --stack-name fargate-demo-alb --query 'Stacks[0].Outputs[?OutputKey==`ExternalUrl`].OutputValue' --output text)
echo "Open $alb_url in your browser"

Explore CloudWatch Container Insights

After the cluster is running, run the container tasks to confirm that Container Insights is enabled and CloudWatch is collecting Amazon ECS metrics.

To view the newly collected metrics, navigate to the CloudWatch console and choose Container Insights. To view your automatic dashboard, select an ECS Resource dimension. The available Amazon ECS options are ECS clusters, ECS services, and ECS tasks. Choose ECS Clusters.

The dashboard should then display the available metrics for your cluster. These metrics should include CPU, memory, tasks, services, and network utilization.

In the following dashboard example, the tasks are running on Fargate and are required to use AWSVPC networking mode. Container Insights doesn’t currently support AWSVPC networking mode for network metrics in this first release. You can see in the following graph that these metrics are omitted. This cluster was set up to support only Fargate tasks, and the container instance count is equal to zero.

At the bottom of the page, select an ECS cluster. Choose Actions, View performance logs. This selection leads to CloudWatch Logs Insights, where you can quickly and effectively query CloudWatch Logs data metrics. CloudWatch Logs Insights includes a purpose-built query language with simple but powerful commands. For more information, see Analyzing Log Data with CloudWatch Logs Insights.

Container Insights provides a default query that can be executed by choosing Run query. This query reports all of the metrics that CloudWatch collects from the cluster, minute by minute. You can expand each data item to investigate individual metrics.

With CloudWatch Container Insights, you can monitor an ECS cluster from a unified view, quickly identifying and responding to any operational issues.

Explore use cases

In this section, I explore how to use CloudWatch and Container Insights to manage your cluster. Start by generating traffic to your cluster. Create one web request per second to your frontend URL:

alb_url=$(aws cloudformation describe-stacks --stack-name fargate-demo-alb --query 'Stacks[0].Outputs[?OutputKey==`ExternalUrl`].OutputValue' --output text)
while true; do curl -so /dev/null $alb_url ; sleep 1 ; done &

In the CloudWatch Insights dashboard, choose ECS Services, and select your cluster. As the following dashboard screenshot shows, CPU and memory utilization are minimal, and the frontend service can handle this load.

Next, simulate high traffic to the cluster using ApacheBench. The following ApacheBench command generates nine concurrent requests (-c 9), for 60 seconds (-t 60), and ignores length variances (-l). Loop this repeatedly every second.

while true; do ab -l -c 9 -t 60 $alb_url ; sleep 1; done

Task CPU increases significantly while Memory Utilization remains low.

Adjust the dashboard time range to see the average CPU and memory utilization for each task in the past 30 minutes, as shown in the following screenshot.

To see individual resource utilization average over 30 minutes, scroll to the bottom of the dashboard.

Select any task and choose View performance logs. This selection opens CloudWatch Logs Insights for the ECS cluster. In the query box, enter the following query and choose Run query:

stats avg(CpuUtilized), avg(MemoryUtilized) by bin(30m) as period, TaskDefinitionFamily, TaskDefinitionRevision
| filter Type = "Task" | sort period desc, TaskDefinitionFamily

From the query result, the frontend service has an average CPU utilization of 229.2273 CPU units and memory utilization of 71 megabytes. The current frontend task configuration uses 512 MiB of memory and 256 CPU units. Your frontend service has used almost all of the CPU that it has reserved.
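The same query can be run outside the console with the CloudWatch Logs CLI. This sketch assumes the performance log group follows the /aws/ecs/containerinsights/<cluster>/performance naming that Container Insights uses, that a shell variable named clustername holds your cluster name, and that you query the last 30 minutes:

QUERY_ID=$(aws logs start-query \
    --log-group-name "/aws/ecs/containerinsights/${clustername}/performance" \
    --start-time $(($(date +%s) - 1800)) \
    --end-time $(date +%s) \
    --query-string 'stats avg(CpuUtilized), avg(MemoryUtilized) by bin(30m) as period, TaskDefinitionFamily, TaskDefinitionRevision | filter Type = "Task" | sort period desc, TaskDefinitionFamily' \
    --query 'queryId' --output text)

# Results become available a few seconds after the query starts
aws logs get-query-results --query-id $QUERY_ID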

To improve the frontend service performance based on the CPU metrics, one option is to re-size the task. First, create a new revision of this task definition with a new CPU and Memory value. Then, update the frontend service to use the new revision.

Step 1: Create a new task definition revision.

  1. In the ECS console, choose Task Definitions.
  2. For ecsdemo-frontend, select the check box and choose Create new revision.
  3. On the Create new revision of Task Definition screen, under Container Definitions, choose ecsdemo-frontend.
  4. For Memory Limits (MiB), choose Soft limit and enter the value of 1024.
  5. Under Environment, for CPU units, enter 512.
  6. Choose Update.
  7. On the Create new revision of Task Definition screen, under Task size, choose 0.5 vCPU, which is 512 CPU units.
  8. For Memory, choose 1GB.
  9. Choose Create.

Step 2: Update the frontend service with the new task definition.

  1. Choose Cluster and select the cluster.
  2. On the Services tab, for ecsdemo-frontend, select the check box and choose Update.
  3. For Task Definition, select the revision that you previously created in the first step.
  4. Choose Skip to review then Update Service.

ECS spins up the new tasks with this new revision of the task definition and removes the old ones.
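The service update can also be scripted. The following sketch assumes the new revision number is 2 and that $clustername holds your cluster name; registering the revision itself with register-task-definition requires the full task definition JSON, so the console is often simpler for that part:

# Point the service at the new task definition revision
aws ecs update-service \
    --cluster ${clustername} \
    --service ecsdemo-frontend \
    --task-definition ecsdemo-frontend:2

# Watch the deployment roll out
aws ecs describe-services --cluster ${clustername} --services ecsdemo-frontend \
    --query 'services[0].deployments[].[taskDefinition,runningCount,desiredCount]'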

As shown in the dashboard, the average CPU utilization percentage remains high. The current load still stresses the CPU. On a positive note, the load balancer for the frontend service can handle significantly more requests, going from 25k to 57k requests over 5 minutes.

The benchmarking result from ApacheBench shows the same evidence. From the client perspective, the frontend service is able to process over twice as many requests. By increasing the CPU available to your task, you increase the frontend service's ability to handle the load. Remember, though, that the frontend service consists of three tasks and CPU usage remains high.

To continue to improve the frontend service performance based on the CPU metrics, increase the task size or scale out the service. With the current load, the average RequestCountPerTarget value is around 8k per 5-minute interval. Update the frontend service to automatically scale tasks and keep RequestCountPerTarget close to 1,000 requests per target.

Run the following command to update the frontend service, setting up Service Auto Scaling with a maximum of 25 tasks and a minimum of 3 tasks. For Scaling policy, use Target Tracking Scaling Policy with the target value of 1000 for RequestCountPerTarget.

cd ~/environment/fargate-demo
export clustername=$(aws cloudformation describe-stacks --stack-name fargate-demo --query 'Stacks[0].Outputs[?OutputKey==`ClusterName`].OutputValue' --output text)
export alb_arn=$(aws cloudformation describe-stack-resources --stack-name fargate-demo-alb | jq -r '.[][] | select(.ResourceType=="AWS::ElasticLoadBalancingV2::LoadBalancer").PhysicalResourceId')
export target_group_arn=$(aws cloudformation describe-stack-resources --stack-name fargate-demo-alb | jq -r '.[][] | select(.ResourceType=="AWS::ElasticLoadBalancingV2::TargetGroup").PhysicalResourceId')
export target_group_label=$(echo $target_group_arn |grep -o 'targetgroup.*')
export alb_label=$(echo $alb_arn |grep -o 'farga-Publi.*')
export resource_label="app/$alb_label/$target_group_label"

aws application-autoscaling register-scalable-target \
    --service-namespace ecs \
    --scalable-dimension ecs:service:DesiredCount \
    --resource-id service/${clustername}/ecsdemo-frontend \
    --min-capacity 3 \
    --max-capacity 25
 
envsubst <config.json.template >/tmp/config.json
 
aws application-autoscaling put-scaling-policy --cli-input-json file:///tmp/config.json
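The config.json.template file comes from the workshop repository, so its exact contents may differ, but a target tracking policy for put-scaling-policy generally looks like the following sketch, with envsubst filling in ${clustername} and ${resource_label}; the policy name is a placeholder:

cat > config.json.template <<'EOF'
{
    "PolicyName": "ecsdemo-frontend-request-count-per-target",
    "ServiceNamespace": "ecs",
    "ResourceId": "service/${clustername}/ecsdemo-frontend",
    "ScalableDimension": "ecs:service:DesiredCount",
    "PolicyType": "TargetTrackingScaling",
    "TargetTrackingScalingPolicyConfiguration": {
        "TargetValue": 1000.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ALBRequestCountPerTarget",
            "ResourceLabel": "${resource_label}"
        }
    }
}
EOF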

Now the frontend service starts to scale. CloudWatch Container Insights displays the number of tasks in each step of scaling.

 

From the request metric of your frontend service's load balancer, you can see that your cluster can handle even more requests: approximately 153k requests, or 6.1k requests per target, over 5 minutes. If you had not set a maximum of 25 tasks in the Auto Scaling policy, the frontend service would have scaled out even more, because the threshold is 1000 requests per target.

You see the same evidence from ApacheBench: the frontend service is able to process more requests and the time per request is much smaller.

From the query result in CloudWatch Logs Insights, the average CPU utilization is at 158 out of the 512 CPU units configured earlier, which is relatively low.

You can use these metrics from Container Insights to help fine-tune your cluster. Just as you spotted a task that was starved for CPU, you could spot an oversized task with the same techniques and reduce its configuration, saving money in turn.

To see the frontend service scale in, stop the load by pressing Ctrl+C twice to cancel the ab loop. For the curl loop, type fg to bring the process to the foreground, then press Ctrl+C. Within a couple of minutes, the frontend service starts to scale in.

Conclusion

In the past, you had to implement custom metrics. Now, CloudWatch Container Insights for Amazon ECS helps you focus on monitoring and managing your application so that you can respond quickly to operational issues. The service provides sharper insight into your Amazon ECS clusters, services, and tasks through added CPU and memory metrics. You can also use CloudWatch Logs Insights to query performance log metrics that drive right-sizing, alarms, scaling, and more informed analysis.

In this post, I introduced these new CloudWatch container metrics. I walked you through the default, automatically generated dashboard, showing you how to use the CloudWatch Container Insights console to right-size tasks and scale services. I dived deep into a performance log event provided by CloudWatch Logs Insights. I showed you how to use query language to find a specific metric’s value and choose the best value for right-sizing purposes.

CloudWatch Container Insights is generally available for Amazon ECS, AWS Fargate, Amazon EKS, and Kubernetes. For more information, see the documentation on Using Container Insights. To provide feedback or subscribe to email updates for this feature, email us at [email protected].

 

About the Author

Sirirat Kongdee is a Sr. Solutions Architect at Amazon Web Services. She loves working with customers and helping them remove roadblocks from their cloud journey. She enjoys traveling (whether for work or not) as much as she enjoys hanging out with her pug in front of the TV.

 

from AWS Management Tools Blog

Managing Amazon WorkSpaces by integrating AWS Service Catalog with ServiceNow


As enterprises adopt Amazon WorkSpaces as their virtual desktop solution, there is a need to implement an ITSM-based self-service offering for provisioning and operations.

In this post, you will learn how to integrate AWS Service Catalog with ServiceNow so users can request their own WorkSpace instances inclusive of all business-level approvals and auditing. You will then see how to use Self-Service Actions to add operations functions directly from ServiceNow to allow users to reboot, terminate, repair, or upgrade their WorkSpaces.

Overview

AWS Service Catalog allows you to manage commonly deployed AWS services and provisioned software products centrally. This service helps your organization achieve consistent governance and compliance requirements, while enabling users to deploy only the approved AWS services they need.

ServiceNow is an enterprise service-management platform that places a service-oriented lens on the activities, tasks, and processes needed for a modern work environment. Integrated with ServiceNow, AWS Service Catalog becomes a self-service offering through which end users can order IT services subject to request-fulfillment approvals and workflows, enabling you to approve a specific request within ServiceNow (for example, a request for a WorkSpace to be provisioned).

Solution

This solution shows how AWS Service Catalog can be used to enable a self-service lifecycle-management offering for Amazon WorkSpaces from within ServiceNow. Using this solution:

  • Users can provision, upgrade, and terminate their WorkSpace instance from within the ServiceNow portal.
    • At the request stage, users can select the instance size, type, and configuration parameters when creating their order in the AWS Service Catalog.
    • After the instance is created, the user can follow the same process to request service actions such as reboot, terminate, rebuild, or upgrade.
  •  ServiceNow admins can determine (based on IAM roles) which Amazon WorkSpaces software bundle each group of users receives by default.

The arrows in the following diagram depict the API flow between the services when users access Amazon WorkSpaces via ServiceNow and AWS Service Catalog.

 

Prerequisites

To get started, do the following:

  1. Install and configure the AWS Service Catalog connector for ServiceNow.
  2. Add an Amazon WorkSpaces product.

After installing the prerequisites, you have an AWS Service Catalog-provisioned product. Now, you can access the following Create WorkSpace Instance page to provision, upgrade, and terminate WorkSpace instances within ServiceNow.

 

 

Adding AWS Service Catalog operational actions

Next, you will add AWS Service Catalog self-service actions, enabling you to run an AWS API call or command on the WorkSpace instance, including:

  • Install a software package.
  • Reboot a WorkSpace instance.
  • Change performance modes.
  • Repair a WorkSpace instance.

For each service action that you want to create, you need to add an AWS Systems Manager automation document. In this example, you will create an AWS Service Catalog service action to reboot a WorkSpace instance.

First, create a JSON file for the service action that you wish to create.

Here’s sample code for an API-driven Amazon WorkSpaces reboot:

{
  "description": "Reboot WorkSpaces instances",
  "schemaVersion": "0.3",
  "assumeRole": "",
  "parameters": {
    "WorkspaceId": {
      "type": "String",
      "description": "WorkspaceID- ws-xxxx"
    },
    "WPAction": {
      "type": "String",
      "description": "Action",
      "default": "Reboot"
    },
    "AutomationAssumeRole": {
      "type": "String",
      "description": "(Optional) The ARN of the role that allows Automation to perform the actions on your behalf.",
      "default": ""
    }
  },
  "mainSteps": [
    {
      "name": "wpreboot",
      "action": "aws:executeAwsApi",
      "inputs": {
        "Service": "workspaces",
        "Api": "RebootWorkspaces",
        "RebootWorkspaceRequests": [
          {
            "WorkspaceId": ""
          }
        ]
      },
      "isEnd": "True"
    }
  ]
}

After you create this file, execute the AWS CLI command to build the automation document and link it to Amazon WorkSpaces.

 

Note

Complete this task in the AWS CLI to enable the AWS::WorkSpaces::Workspace target.

In this example, the file is named wpreboot.json to create an automation document called wpreboot. Run the following command:

C:\ssm>aws ssm create-document --content file://c:\ssm\wpreboot.json --name wpreboot --document-type Automation --target-type /AWS::WorkSpaces::Workspace

Test this action in Systems Manager to ensure that it’s working as expected.
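One way to test it from the CLI is to start an automation execution directly against a test WorkSpace; the WorkSpace ID below is a placeholder:

aws ssm start-automation-execution --document-name wpreboot --parameters "WorkspaceId=ws-xxxxxxxxx"

# Check the execution status
aws ssm describe-automation-executions --filters Key=DocumentNamePrefix,Values=wpreboot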

Next, add the automation document to a new AWS Service Catalog self-service action. Instructions can be found at https://docs.aws.amazon.com/servicecatalog/latest/adminguide/using-service-actions.html. Once completed, you should have the service actions associated with your Amazon WorkSpaces product, similar to the following example.

 

In the ServiceNow portal, you should now have this “reboot” option associated with your product as shown in the following example.

 

Adding ServiceNow Workflows

As a final step, you will build ServiceNow Workflows to allow you to add approvals, notifications, open change records, and other organizational-based requirements before an order is approved.

The AWS Service Catalog connector for ServiceNow contains the following Workflows that you can use as a starting point. The workflows should be updated to meet the needs of your organization.

  • AWS Service Catalog – Approve Change Request
  • AWS Service Catalog – Execute Provisioned Product Action
  • AWS Service Catalog – Invoke Workflow Task
  • AWS Service Catalog – Provision Product Request
  • AWS Service Catalog – Track Product record

 

Summary

Integrating AWS Service Catalog with ServiceNow gives end users the ability to create a self-service lifecycle-management solution for Amazon WorkSpaces in a familiar, secure, ITSM-aligned process. With the addition of service actions, enterprises can add operational capabilities such as the ability to upgrade, reboot, repair, or install software on their Amazon WorkSpaces from within the ServiceNow portal.

 

About the author

Alan DeLucia is a New York based Business Development Manager with AWS Service Catalog and AWS Control Tower. Alan enjoys helping customers build management capabilities and governance into their AWS solutions. In his free time, Alan is an avid Mountain Biker and enjoys spending time and vacationing with his family.

from AWS Management Tools Blog

Using the AWS Config Auto Remediation feature for Amazon S3 bucket compliance


AWS Config keeps track of the configuration of your AWS resources and their relationships to your other resources. It can also evaluate those AWS resources for compliance. This service uses rules that can be configured to evaluate AWS resources against desired configurations.

For example, there are AWS Config rules that check whether your Amazon S3 buckets have logging enabled or your IAM users have an MFA device enabled. AWS Config rules use AWS Lambda functions to perform the compliance evaluations, and the Lambda functions return the compliance status of the evaluated resources as compliant or noncompliant. Noncompliant resources are remediated using the remediation action associated with the AWS Config rule. With the Auto Remediation feature of AWS Config rules, the remediation action can be executed automatically when a resource is found to be noncompliant.

Until now, remediation actions had to be executed manually for each noncompliant resource. This is not always feasible if you have many noncompliant resources for which you want to execute remediation actions. It can also pose risks if these resources remain without remediation for an extended amount of time.

In this post, you learn how to use the new AWS Config Auto Remediation feature on a noncompliant S3 bucket to ensure it is remediated automatically.

Overview

The AWS Config Auto Remediation feature automatically remediates non-compliant resources evaluated by AWS Config rules. You can associate remediation actions with AWS Config rules and choose to execute them automatically to address non-compliant resources without manual intervention.

You can:

  • Choose the remediation action you want to associate from a prepopulated list.
  • Create your own custom remediation actions using AWS Systems Manager Automation documents.

If a resource is still non-compliant after auto remediation, you can set the rule to try auto remediation again.

Solution

This post describes how to use the AWS Config Auto Remediation feature to auto remediate any non-compliant S3 buckets using the following AWS Config rules:

  • s3-bucket-logging-enabled
  • s3-bucket-server-side-encryption-enabled
  • s3-bucket-public-write-prohibited
  • s3-bucket-public-read-prohibited

These AWS Config rules act as controls that detect and remediate noncompliant S3 configurations.

Prerequisites

Make sure you have the following prerequisites before following the solution in this post:

  • You must have AWS Config enabled in your AWS account. For more information, see Getting Started with AWS Config.
  • The AutomationAssumeRole in the remediation action parameters should be assumable by SSM. The user must have pass-role permissions for that role when they create the remediation action in AWS Config, and that role must have whatever permissions the SSM document requires. For example, it may need “s3:PutEncryptionConfiguration” or something else specific to the API call that SSM uses.
  • (Optional): While setting up remediation action, if you want to pass the resource ID of non-compliant resources to the remediation action, choose Resource ID parameter. If selected, at runtime that parameter is substituted with the ID of the resource to be remediated. Each parameter has either a static value or a dynamic value. If you do not choose a specific resource ID parameter from the drop-down list, you can enter values for each key. If you choose a resource ID parameter from the drop-down list, you can enter values for all the other keys except the selected resource ID parameter.

Steps

Use the following steps to set up Auto Remediation for each of the four AWS Config rules.

To set up Auto Remediation for s3-bucket-logging-enabled

The “s3-bucket-logging-enabled” AWS Config rule checks whether logging is enabled for your S3 buckets. Use the following steps to auto-remediate an S3 bucket whose logging is not enabled:

  1. Sign in to the AWS Management Console and open the AWS Config console.
  2. On the left pane, choose Rules
  3. On the Rules page, under Rule name, select s3-bucket-logging-enabled and then choose Add rule to add it to the rule list. (If the rule already exists, select it from the rule list and then choose Edit.) In this example, one bucket named “tests3loggingnotenabled” shows as a noncompliant resource under the “s3-bucket-logging-enabled” rule.
  4. Return to the Rules page and choose Edit.
  5. In the Choose remediation action section, from the Remediation action list, select AWS-ConfigureS3BucketLogging. (AWS-ConfigureS3BucketLogging is an AWS SSM Automation document that enables logging on an S3 bucket using SSM Automation.)
  6. In the Auto remediation section, select Yes to automatically remediate non-compliant resources.
  7. In the Parameters section, enter the values for the required parameters such as AutomationAssumeRole, Grantee details required to execute the remediation action, and the Target bucket to store logs.
  8. Choose Save. The “s3-bucket-logging-enabled” AWS Config rule can now auto-remediate noncompliant resources, and a confirmation that it executed the remediation action shows in the Action status column. S3 bucket server access logging is now enabled automatically using the AWS Config Auto Remediation feature.
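To verify the result from the AWS CLI, you can check the rule's compliance and the remediation execution status; this is a verification sketch only:

aws configservice describe-compliance-by-config-rule --config-rule-names s3-bucket-logging-enabled

aws configservice describe-remediation-execution-status --config-rule-name s3-bucket-logging-enabled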

To set up Auto Remediation for s3-bucket-server-side-encryption-enabled

The “s3-bucket-server-side-encryption-enabled” AWS Config rule checks that your S3 bucket either has S3 default encryption enabled or that the S3 bucket policy explicitly denies put-object requests without server side encryption.

  1. Sign in to the AWS Management Console and open the AWS Config console 
  2. On the left pane, choose Rules
  3. On the Rules page, under Rule name, select s3-bucket-server-side-encryption-enabled and then choose Add rule to add it to the rule list. (If the rule already exists, select it from the rule list and then choose Edit.) In this example, one S3 bucket named “s3notencrypted” shows as a noncompliant resource under the “s3-bucket-server-side-encryption-enabled” rule.
  4. Return to the Rules page and choose Edit.
  5. In the Choose remediation action section, from the Remediation action list, select AWS-EnableS3BucketEncryption. (AWS-EnableS3BucketEncryption is an AWS SSM Automation document that enables server-side encryption on an S3 bucket using SSM Automation. )
  6. In the Auto remediation section, select Yes to automatically remediate non-compliant resources.
  7. In the Parameters section, enter the values for AutomationAssumeRole and the SSE algorithm required to execute the remediation action.
  8. Choose Save. The “s3-bucket-server-side-encryption-enabled” AWS Config rule can now auto-remediate noncompliant resources, and a confirmation that it executed the remediation action shows in the Action status column. S3 bucket server-side encryption is now enabled automatically using the AWS Config Auto Remediation feature.
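The same remediation configuration can be created with the AWS CLI. In the sketch below, the role ARN is a placeholder, the rule name must match the name shown in your AWS Config console, and the parameter names (BucketName, SSEAlgorithm, AutomationAssumeRole) should be confirmed against the current AWS-EnableS3BucketEncryption document before use:

aws configservice put-remediation-configurations --remediation-configurations '[{
    "ConfigRuleName": "s3-bucket-server-side-encryption-enabled",
    "TargetType": "SSM_DOCUMENT",
    "TargetId": "AWS-EnableS3BucketEncryption",
    "Automatic": true,
    "MaximumAutomaticAttempts": 5,
    "RetryAttemptSeconds": 60,
    "Parameters": {
        "AutomationAssumeRole": {"StaticValue": {"Values": ["arn:aws:iam::111122223333:role/S3RemediationRole"]}},
        "SSEAlgorithm": {"StaticValue": {"Values": ["AES256"]}},
        "BucketName": {"ResourceValue": {"Value": "RESOURCE_ID"}}
    }
}]'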

To set up auto remediation for s3-bucket-public-read-prohibited and s3-bucket-public-write-prohibited

An Amazon S3 bucket can be protected from public read and write access using the AWS Config rules “s3-bucket-public-read-prohibited” and “s3-bucket-public-write-prohibited”, respectively. Enable these AWS Config rules as described in the two scenarios above, and enable the auto remediation feature with the existing SSM document remediation action “AWS-DisableS3BucketPublicReadWrite”. This remediation action disables an S3 bucket’s public read and write access by applying a private ACL.

Conclusion

In this post, you saw how to auto-remediate non-compliant S3 resources using the AWS Config auto remediation feature for AWS Config rules. You can also use this feature to maintain compliance of other AWS resources using existing SSM documents or custom SSM documents. For more details, see Remediating Non-compliant AWS Resources by AWS Config Rules.

For pricing details on AWS Config rules, visit the AWS Config pricing page.

 

About the Author

Harshitha Putta is an Associate Consultant with AWS Professional Services in Seattle, WA. She is passionate about building innovative solutions using AWS services to help customers achieve their business objectives. She enjoys spending time with family and friends, playing board games and hiking.

from AWS Management Tools Blog

Replacing SSH access to reduce management and security overhead with AWS Systems Manager


Cesar Soares, DevOps and cloud infrastructure manager, VR Beneficios

In many corporate enterprises, interactive shell access to cloud or datacenter environments is a necessity. It must be supported in a secure, auditable manner, often programmatic or via scripting, and with strong access controls. As discussed in a previous post by Jeff Barr, AWS Systems Manager Session Manager is just the tool to meet these business requirements.

This post describes how VR Beneficios, a large, Brazil-based benefits company with over 40 years of industry experience, replaced all its SSH access with the secure interactive shell access provided by Session Manager.

Overview
Interactive shell access to traditional server-based resources often comes with high management and security overhead. The reason is that user accounts, passwords, SSH keys, and inbound network ports need to be maintained to provide this level of access. Often, there is also the cost of supporting additional infrastructure for bastion hosts, which is a common way of creating a security boundary between less-secure to more-secure resources.

The conversation tends to become more complex when additional security or functional requirements are needed, because this type of functionality is usually not natively supported. These requirements include auditability, access control, single sign-on, or, as is often seen with AWS customers, programmatic access to the resources to leverage scripting or automation.

Solution
After evaluating various options, VR Beneficios decided to use Session Manager because it solved the business problems outlined earlier, including the seamless and programmatic access. The latter reason was particularly important, because the company needs to manage resources in multiple AWS accounts and because it reduces the probability of human error.

There are additional security benefits with AWS Systems Manager as well.

VR Beneficios is also eliminating network management overhead, which includes eliminating the need to open inbound network ports. In traditional architectures, these ports must be maintained at multiple layers, including network firewalls or, in some cases, direct public access for systems connected to the Internet. Session Manager removes the need for managed instances to be publicly accessible. Instances managed with Session Manager can also make use of AWS PrivateLink, which restricts traffic between EC2 managed instances and AWS Systems Manager to the Amazon network.

Additional benefits that VR Beneficios plans to use in the future include limiting managed instance access via resource tags and instance IDs, the “Run As” capability to restrict the level of access users can assume when using Session Manager, and also using SCP-based file transfers, as needed.

Architecture
VR Beneficios uses the workflow shown in the following diagram to manage on-premises and EC2 instances with Session Manager. It consists of multiple AWS accounts to manage the development, testing, and production environments, along with an on-premises environment.

There is a centralized management account where all the administrator accounts live. This configuration allows for all the management users to be in a single account, along with the ability to use customized IAM policies to include access to S3 and CloudWatch Logs.

 

VR Beneficios completed the rollout of Systems Manager to manage both AWS Cloud and on-premises resources, including hundreds of resources managed by Systems Manager. As part of the rollout, all VPCs were configured with SSM endpoints to ensure that all traffic remains local within the AWS infrastructure.
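For reference, creating the interface endpoints that Session Manager relies on looks roughly like the following; the VPC, subnet, and security group IDs and the Region are placeholders, and the ssmmessages and ec2messages endpoints are created the same way:

aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Interface \
    --service-name com.amazonaws.sa-east-1.ssm \
    --subnet-ids subnet-0123456789abcdef0 \
    --security-group-ids sg-0123456789abcdef0 \
    --private-dns-enabled

# Repeat for com.amazonaws.sa-east-1.ssmmessages and com.amazonaws.sa-east-1.ec2messages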

Break/fix scenarios
VR Beneficios uses interactive shell access mainly in break/fix scenarios when DevOps automation is not an option. The following scenario shows how this would happen:

  1. Go to the AWS CLI and connect to the instances using the Session Manager plugin.
  2. Always make connections from the Management account, using the role ARN of the other accounts to switch. Examples of access:
    – aws ssm start-session --target "i-XXXXXXXXXXXX" --profile vrdev
    – aws ssm start-session --target "i-XXXXXXXXXXXX" --profile vrtest
    – aws ssm start-session --target "i-XXXXXXXXXXXX" --profile vrprd
    – aws ssm start-session --target "i-XXXXXXXXXXXX" --profile vrbeneficios
  3. Because the audit capability of Session Manager is a huge benefit, there are also periodic reviews of the activity captured via CloudWatch Logs and the S3 bucket configured for command output.
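The --profile flags above rely on named profiles that assume a role in each target account. A sketch of what the ~/.aws/config entries might look like follows; the account ID, role name, and Region are hypothetical:

cat >> ~/.aws/config <<'EOF'
[profile management]
region = sa-east-1

[profile vrdev]
role_arn = arn:aws:iam::111111111111:role/SessionManagerAccess
source_profile = management
region = sa-east-1
EOF

# The vrtest, vrprd, and vrbeneficios profiles follow the same pattern with their own account IDs.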

Remote infrastructure management
Although this post focuses on the company’s use of Session Manager, VR Beneficios also uses Systems Manager to manage its infrastructure remotely. This includes using Run Command to deploy the CloudWatch agent across all environments, keep all AWS-provided agents up to date, and gather inventory data.

Summary
The solution described in this post using Session Manager is just one of the many ways that VR Beneficios leverages AWS management and governance services. Using these tools, the company maintains control over cost, compliance, and security without impacting its pace of innovation and operational efficiency.

About the Author

Cesar Soares is a DevOps and cloud infrastructure manager at VR Beneficios, with over 17 years of experience in the technology field. He actively works with AWS and AWS Premier Partners to continue pushing the pace of innovation. At the same time, he seeks increasing operational efficiency and security across multiple environments, including offshore and nearshore operations, and the AWS platform. Cesar is active in the technology community and can be reached at https://www.linkedin.com/in/alexandrecesarsoares.

from AWS Management Tools Blog

Maximizing features and functionality in AWS CloudTrail


Thanks to the following AWS CloudTrail experts for their work on this post:

  • Avneesh Singh, Senior Product Manager, AWS CloudTrail
  • Jeff McRae, Software Development Manager, AWS CloudTrail
  • Keith Robertson, Software Development Manager, AWS CloudTrail
  • Susan Ferrell, Senior Technical Writer, AWS

Are you taking advantage of all the features and functionality that AWS CloudTrail offers? Here are some best practices, tips, and tricks for working with CloudTrail to help you get the most out of using it.

This service is enabled for you when you create your AWS account, and it’s easy to set up a trail for continuous logging and history. This post answers some frequently asked questions that people ask about CloudTrail.

What is CloudTrail?

CloudTrail is an AWS service that enables governance, compliance, and operational and risk auditing of your AWS account. Use the information recorded in CloudTrail logs and in the CloudTrail console to review information about actions taken by a user, role, or AWS service. Each action is recorded as an event in CloudTrail, including actions taken in the AWS Management Console and with the AWS CLI, AWS SDKs, and APIs.

How does CloudTrail work across Regions?

Keep AWS Regions in mind when working with CloudTrail. CloudTrail always logs events in the AWS Region where they occur, unless they are global service events.

If you sign in to the console to perform an action, the sign-in event is a global service event, and is logged to any multi-region trail in the US East (N.Virginia) Region, or to a single-region trail in any Region that contains global service events. But if you create a trail that only logs events in US East (Ohio), without global service events, a sign-in event would not be logged.

How do I start using CloudTrail?

Create a trail! Although CloudTrail is enabled for you by default in the CloudTrail console, the Event history only covers the most recent 90 days of management event activity. Anything that happened before then is no longer available—unless you create a trail to keep an ongoing record of events.

When creating your first trail, we recommend creating one that logs management events for all Regions. Here’s why:

  • Simplicity. A single trail that logs management events in all Regions is easier to maintain over time. For example, if you create a trail named LogsAllManagementEventsInAllRegions, it’s obvious what events that trail logs, isn’t it? No matter how your usage changes or how AWS changes, the scope remains the same. Over time, as new AWS Regions are added, and you work in more than one AWS Region, that trail still does what it says: logs all management events in every AWS Region. You have a complete record of all management events that CloudTrail logs.
  • No surprises. Global service events are included in your logs, along with all other management events. If you create a trail in a single AWS Region, you only log events in that Region—and global service events may not necessarily be logged in that Region.
  • You know what you’re paying. If this is your first trail, and you log all management events in all AWS Regions, it’s free. Then, create additional trails to meet your business needs. For example, you can add a second trail for management events that copies all management events to a separate S3 bucket for your security team to analyze, and you are charged for the second trail. If you add a trail to log data events for Amazon S3 buckets or AWS Lambda functions, even if it’s the first trail capturing data events, you are charged for it, because a trail that captures data events always incurs charges. For more information about CloudTrail costs, see AWS CloudTrail Pricing.
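Creating that first trail from the AWS CLI takes two calls, as sketched below; the bucket name is a placeholder, and the bucket must already have a policy that allows CloudTrail to write to it:

aws cloudtrail create-trail \
    --name LogsAllManagementEventsInAllRegions \
    --s3-bucket-name my-cloudtrail-logs-bucket \
    --is-multi-region-trail \
    --include-global-service-events

aws cloudtrail start-logging --name LogsAllManagementEventsInAllRegions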

How do I manage costs for CloudTrail?

That’s a common request. A good place to start is reviewing how your trails are configured against the AWS CloudTrail Pricing page; the practices in the rest of this post can also help you keep costs predictable.

I created a trail. What should I do next?

Consider two important things: who has access to your log files, and how to get the most out of those log files.

Understanding log files and what’s in them helps you become familiar with your AWS account activity and spot unusual patterns.

Over time, you’ll find there are many log files with a lot of data. CloudTrail makes a significant amount of data available to you. To get the most out of the data collected by CloudTrail, and to make that data actionable, you might want to leverage the query power of Amazon Athena, an interactive, serverless query service that makes it easy for anyone with SQL skills to quickly analyze large-scale datasets. You could also set up Amazon CloudWatch to monitor your logs and notify you when specific activities occur. For more information, see AWS Service Integrations with CloudTrail Logs.

Is there a better way to log events for several AWS accounts instead of creating a trail for each one?

Yes, there is! To manage multiple AWS accounts, you can create an organization in AWS Organizations. Then create an organization trail, which is a single trail configuration that is replicated to all member accounts automatically. It logs events for all accounts in an organization, so you can log and analyze activity for all organization accounts.

Only the master account for an organization can create or modify an organization trail. This makes sure that the organization trail captures all log information as configured for that organization. An organization trail’s configuration cannot be modified, enabled, or disabled by member accounts. For more information, see Creating a Trail for an Organization.
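From the master account, an organization trail can be created the same way by adding the --is-organization-trail flag. The sketch below assumes that trusted access for CloudTrail is already enabled in AWS Organizations and uses placeholder names:

aws cloudtrail create-trail \
    --name OrgManagementEventsTrail \
    --s3-bucket-name my-org-cloudtrail-logs-bucket \
    --is-multi-region-trail \
    --is-organization-trail

aws cloudtrail start-logging --name OrgManagementEventsTrail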

Why can’t I find a specific event that I’m looking for?

While the log files from multi-region trails contain events from all Regions, the events in Event history are specific to the AWS Region where they’re logged.

If you don’t see events that you expect to find, double-check which AWS Region is selected in the Region selector. If necessary, change it to the AWS Region where the event occurred.

Also, keep in mind that the console only shows you events that occurred up to 90 days ago in your AWS account. If you’re looking for an older event, you won’t see it. That’s one reason it’s so important to have a trail that logs events to an S3 bucket; that data stays there until you decide not to keep it.

What are some best practices for working with CloudTrail?

Be familiar with your CloudTrail logs. Having a general familiarity and understanding of your CloudTrail log file data and structure helps you spot and troubleshoot any issues that might arise.

Here are some things to avoid doing under most circumstances:

Avoid creating trails that log events for a single AWS Region

Although CloudTrail supports this, we recommend against creating this kind of trail for several reasons.

Some AWS services appear as “global” (the action can be called locally, but is run in another AWS Region), but they do not log global service events to CloudTrail. A trail that logs events in all AWS Regions shows data about all events logged for your AWS account, regardless of the AWS Region in which they occur.

For example, Organizations is a global service, but it only logs events in the US East (N. Virginia) Region. If you create a trail that only logs events in US East (Ohio), you do not see events for this service in the log files delivered to your S3 bucket.

Also, a trail that logs events in a single AWS Region can be confusing when it comes to cost management. Only the first instance of a logged event is free. If you have a trail that logs events in a single AWS Region, and you create a multi-region trail, it incurs costs for the second and any subsequent trails. For more information, see AWS CloudTrail Pricing.

Avoid using the create-subscription and update-subscription commands to create and manage your trails

We recommend that you do not use the create-subscription or update-subscription commands, because these commands are on a deprecation path, and might be removed in a future release of the CLI. Instead, use the create-trail and update-trail commands. If you’re programmatically creating trails, use an AWS CloudFormation template.

What else should I know?

Talk to us! We always want to hear from you. Tell us what you think about CloudTrail, and let us know which features you want to see or what content you’d like to have.

from AWS Management Tools Blog

Auto-populate instance details by integrating AWS Config with your ServiceNow CMDB


Introduction

Many AWS customers either integrate ServiceNow into their existing AWS services or set up both ServiceNow and AWS services for simultaneous use. One challenge in this use case is the need to update your configuration management database (CMDB) when a new instance is spun up in AWS.

This post demonstrates how to integrate AWS Config and ServiceNow so that when a new Amazon EC2 instance is created, Amazon SNS sends a notification that creates a server record in the CMDB. You then test the setup by creating an EC2 instance from a sample AWS CloudFormation stack.

Overview

Use AWS CloudFormation to provision infrastructure resources automatically from a template, and use AWS Config to monitor those resources. SNS provides topics for pushing messages about these resources. AWS Config provides the information to ServiceNow, enabling it to create a CMDB record automatically.

This is done in five stages:

  1. Configure ServiceNow.
  2. Create an SNS topic and subscription.
  3. Confirm the SNS subscription in ServiceNow.
  4. Create a handler for the subscription in ServiceNow.
  5. Configure AWS Config.

Configure ServiceNow

Use a free ServiceNow developer instance to do the work. If you already have one, feel free to use your own.

  1. Log in to the ServiceNow Developer page, and request a developer instance.
  2. Log in to the developer instance as an administrator. Make sure to remember your login credentials. These are used later when configuring SNS topic subscription URLs.
  3. Navigate to System Applications. Choose Studio, then Import From Source Control.
  4. On the Import Application screen, enter the following URL:
    • https://github.com/byukich/x_snc_aws_sns.
  5. Leave both the User name and Password fields empty, and then choose Import.
  6. Close the Studio browser tab.
  7. Refresh your ServiceNow browser tab and navigate to SNS. Notice in the left pane that there are now three new navigation links.

Note: in the above image, “AWS SNS” refers to the app name, not to Amazon SNS.

Create an SNS topic and subscription

Perform the following procedures to create an SNS topic and subscription:

  1. Log in to the SNS console, and select the US-East (N. Virginia) Region.
  2. In the left pane, choose Topics, Create New Topic.
  3. Give the topic a name, make the display name ServiceNow, and choose Create Topic.
  4. Select the Amazon Resource Name (ARN) link for the topic that you just created.
  5. Choose Create Subscription.
  6. Choose HTTPS protocol.
  7. For Endpoint, enter the developer instance URL, embedding the administrator credentials that you received when you acquired the free ServiceNow developer instance. The endpoint is rendered like the following:
    • https://admin:<ServiceNow admin password>@<your developer instance>.service-now.com/api/x_snc_aws_sns/aws_sns
  8. Choose Create Subscription.
    Your new subscription is pending confirmation.
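The topic and subscription can also be created with the AWS CLI, as sketched below; the topic name is a placeholder, and you must substitute your own ServiceNow password and instance name:

TOPIC_ARN=$(aws sns create-topic --name ServiceNowConfigTopic --query TopicArn --output text)
aws sns set-topic-attributes --topic-arn $TOPIC_ARN \
    --attribute-name DisplayName --attribute-value ServiceNow

aws sns subscribe --topic-arn $TOPIC_ARN --protocol https \
    --notification-endpoint "https://admin:<ServiceNow admin password>@<your developer instance>.service-now.com/api/x_snc_aws_sns/aws_sns"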

Confirm the SNS subscription in ServiceNow

Before allowing SNS to send messages to ServiceNow, confirm the subscription on ServiceNow. At this point, AWS already sent a handshake request, which is awaiting confirmation inside your ServiceNow instance.

  1. On your ServiceNow browser tab, navigate to SNS, then choose Subscriptions. Notice that AWS created a new record.
  2. Open the subscription by choosing ServiceNow, then choose Confirm Subscription. Stay on this page to create a handler in the next section.

Create a handler for the subscription in ServiceNow

Now, set up ServiceNow to be able to absorb received messages from AWS. Create a handler that’s able to create a new record in the CMDB Server table (cmdb_ci_server) whenever a new EC2 instance is created from a sample AWS CloudFormation stack.

To set up the handler, follow these steps:

  1. At the bottom of the Subscriptions form, under Handlers, choose New, and then provide a name for the handler, such as Create CMDB Server from EC2.
  2. Enter the following code inside the function:
    // Create a new record in the CMDB Server table from the AWS Config message
    var webserver = new GlideRecord("cmdb_ci_server");
    webserver.initialize();
    // Use the instance launch time, monitoring state, type, and ID from the configuration item
    webserver.name = "AWS WebServer " + message.configurationItem.configuration.launchTime;
    webserver.short_description = "Monitoring is " + message.configurationItem.configuration.monitoring.state + " and Instance Type is " + message.configurationItem.configuration.instanceType;
    webserver.asset_tag = message.configurationItem.configuration.instanceId;
    webserver.insert();
  3. Choose Submit.

Configure AWS Config

  1. In the AWS Config console, select the US-East (N. Virginia) Region.
  2. In the left navigation pane, choose Settings. For Recording, make sure that the value is On.
  3. Under Resources Type to Record, for All Resources, select both check boxes:
    • Record all resources supported in this region
    • Include global resources (including IAM resources)
  4. Choose Choose a topic from your account.
  5. Select the topic that you created earlier.
  6. Choose Save.
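
If you manage AWS Config with scripts instead of the console, the following boto3 sketch shows an equivalent setup. It assumes a recorder role, an S3 bucket for configuration history, and the SNS topic from the previous section already exist; all names and ARNs below are hypothetical:

import boto3

config = boto3.client("config", region_name="us-east-1")

# Record all supported resource types, including global resources such as IAM
config.put_configuration_recorder(
    ConfigurationRecorder={
        "name": "default",
        "roleARN": "arn:aws:iam::123456789012:role/aws-config-role",   # hypothetical role
        "recordingGroup": {"allSupported": True, "includeGlobalResourceTypes": True},
    }
)

# Deliver configuration changes to S3 and notify the SNS topic that ServiceNow subscribes to
config.put_delivery_channel(
    DeliveryChannel={
        "name": "default",
        "s3BucketName": "my-config-history-bucket",                    # hypothetical bucket
        "snsTopicARN": "arn:aws:sns:us-east-1:123456789012:servicenow-config-updates",
    }
)

config.start_configuration_recorder(ConfigurationRecorderName="default")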

Testing the integration

You can test this integration by creating a stack from the AWS CloudFormation sample templates, which triggers recording in AWS Config. AWS Config then publishes SNS notifications, which create a configuration item in the ServiceNow CMDB.

  1. In the AWS CloudFormation console, choose Create stack.
  2. Select a sample template.
  3. Under Specify Details, enter the following information:

    Note: The above image shows sample information.

  4. Choose Next.
  5. On the Options page, provide tags if needed, and then choose Next.
  6. At the bottom of the review page, choose Create. Wait for the stack creation to complete.
  7. Navigate to ServiceNow, then to the Server list, to check whether a server record was created.

If you see a new server entry, you successfully integrated AWS Config with the ServiceNow CMDB.

Conclusion

This post shows one way to integrate AWS Config with your ServiceNow CMDB. When an instance is created in AWS using AWS CloudFormation, the details are captured as configuration items in the CMDB Server table.

With this process, you can use handlers in ServiceNow to update the record with instance details. You can customize the handler to scale this integration and capture updated or additional instance details that you may want.

You can use this mechanism as a trigger to send notifications and perform actions such as discovery, workflow, and more. By making a small change (for example, adding a tag) across a list of resource types, you can also use this solution to bring existing resources into the CMDB without a separate discovery pass: the change triggers recording in AWS Config, which then creates those resources in the CMDB.

Additionally, we have AWS Service Catalog Connector for ServiceNow:

How to install and configure the AWS Service Catalog Connector for ServiceNow

How to enable self-service Amazon WorkSpaces by using AWS Service Catalog Connector for ServiceNow

About the Author

Rahul Goyal is a New York-based Senior Consultant for AWS Professional Services in Global Specialty Practice. He has been working in cloud technologies for more than a decade. Rahul has been leading Operations Integration engagements to help various AWS customers be production ready with their cloud operations. When he is not with a customer, he takes his Panigale to track days in the summer and enjoys skiing in the winter.

 

from AWS Management Tools Blog

Enhancing configuration management at Verizon using AWS Systems Manager

Enhancing configuration management at Verizon using AWS Systems Manager

In large enterprise organizations, it’s challenging to maintain standardization across environments. This is especially true if these environments are provisioned in a self-service manner—and even more so when new users access these provisioning services.

In this post, I describe how we at Verizon found a balance operating between agility, governance, and standardization for our AWS resources. I walk you through one of the solutions that we use to enable new users to provision AWS resources and configure application software. The solution uses ServiceNow and the following AWS services:

  • Systems Manager
  • AWS Service Catalog
  • AWS CloudFormation

Overview
Verizon seeks to provide a standardized AWS resource-provisioning service to new users. We needed a solution that incorporates auditing best practices and applies post-deployment configuration management to any newly provisioned environment. These best practices must work within a fully auditable self-service model and require that:

  • All resource-provisioning service requests follow an appropriate lifecycle.
  • Configuration management is defined and automatically applied as needed.

Solution
We wanted to provide a better user experience for our new users and help them provision resources in compliance with Verizon’s Governance and Security practices.

Shopping cart experience using AWS Service Catalog and ServiceNow
To accomplish these requirements, we use AWS Service Catalog to manage all our blueprint AWS CloudFormation templates (after being cleared through CFN-Nag). We then publish them as products in ServiceNow using the AWS Service Catalog Connector for ServiceNow (for example, EC2 CloudFormation as a product).

End users get a shopping cart-like experience whenever they provision resources in their account. This process helps us keep provisioned resources consistent across all accounts and meet our compliance requirements.

The products or AWS CloudFormation templates are published to AWS Service Catalog using an automated Jenkins pipeline triggered from a Git repository, as shown in the following diagram.
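
The exact pipeline stages are specific to our environment, but the publishing step boils down to a Service Catalog API call. The following boto3 sketch shows one way a pipeline job could register a new template version against an existing product; the product ID and template URL are hypothetical:

import uuid
import boto3

servicecatalog = boto3.client("servicecatalog")

# Register the template that passed CFN-Nag as a new version of the product
response = servicecatalog.create_provisioning_artifact(
    ProductId="prod-xxxxxxxxxxxxx",                     # hypothetical product ID
    Parameters={
        "Name": "v2.0",
        "Description": "EC2 blueprint published by the pipeline",
        "Info": {"LoadTemplateFromURL": "https://s3.amazonaws.com/my-template-bucket/ec2.yaml"},
        "Type": "CLOUD_FORMATION_TEMPLATE",
    },
    IdempotencyToken=str(uuid.uuid4()),
)

print(response["ProvisioningArtifactDetail"]["Id"])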

 

 

All the products, or AWS CloudFormation templates, are retrieved from AWS Service Catalog using the AWS Service Catalog Connector for ServiceNow and are displayed as products. Users see the following list of compliant products in the Service Portal UI on ServiceNow.

 

 

When the user selects a product and provisions it in their account, ServiceNow makes backend calls to Verizon applications to perform compliance checks. It then calls AWS Service Catalog to provision the product. After provisioning succeeds, the user sees the list of provisioned products. The user can also provision the product through the API, as sketched below.
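
For reference, provisioning through the API maps to a single AWS Service Catalog call. The following is a minimal boto3 sketch with hypothetical product, artifact, and parameter values:

import uuid
import boto3

servicecatalog = boto3.client("servicecatalog")

# Provision the EC2 product on behalf of the user (IDs and parameters are hypothetical)
servicecatalog.provision_product(
    ProductId="prod-xxxxxxxxxxxxx",
    ProvisioningArtifactId="pa-xxxxxxxxxxxxx",
    ProvisionedProductName="my-ec2-instance",
    ProvisioningParameters=[
        {"Key": "InstanceType", "Value": "t3.micro"},
    ],
    ProvisionToken=str(uuid.uuid4()),
)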

 

 

Configuration management using Systems Manager
After the product is provisioned, users need the ability to configure their instances in a secure way using native AWS services. As shown earlier, a user provisions the EC2 product through AWS Service Catalog and then has an EC2 instance on which to configure their application.

At Verizon, we use Ansible for post-provisioning configuration management of EC2 instances. After evaluating several options, we decided that Systems Manager was a perfect fit as an AWS-native configuration-management solution. We leveraged the Systems Manager agents already baked into our AMIs. For example, we use Systems Manager Run Command with a run-Ansible document to execute Ansible playbooks and a run-shell-script document to run bash commands. For more information, see Running Ansible Playbooks using EC2 Systems Manager Run Command and State Manager.
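
To illustrate the mechanics, the following boto3 sketch sends a Run Command invocation with the AWS-RunShellScript document and reads back the output. The instance ID and commands are hypothetical, and the same pattern applies to an Ansible run document:

import time
import boto3

ssm = boto3.client("ssm")
instance_id = "i-0123456789abcdef0"        # hypothetical instance ID

# Run a simple shell command on the provisioned instance
response = ssm.send_command(
    InstanceIds=[instance_id],
    DocumentName="AWS-RunShellScript",
    Parameters={"commands": ["nginx -v"]},
    Comment="Post-provisioning configuration check",
)
command_id = response["Command"]["CommandId"]

# Wait briefly, then fetch the invocation output (a production workflow would poll)
time.sleep(5)
output = ssm.get_command_invocation(CommandId=command_id, InstanceId=instance_id)
print(output["Status"], output["StandardOutputContent"])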

The flow
In the previous provisioning section, you saw how users provision resources using AWS CloudFormation. ServiceNow maintains information on what types of resources users provision. For example, if a product contains an EC2 resource, you can expose the Systems Manager Run Command action for the deployed EC2 product in the ServiceNow UI, as shown in the following screenshot.

 

 

When a user selects the Systems Manager Run Command action, they can include an inline shell script or an Ansible playbook and submit it as part of configuration management, as shown in the following sample playbook:

---
- hosts: local
  tasks:
    - name: Install Nginx
      apt: pkg=nginx state=installed update_cache=true
      notify:
        - Start Nginx
  handlers:
    - name: Start Nginx
      service: name=nginx state=started

 

ServiceNow stores the information in its database for audit purposes before it makes a Systems Manager API call to run the command on the selected EC2 instance. ServiceNow then fetches the output using the command ID from the previous call and shows it on the UI, as shown in the following screenshot.

 

 

We call this a post-provisioning workflow in ServiceNow, because it lets users do configuration actions after the provisioning is successful.

Summary
This solution is just one of many ways that Verizon helps users provision Verizon-compliant resources and deploy their applications in the AWS Cloud. We want to empower new cloud users to provision resources faster, with fewer clicks, but also in a secure manner that follows audit and compliance requirements.

About the Author

Krishna Gadiraju (GK) is an architect for the Cloud Governance and Cloud User Experience product teams at Verizon. He actively assists development teams with the migration of on-premises applications to the cloud while ensuring that the Verizon AWS accounts meet all security and other compliance requirements. GK has AWS DevOps Professional and GCP Associate certifications. He is an active presenter at cloud conferences and can be reached at https://www.linkedin.com/in/chaitanya-gk/.

from AWS Management Tools Blog

Creating and hydrating self-service data lakes with AWS Service Catalog

Creating and hydrating self-service data lakes with AWS Service Catalog

Organizations are evolving IT processes to include data lakes and supporting services. Your organization might start by looking to extend the self-service portals you built using AWS Service Catalog to create data lakes as well. A self-service portal lets users vend required AWS resources within the guardrails defined by your cloud center of excellence (CCOE) team. This removes the heavy lifting from the CCOE team and lets users build their own environments. With AWS Service Catalog, you can also define the constraints on which AWS resources your users can and can’t deploy.

For example, with an appropriately configured self-service portal that supports creation and hydration of data lakes for structured relational data, your users could do the following:

  • Vend an Amazon RDS database that they can launch only in private subnets.
  • Create an Amazon S3 bucket with versioning and encryption enabled.
  • Create an AWS DMS task that can hydrate only the chosen S3 bucket.
  • Launch an AWS Glue crawler that populates an AWS Glue Data Catalog for data from that chosen S3 bucket.

With adequately configured constraints and templates, you can be confident that your users follow the best practices around these services (that is, private subnets, encrypted buckets, and specific security groups).

In this post, I show you how to use AWS Service Catalog to create an IT self-service portal that lets you create a data lake and populate your Data Catalog.

Data lake basics

A data lake is a central repository of your structured and unstructured data that you use to store information in a separate environment from your active or compute space. A data lake enables diverse query capabilities, data science use cases, and discovery of new information models. For more information on data lakes, see Data Lakes and Analytics on AWS. Amazon S3 is an excellent choice for building your data lake on AWS because it offers multiple integrations. For more information, see Amazon S3 as the Data Lake Storage Platform.

Before you use your data lake for analytics and machine learning, you must first hydrate it—fill it with data—and create a Data Catalog containing metadata. AWS DMS works well for hydrating your data lake with structured data from your database. With that in place, you can use AWS Glue to automatically discover and categorize data, making it immediately searchable and queryable across data sources such as Glue ETL, Amazon Athena, Amazon Redshift Spectrum, and Amazon EMR.

Other AWS services, such as AWS Data Pipeline, can be used in addition to AWS DMS for hydrating a data lake from a structured or unstructured database. This post demonstrates a specific solution that uses AWS DMS.

The manual data lake hydration process

The following diagram shows the typical data lake hydration and cataloging process for databases.

  1. Create a database, which various applications populate with data.
  2. Create an S3 bucket to which you can export a copy of the data.
  3. Create a DMS replication task that migrates the data from your database to your S3 bucket. You can also create an ongoing replication task that captures ongoing changes after you complete your initial migration. This process is called ongoing replication or change data capture (CDC).
  4. Run the DMS replication task.
  5. Create an AWS Glue crawler to crawl your S3 bucket and populate your AWS Glue Data Catalog. AWS Glue can crawl RDS too, for populating your Data Catalog; in this example, I focus on a data lake that uses S3 as its primary data source.
  6. Run the crawler.
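
To make steps 4 through 6 concrete, the following boto3 sketch starts a DMS replication task and then creates and runs an AWS Glue crawler. The task ARN, crawler name, role ARN, and bucket path are hypothetical:

import boto3

dms = boto3.client("dms")
glue = boto3.client("glue")

# Step 4: run the replication task that copies the database to S3
dms.start_replication_task(
    ReplicationTaskArn="arn:aws:dms:us-east-1:123456789012:task:EXAMPLE",   # hypothetical ARN
    StartReplicationTaskType="start-replication",
)

# Steps 5 and 6: create a crawler over the bucket and run it to populate the Data Catalog
glue.create_crawler(
    Name="datalake-crawler",                                                # hypothetical name
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",                  # hypothetical role
    DatabaseName="mydb",
    Targets={"S3Targets": [{"Path": "s3://my-datalake-bucket/data/"}]},     # hypothetical path
)
glue.start_crawler(Name="datalake-crawler")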

For more information, see the AWS DMS and AWS Glue documentation.

The automated, self-service data lake hydration process

Using AWS Service Catalog, you can set up a self-service portal that lets your end users request components from your data lake, along with tools to hydrate it with data and create a Data Catalog.

The diagram below shows the data lake hydration process using a self-service portal.

The automated hydration process using AWS Service Catalog consists of the following:

  1. An Amazon RDS database product. Because the CCOE team controls the CloudFormation template that enables resource vending, your organization can maintain appropriate security measures. You can also tag subnets for specific teams and configure the catalog so that self-service portal users can choose only from an allowed list of subnets. You can codify RDS database best practices (such as Multi-AZ) in your CloudFormation template while leaving decision points such as the size, engine, and number of read replicas up to the user. With self-service actions, you can further extend the RDS product to enable your users to start, stop, restart, and otherwise manage their own RDS database.
  2. An S3 bucket. By controlling the CloudFormation template that creates the S3 bucket, you can enable encryption at the source, as well as versioning, replication, and tags. Along with the S3 bucket, you can also allow your users to vend service-specific IAM roles configured to grant access to the S3 bucket. Your users can then use these roles for tasks such as:
    1. AWS Glue crawler task
    2. DMS replication task
    3. Amazon SageMaker execution role with access only to this bucket
    4. Amazon Elastic Compute Cloud (Amazon EC2) role for Amazon EMR
  3. A DMS replication task, which copies the data from your database into your S3 bucket. Users can then go to the console and start the replication task to hydrate the data lake at will.
  4. An AWS Glue crawler to populate the AWS Glue Data Catalog with metadata of files read from the S3 bucket.

To allow users from specific teams to vend these resources, associate the AWSServiceCatalogEndUserFullAccess managed policy with them. Your users also need IAM permissions to stop or start a crawler and a DMS task.

You can also configure the catalog to use launch constraints, which assume the pre-configured IAM roles you defined and execute your CloudFormation template whenever your users launch a specific product. This gives your users the ability to perform specific tasks, such as creating a DMS task or an S3 bucket, within guardrails you define.
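
As an illustration of how a launch constraint is wired up, the following boto3 sketch associates a pre-configured launch role with a product in a portfolio; the portfolio ID, product ID, and role ARN are hypothetical:

import json
import uuid
import boto3

servicecatalog = boto3.client("servicecatalog")

# Attach a LAUNCH constraint so the product is provisioned with this role rather than the user's own permissions
servicecatalog.create_constraint(
    PortfolioId="port-xxxxxxxxxxxxx",
    ProductId="prod-xxxxxxxxxxxxx",
    Type="LAUNCH",
    Parameters=json.dumps({"RoleArn": "arn:aws:iam::123456789012:role/SCLaunchRole"}),
    IdempotencyToken=str(uuid.uuid4()),
)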

After creating these resources, users can run the DMS task and AWS Glue crawler using the AWS console, finally hydrating and populating the Data Catalog.

You can try the above solution by deploying a sample catalog. The sample catalog solution creates a VPC, subnets, and IAM roles. It sets up a sample catalog with service products such as AWS Glue crawlers, DMS tasks, RDS, S3, and corresponding IAM roles for the AWS Glue crawler and DMS target. It also creates an end user and demonstrates how to allow that user to deploy RDS and DMS tasks using only the subnets created for them. The sample catalog also teaches you to configure launch constraints, so you don’t have to grant additional permissions to users. It contains an S3 product that vends service-specific IAM roles with access restricted to a specific S3 bucket.

Best practices for data lake hydration at scale

By configuring the catalog in this manner, you standardize and automate the following best practices, making them easier to implement at scale:

  • Grant the least privilege possible. IAM users should have only the permissions required to do the tasks they must do.
  • Create resources such as S3 buckets with appropriate read/write permissions, with encryption and versioning enabled.
  • Use a team-specific key for database and DMS replication tasks, and do not spin up either in public subnets.
  • Give team-specific DMS and AWS Glue roles access to only the S3 bucket created for their individual team.
  • Do not enable users to spin up RDS and DMS resources in VPCs or subnets that do not belong to their teams.

With AWS CloudFormation, you can automate the manual work by writing a CloudFormation template. With AWS Service Catalog, you can make templates available to end users like data curators, who might not know all the AWS services in detail. With the self-service portal, your users would vend AWS resources only using the CloudFormation template that you standardize and into which you implement your security best practices.

With a self-service portal created using AWS Service Catalog, you can automate the process and leave decision points like RDS engine type, database size, VPC, and other configurations to your users. This helps you maintain an appropriate level of security while keeping the nuts and bolts of that security automated behind the scenes.

Make sure that you understand how to control the AWS resource you deploy using AWS Service Catalog as well as the general vocabulary before you begin. In this post, you populate a self-service portal using a sample RDS database blueprint from AWS Service Catalog reference blueprints.

How to deploy a sample catalog

To deploy the sample catalog solution discussed earlier, follow these steps.

Prerequisites

To deploy this solution, you need administrator access to the AWS account.

Step 1: Deploy the AWS CloudFormation template

A CloudFormation template handles most of the heavy lifting of sample catalog setup:

  1. Download the sample CloudFormation template to your computer.
  2. Log in to your AWS account using a user account or role that has administrator access.
  3. In the AWS CloudFormation console, create a new stack in the us-east-1 Region.
  4. In the Choose a template section, choose Choose File, select the YAML file that you downloaded earlier, and then choose Next.
  5. Complete the wizard and choose Create.
  6. When the stack status changes to CREATE_COMPLETE, select the stack and choose Outputs. Note the link in the output (SwitchRoleSCEndUser) for switching to the AWS Service Catalog end-user role.

The templates this post provides are samples and not intended for production use. However, you can review the CloudFormation template to understand the infrastructure it creates.

Step 2: View the catalog

Next, you can view the catalog:

  1. In the console, in the left navigation pane, under Admin, choose Portfolio List.
  2. Choose Analytics Team portfolio.
  3. The sample catalog automatically populates the following for you:
    • An RDS database (MySQL) for vending a database instance
    • An S3 bucket and appropriate roles for vending the S3 bucket and IAM roles for the AWS Glue crawler and the DMS task
    • AWS Glue crawler
    • A DMS task

Step 3: Create an RDS database using AWS Service Catalog

For this post, set up an RDS database. To do so:

  1. Switch to my_service_catalog_end_user role by launching the link you noted in the output section during Step 1.
  2. Open this console to see the products available for you to launch as an end-user.
  3. Choose RDS Database (Mysql).
  4. Choose Launch Product. Specify the name as my-db.
  5. Choose v1.0, and choose Next.
  6. On the Parameters page, specify the following parameters and choose Next.
    • DBVPC: Choose the one with SC-Data-lake-portfolio in its name.
    • DBSecurityGroupName: Specify dbsecgrp.
    • DBSubnets: Choose private subnet 1 and private subnet 2.
    • DBSubnetGroupName: Specify dbsubgrp.
    • DBInputCIDR: Specify CIDR of the VPC. If you did not modify defaults in step 1, then this value is 10.0.0.0/16.
    • DBMasterUsername: master.
    • DBMasterUserPassword: Specify a password that is at least 12 characters long. The password must include an uppercase letter, a lowercase letter, a special character, and a number. For example, dAtaLakeWorkshop123_.
    • Leave the remaining parameters as they are.
  7. On the Tag options page, choose Next (AWS Service Catalog automatically generates a mandatory tag here).
  8. On the Notifications page, choose Next.
  9. On the Review page, choose Launch.
  10. The status changes to Under Change/In progress. After AWS Service Catalog provisions the database, the status changes to Available/Succeeded. You can see the RDS connection string available in the output.

The output contains the MasterJDBCConnectionString connection string, which includes the RDS endpoint (the host name in the following example). You can use this endpoint to connect to the database and create sample data.

Sample output:

jdbc:mysql://XXXX.XXXX.us-east-1.rds.amazonaws.com:3306/mysqldb

This example vends an RDS database, but you can also automate the creation of Amazon DynamoDB tables, Amazon Redshift clusters, Amazon Kinesis Data Firehose delivery streams, and other necessary AWS resources.

Step 4: Load sample data into your database (optional)

For security reasons, I provisioned the database in a private subnet. You must set up a bastion host (unless you have VPN or AWS Direct Connect access) to connect to your database. You can provision an Amazon Linux 2-based bastion host in the public subnet and then log on to it. For more information, see Launch an Amazon EC2 instance.

The my_service_catalog_end_user role does not have access to the Amazon EC2 console. Do this step as an alternate user that has permissions to launch an EC2 instance. After you launch an EC2 instance, connect to it.

Next, execute the following commands to create a simple database and a table with two rows:

  1. Install the MySQL client:
sudo yum install mysql
  2. Connect to the RDS database that you provisioned:

mysql -h <RDS_endpoint_name> -P 3306 -u master -p

  3. Create a database:
create database mydb;
  4. Use the newly created database:
use mydb;
  5. Create a table called client_balance and populate it with two rows:

CREATE TABLE `client_balance` ( `client_id` varchar(36) NOT NULL, `balance_amount` float NOT NULL DEFAULT '0', PRIMARY KEY (`client_id`) );
INSERT INTO `client_balance` VALUES ('123',0),('124',1.0);
COMMIT;
SELECT * FROM client_balance;
EXIT

Step 5: Create an S3 bucket using AWS Service Catalog

Switch to the my_service_catalog_end_user role and follow the process outlined in Step 3 to provision a product from the S3 bucket and appropriate roles. If you are using a new AWS account that does not yet have the dms-vpc-role and dms-cloudwatch-logs-role IAM roles, select N for those parameters; otherwise, leave the default values.

After you provision the product, you can see the output and find details of the S3 bucket and DMS/AWS Glue roles that can access the newly created bucket. Make a note of the output, as you need the S3 bucket information and IAM roles in subsequent steps.

Step 6: Launch a DMS task

Follow a process identical to Step 3 and provision a product from the DMS Task product. When you provision the DMS task, specify the following on the Parameters page:

  • Specify the S3 bucket from the output that you noted earlier.
  • Specify servername as the server endpoint of your RDS database (for example, XXX.us-east-1.rds.amazonaws.com).
  • Specify data as bucketfolder.
  • Specify mydb as the database.
  • Specify S3TargetDMSRole from the output you noted earlier.
  • Specify Private subnet 1 as DBSubnet1.
  • Specify Private subnet 2 as DBSubnet2.
  • Specify the dbUsername and dbPassword you noted after creating the RDS database.

Next, open the tasks section of the DMS console and locate the newly created task; its status should read Ready. Select the task and choose Restart/Resume. This starts your DMS replication task, which hydrates the S3 bucket you specified earlier with an extract of the chosen database.

I granted the my_service_catalog_end_user IAM role additional permissions (dms:StartReplicationTask and dms:StopReplicationTask) to allow users to start and stop DMS tasks. This shows how you can combine minimal permissions outside AWS Service Catalog to enable your users to perform tasks.

After the task completes, its status changes to Load Complete and the S3 bucket you created earlier now contains files filled with data from your database.

Step 7: Launch an AWS Glue crawler task

Now that you have hydrated your data lake with sample data, you can run an AWS Glue crawler to populate your AWS Glue Data Catalog. To do so, follow the process outlined in Step 3 and provision a product from the AWS Glue crawler product. On the Parameters page, specify the following parameters:

  • For S3Path, specify the complete path to your S3 bucket, for example: s3://<your_bucket_name>/data
  • Specify IAMRoleARN as the value of GlueCrawlerIAMRoleARN from the output you noted at the end of Step 5.
  • Specify the DatabaseName as mydb.

Next, open the crawlers section of the AWS Glue console and locate the crawler created by AWS Service Catalog. The crawler's status should read Ready. Select the crawler and then choose Run crawler. When the crawler finishes, you can review the data that populated your database from the Databases section of the console.

As shown in the following diagram, you can select the mydb database and see its tables to explore the AWS Glue Data Catalog populated by the AWS Glue crawler.
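
You can also inspect the resulting Data Catalog programmatically. Here is a small boto3 sketch, assuming the crawler wrote its tables into the mydb database:

import boto3

glue = boto3.client("glue")

# List the tables and columns the crawler added to the mydb database
tables = glue.get_tables(DatabaseName="mydb")
for table in tables["TableList"]:
    columns = [col["Name"] for col in table["StorageDescriptor"]["Columns"]]
    print(table["Name"], columns)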

The principles discussed in this post can be extended to logs, streams, and files. You can use various BI tools to extract useful knowledge from your data lake. For information about how to visualize your data, see Harmonize, Query, and Visualize Data from Various Providers using AWS Glue, Amazon Athena, and Amazon QuickSight. You can query your data lake from Amazon SageMaker notebooks. For more information, see Access Amazon S3 data managed by AWS Glue Data Catalog from Amazon SageMaker notebooks.

Conclusion

AWS Service Catalog enables you to build and distribute catalogs of IT services to your organization. In this post, I demonstrated how you can set up a catalog that lets your users vend tools to support creation and hydration of data lakes and maintain tight security standards. You can extend this idea of self-service by supporting resources such as DynamoDB databases, Kinesis Data Firehose delivery streams, Amazon Redshift clusters, and Amazon SageMaker notebooks, granting your users more flexibility and utility in their data lakes within guardrails you define.

If you have questions about implementing the solution described in this post, you can start a new thread on the AWS Service Catalog Forum or contact AWS Support.

About the Author

Kanchan Waikar is a Senior Solutions Architect at Amazon Web Services. She enjoys helping customers build architectures using AWS Marketplace for machine learning, AWS Service Catalog, and other AWS services.

from AWS Management Tools Blog

Analyzing Amazon VPC Flow Log data with support for Amazon S3 as a destination

Analyzing Amazon VPC Flow Log data with support for Amazon S3 as a destination

In a world of highly distributed applications and increasingly bespoke architectures, data monitoring tools help DevOps engineers stay abreast of ongoing system problems. This post focuses on one such feature: Amazon VPC Flow Logs.
In this post, I explain how you can deliver flow log data to Amazon S3 and then use Amazon Athena to execute SQL queries on the data. This post also shows you how to visualize the logs in near real-time using Amazon QuickSight. All these steps together create useful metrics to help synthesize and analyze the terabytes of flow log data in a single, approachable view.
Before I start explaining the solution in detail, I review some basic concepts about flow logs and Amazon CloudWatch Logs.

What are flow logs, and why are they important?
Flow logs enable you to track and analyze the IP traffic going to and from network interfaces in your VPC. For example, if you have a content delivery platform, flow logs can help you profile and analyze customer content-access patterns and track down top talkers and malicious calls.

Some of the benefits of flow logs include:

  • You can publish flow log data to CloudWatch Logs and S3, and query or analyze it from either platform.
  • You can troubleshoot why specific traffic is not reaching an instance, which helps you diagnose overly restrictive security group rules.
  • You can use flow logs as an input to security tools to monitor the traffic reaching your instance.
  • For applications that run in multiple AWS Regions or use multi-account architecture, you can analyze and identify the account and Region where you receive more traffic.
  • You can predict seasonal peaks based on historical data of incoming traffic.

Using CloudWatch to analyze flow logs
AWS originally introduced VPC Flow Logs to publish data to CloudWatch Logs, a monitoring and observability service for developers, system operators, site reliability engineers, and IT managers. CloudWatch integrates with more than 70 log-generating AWS services, such as Amazon VPC, AWS Lambda, and Amazon Route 53, providing you a single place to monitor all your AWS resources, applications, and services that run on AWS and on-premises.

CloudWatch Logs publishes your flow log data to a log group, with each network interface generating a unique log stream in the log group. Log streams contain flow log records. You can create multiple flow logs that publish data to the same log group. For example, you can use cross-account log data sharing with subscriptions to send multiple flow logs from different accounts in your organization to the same log group. This lets you audit accounts for real-time intrusion detection.

You can also use CloudWatch to get access to a real-time feed of flow log events from CloudWatch Logs. You can then deliver the feed to other services such as Amazon Kinesis, Kinesis Data Firehose, or AWS Lambda for custom processing, transformation, analysis, or loading to other systems.

Publishing to S3 as a new destination
With the recent launch of a new feature, flow logs can now be directly delivered to S3 using the AWS CLI or through the Amazon EC2 or VPC consoles. You can now deliver flow logs to both S3 and CloudWatch Logs.

CloudWatch is a good tool for system operators and SREs to capture and monitor the flow log data. But you might want to store copies of your flow logs for compliance and audit purposes, which requires less frequent access and viewing. By storing your flow log data directly into S3, you can build a data lake for all your logs.

From this data lake, you can integrate the flow log data with other stored data, for example, joining flow logs with Apache web logs for analytics. You can also take advantage of the different storage classes of S3, such as Amazon S3 Standard-Infrequent Access, or write custom data processing applications.

Solution overview
The following diagram shows a simple architecture to send the flow log data directly to an S3 bucket. It also creates tables in Athena for an ad hoc query, and finally connects the Athena tables with Amazon QuickSight to create an interactive dashboard for easy visualization.

Now I show you the steps to move flow log data to S3 and analyze it using Amazon QuickSight.

The following steps provide detailed information on how the architecture defined earlier can be deployed in minutes using AWS services.

1. Create IAM policies to generate and store flow logs in an S3 bucket.
2. Enable the new flow log feature to send the data to S3.
3. Create an Athena table and add a date-based partition.
4. Create an interactive dashboard with Amazon QuickSight.

Step 1: Create IAM policies to generate and store flow logs in an S3 bucket
Create and attach the appropriate IAM policies. The IAM role associated with your flow log must have permissions to publish flow logs to the S3 bucket. For more information about implementing the required IAM policies, see the documentation on Publishing Flow Logs to Amazon S3.

Step 2: Enable the new flow log feature to send the data to S3
You can create the flow log from the AWS Management Console, or using the AWS CLI.

To create the flow log from the Console:

1. In the VPC console, select the specific VPC for which to generate flow logs.

2. Choose Flow Logs, Create flow log.

3. For Filter, choose the option based on your needs. For Destination, select Send to an S3 bucket. For S3 bucket ARN*, provide the ARN of your destination bucket.

To create the flow log from the CLI:

1. Use the following example command to create the flow log; the response is returned in JSON format:

186590dfd865:~ avijitg$ aws ec2 create-flow-logs --resource-type VPC --resource-ids <your VPC id> --traffic-type <ACCEPT/REJECT/ALL>  --log-destination-type s3 --log-destination <Your S3 ARN> --deliver-logs-permission-arn <ARN of the IAM Role>
{
    "ClientToken": "gUk0TEGdf2tFF4ddadVjWoOozDzxxxxxxxxxxxxxxxxx=",
    "FlowLogIds": [
        "fl-xxxxxxx"
    ],
    "Unsuccessful": []
}

2. Check the status and description of the flow log by running the following command with a filter and providing the flow log ID that you received during creation:

186590dfd865:~ avijitg$ aws ec2 describe-flow-logs --filter "Name=flow-log-id,Values=fl-xxxxxxx"
{
    "FlowLogs": [
        {
            "CreationTime": "2018-08-15T05:30:15.922Z",
            "DeliverLogsPermissionArn": "arn:aws:iam::acctid:role/rolename",
            "DeliverLogsStatus": "SUCCESS",
            "FlowLogId": "fl-xxxxxxx",
            "FlowLogStatus": "ACTIVE",
            "ResourceId": "vpc-xxxxxxxx",
            "TrafficType": "REJECT",
            "LogDestinationType": "s3",
            "LogDestination": "arn:aws:s3:::aws-flowlog-s3"
        }
    ]
}

3. You can check the S3 bucket to ensure that your flow logs are delivered correctly, using the following key structure:

AWSLogs/<account-id>/vpcflowlogs/<region>/2018/08/15/<account-id>_vpcflowlogs_us-west-2_fl-xxxxxxx_20180815T0555Z_14dc5cfd.log.gz

Step 3: Create an Athena table and add a date-based partition
In the Athena console, create a table on your flow log data.

Use the following DDL to create a table in Athena:

CREATE EXTERNAL TABLE IF NOT EXISTS vpc_flow_logs (
  version int,
  account string,
  interfaceid string,
  sourceaddress string,
  destinationaddress string,
  sourceport int,
  destinationport int,
  protocol int,
  numpackets int,
  numbytes bigint,
  starttime int,
  endtime int,
  action string,
  logstatus string
)  
PARTITIONED BY (dt string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ' '
LOCATION 's3://<your bucket location with object keys>/'
TBLPROPERTIES ("skip.header.line.count"="1");

After creating the table, you can partition it based on the ingestion date. Doing this helps speed queries of the flow log data for specific dates.

Be aware that the folder structure created by a flow log is different from the Hive partitioning format. You can manually add partitions and map them to portions of the keyspace using ALTER TABLE ADD PARTITION. Create multiple partitions based on the ingestion date.

Here is an example with a partition for ingestion date 2019-05-01:

ALTER TABLE vpc_flow_logs  ADD PARTITION (dt = '2019-05-01') location 's3://aws-flowlog-s3/AWSLogs/<account id>/vpcflowlogs/<aws region>/2019/05/01';
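
With the table and partition in place, you can run ad hoc queries. The following boto3 sketch submits an example Athena query that ranks source addresses by rejected connections for that partition; the database name and results bucket are assumptions:

import boto3

athena = boto3.client("athena")

query = """
SELECT sourceaddress, count(*) AS rejected_connections
FROM vpc_flow_logs
WHERE dt = '2019-05-01' AND action = 'REJECT'
GROUP BY sourceaddress
ORDER BY rejected_connections DESC
LIMIT 10
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "default"},                        # assumes the table is in the default database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},     # hypothetical results bucket
)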

Step 4: Create an interactive dashboard with Amazon QuickSight
Now that your data is available in Athena, you can quickly create an Amazon QuickSight dashboard to visualize the log in near real time.

First, go to Amazon QuickSight and choose New Analysis, New datasets, Athena. For Data Source Name, enter a name for your new data source.

Next, for Database: contain sets of tables, choose the database that contains your new table. Under Tables: contain the data you can visualize, select the table to monitor.

You can start creating dashboards based on the metrics to monitor.

Conclusion
In the past, to store flow log data cost-effectively, you had to use a solution involving Lambda, Kinesis Data Firehose, or other sophisticated processes to deliver the logs to S3. In this post, I demonstrated the speed and ease of delivering flow logs directly to S3 using the recent VPC update and querying them with Athena to satisfy your analytics needs. For more information about controlling and monitoring your flow logs, see the documentation on working with flow logs.

If you have comments or feedback, please leave them below, or reach out on Twitter!

References:

Amazon CloudWatch
Documentation on CloudWatch Logs
Amazon VPC Flow Logs can now be delivered to S3

About the Author


Avijit Goswami is a Sr. Solutions Architect helping AWS customers build their infrastructure and applications on the cloud in conformance with AWS Well-Architected methodologies, including operational excellence, security, reliability, performance, and cost optimization. When not at work, Avijit likes to travel, watch sports, and listen to music.

 

 

from AWS Management Tools Blog