
Automatically Remediate Noncompliant AWS Resources using Lambda

While enterprises can rapidly scale their infrastructure in the cloud, there’s a corresponding demand for scalable mechanisms that meet the security and compliance requirements set by corporate policies, auditors, security teams, and others.

For example, we can easily and rapidly launch hundreds of resources – such as EC2 instances – in the cloud, but we also need approaches for managing the security and compliance of these resources and the surrounding infrastructure. It isn’t enough to passively monitor noncompliant resources; you need to automatically fix the configuration that made them noncompliant.

Using a collection of AWS services, you can detect non-compliant resources and automatically remediate these resources to maintain compliance without human intervention.

In this post, you’ll learn how to automatically remediate non-compliant AWS resources as code using AWS services such as AWS Config Rules, Amazon CloudWatch Event Rules, and AWS Lambda. You’ll get step-by-step instructions for configuring automated remediation using the AWS Console.

The diagram below shows the key AWS resources and relationships you’ll be creating.

Let’s get started!

Create an S3 Bucket for CloudTrail

In this section, you’ll create an Amazon S3 bucket for use with CloudTrail. If you’ve already established CloudTrail, this section is optional. Here are the steps:

  1. Go to the S3 console
  2. Click the Create bucket button
  3. Enter ccoa-cloudtrail-ACCOUNTID in the Bucket name field (replacing ACCOUNTID with your account id)
  4. Click Next on the Configure Options screen
  5. Click Next on the Set Permissions screen
  6. Click Create bucket on the Review screen

Create a CloudTrail Trail

In this section, you’ll create a trail for AWS CloudTrail. If you’ve already established CloudTrail, this section is optional. Here are the steps:

  1. Go to the CloudTrail console
  2. Click the Create trail button
  3. Enter ccoa-cloudtrail in the Trail name field
  4. Choose the checkbox next to Select all S3 buckets in your account in the Data events section
  5. Choose the No radio button for the Create a new S3 bucket field in the Storage location section.
  6. Choose the S3 bucket you just created from the S3 bucket dropdown.
  7. Click the Create button

Create an AWS Config Recorder

In this section, you’ll configure the settings for AWS Config, which includes turning on the Config recorder along with a delivery channel. If you’ve already configured AWS Config, this section is optional. Here are the steps:

  1. Go to the AWS Config console
  2. If it’s your first time using Config, click the Get Started button
  3. Select the Include global resources (e.g., AWS IAM resources) checkbox
  4. In the Amazon SNS topic section, select the Stream configuration changes and notifications to an Amazon SNS topic checkbox
  5. Choose the Create a topic radio button in the Amazon SNS topic section
  6. In the Amazon S3 bucket section, select the Create a bucket radio button
  7. In the AWS Config role section, select the Use an existing AWS Config service-linked role radio button
  8. Click the Next button
  9. Click the Skip button on the AWS Config rules page
  10. Click the Confirm button on the Review page

Create an S3 Bucket in Violation of Compliance Rules

In this section, you’ll create an S3 bucket that allows people to put files into the bucket. We’re doing this for demonstration purposes since you should not grant any kind of public access to your S3 bucket. Here are the steps:

  1. Go to the S3 console
  2. Click the Create bucket button
  3. Enter ccoa-s3-write-violation-ACCOUNTID in the Bucket name field (replacing ACCOUNTID with your account id)
  4. Click Next on the Configure Options screen
  5. Unselect the Block all public access checkbox and click Next on the Set Permissions screen
  6. Click Create bucket on the Review screen
  7. Select the ccoa-s3-write-violation-ACCOUNTID bucket and choose the Permissions tab
  8. Click on Bucket Policy and paste the contents from below into the Bucket policy editor text area (replace both MYBUCKETNAME values with the ccoa-s3-write-violation-ACCOUNTID bucket you just created)
  9. Click the Save button

  "Version": "2012-10-17",
  "Statement": [
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
      "Resource": [

You’ll receive this message: You have provided public access to this bucket. We highly recommend that you never grant any kind of public access to your S3 bucket.
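For context, the Config rule you’ll enable later (s3-bucket-public-write-prohibited) flags exactly this kind of policy. As a rough sketch – this is an illustration of the idea, not the managed rule’s actual implementation – the core check looks something like this in Node.js:

```javascript
// Sketch: decide whether an S3 bucket policy grants public write access.
// Illustrative only – not the actual logic of the managed Config rule.
function isPublicWritePolicy(policy) {
  var writeActions = ['s3:PutObject', 's3:DeleteObject', 's3:*'];
  return (policy.Statement || []).some(function (stmt) {
    // A statement is "public" if the principal is the wildcard
    var isPublic = stmt.Principal === '*' ||
      (stmt.Principal && stmt.Principal.AWS === '*');
    // Action may be a string or an array; normalize to an array
    var actions = [].concat(stmt.Action || []);
    var hasWrite = actions.some(function (a) {
      return writeActions.indexOf(a) !== -1;
    });
    return stmt.Effect === 'Allow' && isPublic && hasWrite;
  });
}

// Example: the public-write policy created above
var policy = {
  Version: '2012-10-17',
  Statement: [{
    Effect: 'Allow',
    Principal: '*',
    Action: ['s3:PutObject'],
    Resource: ['arn:aws:s3:::MYBUCKETNAME/*']
  }]
};
console.log(isPublicWritePolicy(policy)); // true
```

A policy scoped to a specific IAM principal would not trip this check, which is the distinction the managed rule relies on.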

Create an IAM Policy and Role for Lambda

In this section, you’ll create an IAM Policy and Role that establish the permissions that the Lambda function will use. Here are the steps:

  1. Go to the IAM console
  2. Click on Policies
  3. Click Create policy
  4. Click the JSON tab
  5. Copy and replace the contents below into the JSON text area
  6. Click the Review policy button
  7. Enter ccoa-s3-write-policy in the Name field
  8. Click the Create policy button
    "Version": "2012-10-17",
    "Statement": [
            "Effect": "Allow",
            "Action": [
            "Resource": "*"
  1. Click on Roles
  2. Click the Create role button
  3. Click Lambda from the Choose the service that will use this role section
  4. Click the Next: Permissions button
  5. Click ccoa-s3-write-policy in the Filter policies search field
  6. Select the checkbox next to ccoa-s3-write-policy and click on the Next: Tags button
  7. Click the Next: Review button
  8. Enter ccoa-s3-write-role in the Role name field
  9. Click the Create role button

Create a Lambda Function to Auto-remediate S3 Buckets

In this section, you’ll create a Lambda function that is written in Node.js and performs the automatic remediation by deleting the S3 Bucket Policy associated with the bucket. Here are the steps:

  1. Go to the Lambda console
  2. Click the Create function button
  3. Keep the Author from scratch radio button selected and enter ccoa-s3-write-remediation in the Function name field
  4. Choose Node.js 10.x for the Runtime
  5. Under Permissions, expand Choose or create an execution role
  6. Under Execution role, choose Use an existing role
  7. In the Existing role dropdown, choose ccoa-s3-write-role
  8. Click the Create function button
  9. Scroll to the Function code section and within the index.js pane, copy and replace the code from below
var AWS = require('aws-sdk');

exports.handler = function(event) {
  console.log("request:", JSON.stringify(event, undefined, 2));

  var s3 = new AWS.S3({apiVersion: '2006-03-01'});
  var resource = event['detail']['requestParameters']['evaluations'];
  console.log("evaluations:", JSON.stringify(resource, null, 2));

  for (var i = 0, len = resource.length; i < len; i++) {
    if (resource[i]["complianceType"] == "NON_COMPLIANT") {
      var params = {
        Bucket: resource[i]["complianceResourceId"]
      };

      // Remediate by deleting the bucket policy that allows public writes
      s3.deleteBucketPolicy(params, function(err, data) {
        if (err) console.log(err, err.stack); // an error occurred
        else     console.log(data);           // successful response
      });
    }
  }
};
  1. Click the Save button
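Before wiring the function up to CloudWatch Events, it can help to sanity-check the event parsing locally. This sketch factors the same extraction the Lambda performs into a pure function (the sample event is trimmed to just the fields the function reads):

```javascript
// Sketch: pull the bucket names of NON_COMPLIANT resources out of the
// CloudWatch Event that wraps AWS Config's PutEvaluations API call.
function nonCompliantBuckets(event) {
  var evaluations = event.detail.requestParameters.evaluations || [];
  return evaluations
    .filter(function (e) { return e.complianceType === 'NON_COMPLIANT'; })
    .map(function (e) { return e.complianceResourceId; });
}

// Example event, trimmed to the fields the remediation function reads
var sampleEvent = {
  detail: {
    requestParameters: {
      evaluations: [
        { complianceType: 'NON_COMPLIANT',
          complianceResourceId: 'ccoa-s3-write-violation-123456789012' },
        { complianceType: 'COMPLIANT',
          complianceResourceId: 'some-other-bucket' }
      ]
    }
  }
};
console.log(nonCompliantBuckets(sampleEvent));
// [ 'ccoa-s3-write-violation-123456789012' ]
```

Only the NON_COMPLIANT bucket is returned, which is exactly the set the Lambda iterates over when calling deleteBucketPolicy.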

Create an AWS Config Rule

In this section, you’ll create an AWS Config Rule that uses a Managed Config Rule to detect S3 buckets that allow public writes. The Managed Config Rule runs a Lambda function to detect when S3 buckets are not in compliance. Here are the steps:

  1. Go to the Config console
  2. Click Rules
  3. Click the Add rule button
  4. In the filter box, type s3-bucket-public-write-prohibited
  5. Choose the s3-bucket-public-write-prohibited rule
  6. Click on the Remediation action dropdown within the Choose remediation action section
  7. Choose the AWS-PublishSNSNotification remediation in the dropdown
  8. Click Yes in the Auto remediation field
  9. In the Parameters field, enter arn:aws:iam::ACCOUNTID:role/aws-service-role/ssm.amazonaws.com/AWSServiceRoleForAmazonSSM in the AutomationAssumeRole field (replacing ACCOUNTID with your AWS account id)
  10. In the Parameters field, enter s3-bucket-public-write-prohibited violated in the Message field
  11. In the Parameters field, enter arn:aws:sns:us-east-1:ACCOUNTID:ccoa-awsconfig-ACCOUNTID in the TopicArn field (replacing ACCOUNTID with your AWS account id)
  12. Click the Save button
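The console steps above can also be captured as code. Here is a rough CloudFormation sketch of the same remediation configuration – the logical resource name is illustrative, and you should verify the parameter values against your own account before using it:

```yaml
CcoaS3WriteRemediation:  # illustrative logical name
  Type: AWS::Config::RemediationConfiguration
  Properties:
    ConfigRuleName: s3-bucket-public-write-prohibited
    TargetType: SSM_DOCUMENT
    TargetId: AWS-PublishSNSNotification
    Automatic: true
    MaximumAutomaticAttempts: 5
    RetryAttemptSeconds: 60
    Parameters:
      AutomationAssumeRole:
        StaticValue:
          Values:
            - arn:aws:iam::ACCOUNTID:role/aws-service-role/ssm.amazonaws.com/AWSServiceRoleForAmazonSSM
      Message:
        StaticValue:
          Values:
            - s3-bucket-public-write-prohibited violated
      TopicArn:
        StaticValue:
          Values:
            - arn:aws:sns:us-east-1:ACCOUNTID:ccoa-awsconfig-ACCOUNTID
```

Replace ACCOUNTID with your AWS account id, just as in the console steps.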

Create a CloudWatch Event Rule

In this section, you’ll create an Amazon CloudWatch Event Rule which monitors when the S3_BUCKET_PUBLIC_WRITE_PROHIBITED Config Rule is deemed noncompliant. Here are the steps:

  1. Go to the CloudWatch console
  2. Click on Rules
  3. Click the Create rule button
  4. Choose Event pattern in the Event Source section
  5. In the Event Pattern Preview section, click Edit
  6. Copy the contents from below and replace in the Event pattern text area
  7. Click the Save button
  8. Click the Add target button
  9. Choose Lambda function
  10. Select the ccoa-s3-write-remediation function you’d previously created.
  11. Click the Configure details button
  12. Enter ccoa-s3-write-cwe in the Name field
  13. Click the Create rule button



View Config Rules

In this section, you’ll verify that the Config Rule has been triggered and that the S3 bucket resource has been automatically remediated:

  1. Go to the Config console
  2. Click on Rules
  3. Select the s3-bucket-public-write-prohibited rule
  4. Click the Re-evaluate button
  5. Go back to Rules in the Config console
  6. Go to the S3 console, choose the ccoa-s3-write-violation-ACCOUNTID bucket, and verify that the bucket policy has been removed
  7. Go back to Rules in the Config console and confirm that the s3-bucket-public-write-prohibited rule is Compliant


In this post, you learned how to set up a robust automated compliance and remediation infrastructure for non-compliant AWS resources using services such as S3, AWS Config and Config Rules, Amazon CloudWatch Event Rules, AWS Lambda, IAM, and others. By leveraging this approach, your AWS infrastructure can rapidly scale resources while ensuring these resources are always in compliance, without humans needing to manually intervene.

This general approach can be replicated for many other types of security and compliance checks using managed and custom config rules along with custom remediations. This way your compliance remains in lockstep with the rest of your AWS infrastructure.


The post Automatically Remediate Noncompliant AWS Resources using Lambda appeared first on Stelligent.

from Blog – Stelligent

Continuous Compliance on AWS Workflow

It’s 7:37 AM on a Sunday.

You’re in the Security Operations Center (SOC) and alarms and emails are seemingly being triggered everywhere. You and a colleague are combing through dashboards and logs to determine what is causing these alerts.

After running around with your “hair on fire” for around 30 minutes, you finally determine that someone leaked administrator access keys by committing them to a public Git repository. With these keys, the attacker was able to launch about $500 in Amazon EC2 spend within half an hour, likely for cryptomining purposes.

Ironically, you learned of this compromise not from AWS or logging and monitoring systems but through unusual changes in your billing activity.

You delete the access keys and view the AWS CloudTrail logs to determine other types of activity performed by any users on the AWS account (fortunately, you’d previously enabled CloudTrail log file integrity so you’re confident you’re viewing all the recent and valid AWS API calls).

You and your colleague come to the quick realization that it could’ve been much, much worse.

You walk through the remaining steps of your incident response and remediation workflow. You poke around to see if there are other resources that are vulnerable to an attack like this. While you ultimately perform a much more exhaustive post mortem, one of the first things you notice is that your Amazon DynamoDB tables were not encrypted. By viewing the CloudTrail logs, you notice the attacker did not access these tables.

Unfortunately, this is an all too common scenario in which there are humans involved in detecting and remediating a security incident like this. In this post, you’ll learn ways in which you can use automation to detect and remediate for these types of scenarios so that humans are essentially removed from the detection and remediation of incidents like these.


AWS has published and regularly updates the AWS Cloud Adoption Framework (AWS CAF). In the CAF, there are three business perspectives (Business, People, and Governance) and three technical perspectives (Platform, Security, and Operations). In the Security perspective, there are five core pillars. They are:

  • Identity & Access Management
  • Detective Controls
  • Infrastructure Security
  • Data Protection
  • Incident Response

As you’ll learn, you can use concepts based on the Detective Controls pillar to help prevent, detect, and respond automatically to anomalies by leveraging a deployment pipeline that creates resources to monitor and remediate these incidents.

Encrypting all the things

Let’s imagine that your organization has a directive control that all AWS resources must be encrypted in transit and at rest. While we could also discuss the lack of detective controls to notice the breach described in the introduction, for the purposes of this example, we’ll focus on the fact that during the post investigation, you’d noticed that the DynamoDB tables were not encrypted. Of course, this violates your encryption directive.

Figure 1 illustrates a workflow for detecting when resources are not encrypted. In this case, we’ll focus only on DynamoDB but the same approach can be used for other resources too.

Figure 1 – Workflow for Encryption Detection and Incident Response on AWS


Here are the high-level steps in the workflow:

  • Step 1 – An engineer creates a DynamoDB resource within an automated provisioning tool like AWS CloudFormation along with other AWS resources. They commit their changes to a Git source code repository.
  • Step 2 – An AWS CodePipeline pipeline (which is automated through a bootstrapping process in CloudFormation as well) starts and runs preventive checks against all CloudFormation templates in the Git repository using the open source cfn_nag framework, which notifies engineers of security vulnerabilities. Most notably, cfn_nag provides rules for detecting whether encryption has been defined as part of provisioning certain AWS resources.
  • Step 3 – CodePipeline calls CloudFormation to configure the DynamoDB resources within the AWS account
  • Step 4 – AWS Config Rules monitor changes to AWS services. Running one of the AWS Managed Config Rules, Config calls an AWS Lambda function which discovers changes to DynamoDB resources and flags them as non-compliant because some of the tables are not encrypted.
  • Step 5 – Once Config Rules notices a non-compliant resource, an AWS Lambda function is called to send Slack messages (via AWS Chatbot) to developers who have recently committed code related to DynamoDB provisioning
  • Step 6 – The Slack messages received by the developers contain detailed “best practices” implementation guidance so that the engineer can ensure that this process is committed as code to the Git repository and applied as part of the next change to the deployment pipeline that provisions the DynamoDB tables.

This scenario provides an automated incident response workflow for data protection across an AWS account. Similar solutions can be deployed across multiple AWS accounts using a combination of services such as Amazon CloudWatch Events, Amazon GuardDuty, Amazon Macie, Amazon Inspector, AWS CloudFormation StackSets, AWS Organizations, and so on.

Preventive Control: cfn_nag

cfn_nag is an open source static analysis framework for discovering security vulnerabilities in AWS CloudFormation templates. cfn_nag provides the following features:

  • Allows developers to find obvious security flaws in CloudFormation templates before doing a deployment
  • Provides flexible controls for rule application including whitelists, blacklists, and fine-grained suppressions
  • Supports custom rule development for enterprise-specific security violations

In this example encryption detection workflow, cfn_nag is called from a deployment pipeline defined in AWS CodePipeline. This way, with every code change, the pipeline can notify team members when problems are discovered – even before the infrastructure is launched – preventing security vulnerabilities from being introduced.
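As an illustration of the kind of check cfn_nag performs (the resource names here are illustrative, not taken from the original workflow), consider a DynamoDB table defined without encryption at rest, and the one-property fix:

```yaml
# Flagged by static analysis: no server-side encryption specified
Table:
  Type: AWS::DynamoDB::Table
  Properties:
    BillingMode: PAY_PER_REQUEST
    AttributeDefinitions:
      - AttributeName: id
        AttributeType: S
    KeySchema:
      - AttributeName: id
        KeyType: HASH

# Fixed: encryption at rest enabled as code
SecureTable:
  Type: AWS::DynamoDB::Table
  Properties:
    BillingMode: PAY_PER_REQUEST
    AttributeDefinitions:
      - AttributeName: id
        AttributeType: S
    KeySchema:
      - AttributeName: id
        KeyType: HASH
    SSESpecification:
      SSEEnabled: true
```

Because this check runs on the template, the violation is caught before any table is ever created.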

Other Preventive Controls

There are other automated controls you can enable as part of a deployment pipeline to detect and prevent vulnerabilities from entering your infrastructure. Tools like CheckMarx and SonarQube can run thousands of static analysis rules including things like SQL injection and cross-site scripting.

Detective Control: Config Rules

AWS Config is a service that detects state changes across many supported AWS services. You can also define AWS Config Rules that run checks defined in AWS Lambda. In this scenario, Config discovers changes to DynamoDB resources and flags unencrypted tables as non-compliant using the dynamodb-table-encryption-enabled managed AWS Config Rule.

AWS Config Rules provides 86+ managed rules that are predefined and managed by AWS. There’s also a curated repository of Config Rules developed by the community that you can leverage. Finally, you can define custom Config Rules in Lambda or generate them using the Rule Development Kit.
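Rather than clicking through the console, a managed rule like this can also be enabled as code. A minimal CloudFormation sketch (the logical name is an assumption; the source identifier is AWS’s managed DYNAMODB_TABLE_ENCRYPTION_ENABLED rule):

```yaml
DynamoDbEncryptionRule:  # illustrative logical name
  Type: AWS::Config::ConfigRule
  Properties:
    ConfigRuleName: dynamodb-table-encryption-enabled
    Description: Checks that DynamoDB tables have encryption at rest enabled
    Source:
      Owner: AWS
      SourceIdentifier: DYNAMODB_TABLE_ENCRYPTION_ENABLED
```

Deploying this through the same pipeline that provisions the tables keeps the detective control versioned alongside the resources it watches.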

Other Detective Controls

You can also leverage AWS services such as Amazon CloudWatch Events, Amazon GuardDuty, Amazon Inspector, and Amazon Macie to detect security and compliance issues as part of a detection and remediation workflow that can be enabled through a deployment pipeline.

Responsive Controls: AWS Chatbot and AWS Lambda

Once Config Rules notices a non-compliant resource, an AWS Lambda function is called to notify Slack via AWS Chatbot. Slack receives a message from the AWS Chatbot and displays a detailed “best practice” implementation so that the engineer can ensure that this process is committed as code to the Git repository and applied as part of the next change to the deployment pipeline. Since DynamoDB only allows you to encrypt tables when you’re creating them, there’s no auto remediation scenario that can take place.

Alternatively, you can directly leverage Amazon Lex for chat capabilities, contextually link to detailed knowledge bases, and make smart automated choices about what to remediate automatically and in which scenarios to provide detailed code snippets to engineers.


I provided an example showing how you can completely rethink your approach to security and compliance. By thinking from a fully automated perspective, you can focus human effort on designing systems that detect and remediate problems and then recommend solutions to prevent them from happening again.

You learned which tools and AWS services can be used to create a fully automated preventive and detective workflow for a data protection scenario. These include cfn_nag, AWS Config Rules, AWS CodePipeline, AWS CloudFormation, and others.


The post Continuous Compliance on AWS Workflow appeared first on Stelligent.


DevOps on AWS Radio: Automating AWS IoT (Episode 25)

In this episode, we chat with Michael Neil, a DevOps Automation Engineer here at Mphasis Stelligent, about the AWS IoT platform. AWS IoT consists of many products and services, so it can be difficult to know where to start when piecing together each of the offerings into an IoT solution. Paul Duvall and Michael Neil give you an overview of the AWS IoT platform, guide you in how to get started with AWS IoT, teach you how to automate it, and walk through a use case using AWS IoT. Listen here:

DevOps on AWS News

Episode Topics

  1. Michael Neil Intro & Background 
  2. Overview of AWS IoT and AWS IoT Services
    1. Device software
      1. IoT Greengrass, IoT Device SDK
    2. Control services
      1. AWS IoT Core,  Device Defender, AWS IoT Things Graph
    3. Data services
      1. AWS IoT Analytics, AWS IoT Events
  3. Continuous Delivery with AWS IoT
    1. How is CD different when it comes to embedded devices and AWS IoT?
    2. How do you provision devices at the edge, MCUExpresso IDE?
    3. How to do CD w/ IoT via AWS CodePipeline and  AWS CodeBuild.
    4. How to do just-in-time provisioning, give it the right permissions.
  4. Bootstrapping Automation
    1. Bootstrapping process
    2. How started automating via the SDK
  5. Automating and provisioning  AWS IoT Services
    1. IoT Greengrass
    2. IoT Things
  6.  Integrations with other AWS Services 
    1. Amazon Simple Storage Service (Amazon S3)
    2. AWS Lambda
    3. Amazon Simple Queue Service (SQS)
    4. Amazon DynamoDB
    5. Amazon Kinesis Data Firehose
    6. Amazon QuickSight
  7. Amazon FreeRTOS
  8. Automobile Assembly Line Use Case 
    1. How might they employ AWS IoT?
    2. How to do Continuous Delivery?
    3. Machine Learning

Additional Resources


About DevOps on AWS Radio

On DevOps on AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery on the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners in and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps on AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…


The post DevOps on AWS Radio: Automating AWS IoT (Episode 25) appeared first on Stelligent.


Dance like Nobody’s Watching; Encrypt like Everyone Is

While AWS is making computing easier, it can be challenging to know how to effectively use encryption. In this screencast, we provide an overview of the encryption landscape on AWS. This includes services like AWS Certificate Manager, AWS Key Management Service, and the Encryption SDK, which provide encryption in transit and at rest. In addition, we share how to enable encryption for services such as Amazon DynamoDB, Amazon EBS, Amazon RDS, and Amazon S3. Finally, we show you how to automate encryption provisioning for some of these services using AWS CloudFormation.

Below, I’ve included a screencast of the talk I gave last week at the AWS NYC Summit in July 2019 along with a transcript (generated by Amazon Transcribe).

This is Paul Duvall – founder and CTO of Mphasis Stelligent. I gave this talk at the AWS New York City Summit in 2019, so I’m just sharing this as a screencast as well. At Stelligent, we’ve been helping companies apply DevOps practices on AWS for over a decade now, so what I’ll be sharing here is based on that perspective. Encryption is about protecting your data from unauthorized access, so you’re going to learn how to apply encryption in a practical way through automation. But before we get into this, I want to share a perspective on what we often see when it comes to security and compliance, in general, at most enterprises.

What we often see is that security is the responsibility of a separate team – or multiple teams – further downstream in the software development life cycle. Imagine a development team writing code and tests, maybe performing continuous integration, over a period of a few weeks. When they’re ready to release, the change might go to QA, then a change advisory board, internal audit, and in some cases a separate security team. The problem is that there’s often a significant amount of time between when a developer commits the code and when any comprehensive security and compliance checks are applied. Even if security control directives are well documented, it doesn’t mean they’re run for every release, or run the exact same way every single time. It could be weeks since the developer made the change, so the developer is going to lack context when the security team brings it up, and there’s going to be pressure to release. The reason this occurs in many organizations is the cost of processes that require human intervention – even if it’s just one step – so these compliance checks get batched and scheduled across all the different application and service teams that central security, operations, and audit teams have to support. Another reason, from an AWS perspective, is that even though you can automate all of these things, there might simply be a lack of knowledge that you can.
So it’s not just old-style data center, non-cloud companies; even companies that are using AWS might not know that they can just check a box or automate this through the SDK or through CloudFormation.

And so, as Werner Vogels talks about, the bottom line of all this is that security is everyone’s job, and the beauty of this now is that AWS gives you the tools to bake security and compliance into every step of the software development process. From the standpoint of encryption, you can automate all the encryption as part of your software development process. You can also automate things like static analysis and runtime checks against your software to ensure that you’re always in compliance with your encryption directives. And so, as Werner also says, you can “dance like nobody’s watching, but encrypt like everyone is”. AWS announced – and this is as of July 2019 – that 117 AWS services now integrate with KMS. There are about 175-180 total services on the AWS platform, so more than half of these services provide this integration with KMS. These might be storage services like S3 and EBS, or database services like RDS and DynamoDB. The plan is to eventually have all the services provide this capability.

In terms of what I’ll be covering: how do you automate all of this? How do you incorporate this into the software development lifecycle using things like AWS CloudFormation? How do you use the SDK, or how do you get access to the API, in order to make this part of the software development process? We’ll also go over client-side encryption a bit, for when developers need to apply client-side encryption or manage secrets. When you need to send data over the wire and encrypt in transit, we’ll be talking about things like AWS Certificate Manager and CloudFront along with ELB. Then, how do you encrypt data at rest, whether through database services like RDS and DynamoDB, or through EBS, S3, and so forth? The underlying service that allows you to encrypt all of these resources is the Key Management Service, so we’ll go over that, along with how to give fine-grained permissions to keys and to the service itself. Then, once it gets into production and you want to detect whether encryption is enabled across all your AWS accounts, we’ll cover AWS Config Rules and CloudWatch Event Rules as well. AWS just recently announced encryption over VPC for certain instances – the Nitro instances – so we’ll briefly cover that. Finally, we’ll talk about logging: you can log all of your API calls, but from an encryption standpoint, how do you know when those keys are used, and what mitigations might you go through as a result of that monitoring and logging?

So there’s a heuristic that we use when we look at anything that we’re building, deploying, testing, and releasing into production. There are three steps to this. The first is to codify: codify all the things. Whether it’s an AWS service, application code, configuration, infrastructure, or data, we can codify it. We can use things like AWS CloudFormation to automate the provisioning and configuration of these services – whether it’s databases, storage, the pipeline itself, containers, and so on. The next thing we consider as part of this heuristic is how you create a pipeline out of this: not just how you code and version it, but how you put it through a series of stages and actions in which you’re building, testing, deploying, and getting it out to end users. And the users might not just be the end users of the services and applications your customers consume; internally you might put AWS security services or even AWS accounts through a deployment pipeline as well. The last part is how you secure it: how do you ensure that security checks run as part of your pipelines, that you secure the pipeline itself through hardened build images, that everything goes through the proper checks, and that you give fine-grained permissions to all the resources in your AWS accounts? So these are the three steps we consider, and from an encryption standpoint, we’re going to look at how you codify encryption, how you put it through a pipeline, and how you secure it.

So let’s take a look at a brief example of doing this from an automation lens, and so AWS CloudFormation is the service that allows you to automate that provisioning. So let’s imagine you have a bootstrap template that you create a stack out of. And so you have sort of the core (and you might be running this from the command line) services that you wanna have set up. Whether that’s KMS – the key management service – AWS Config Rules, and Identity and Access Management and finally, the pipeline itself. And, then in the pipeline itself, you can put stages and actions and services that you might be automating as a part of that. In this example, let’s imagine that we have a directive that we want to have encryption enabled for all of our AWS services and so we’re gonna look at that from a couple of different perspectives. One is every time you build up the software system, you want to make sure that they’re not going to introduce any security vulnerabilities. In the context of encryption, we want to make sure that anything that we build that needs encryption has it turned on, and so we can use a static analysis tool like cfn_nag, which is an open source tool that has 45 built-in rules to it and you can look for encryption on certain AWS resources. And if it doesn’t have the encryption, we can fail the build, give notification and, before we even launch the infrastructure, we can have remediation to that and committed back to the version control repository and then we’re on our way. But then we can also set up detection controls as well with automated detection mechanisms and we can do that through AWS Config Rules. AWS Config notices any state change to a number of different resources in your AWS account, so we could set up a Config Rule that looks for changes to say that your DynamoDB tables or your RDS databases or ensuring your EBS volumes are encrypted. So, we can do that at it from a static perspective as part of the pipeline. 
But then we can also automate the provisioning of these resources and deploy these rules. Under the hood, the rules run their checks in AWS Lambda, which we can write in a number of different languages: Python, Ruby, and so forth. We can run a rule on a schedule or in response to an event such as a state change, and then perform remediation actions based on the result. That might be to Slack developers and tell them how to automate the fix, to disable the resource, or to automatically turn encryption on – if that’s the rule we’re looking at. That’s an overall view of how you might do this.
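The detect-and-remediate flow just described can be sketched as a custom Config rule handler. This is a minimal illustration, not code from the talk: the event shape follows AWS Config’s custom-rule Lambda interface, and the reporting/remediation calls are left as comments since they require AWS credentials.

```python
import json


def evaluate_compliance(configuration_item):
    """Return a compliance verdict for a resource snapshot
    delivered by AWS Config (here: EBS volume encryption)."""
    if configuration_item.get("resourceType") != "AWS::EC2::Volume":
        return "NOT_APPLICABLE"
    encrypted = configuration_item.get("configuration", {}).get("encrypted", False)
    return "COMPLIANT" if encrypted else "NON_COMPLIANT"


def lambda_handler(event, context):
    # AWS Config passes the resource snapshot as a JSON string
    # in the "invokingEvent" field of the Lambda event.
    invoking_event = json.loads(event["invokingEvent"])
    configuration_item = invoking_event["configurationItem"]
    verdict = evaluate_compliance(configuration_item)
    # A real rule would report back via
    # boto3.client("config").put_evaluations(...), and a remediation
    # workflow (Slack, auto-enable, disable) could hang off the verdict.
    return verdict
```

The same shape works for DynamoDB tables or RDS instances by switching the `resourceType` check and the configuration key being inspected.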

So let’s get into the first part of this, automation. Automation is provided in a number of different ways on AWS. AWS CloudFormation provides a common language you can define in JSON or YAML, or you can use higher-level tools such as the AWS CDK (Cloud Development Kit), which provides an abstraction layer that generates CloudFormation templates for you. Either way, you define your infrastructure in code: you can put it in a template, version it, and test the CloudFormation template itself, just as you would any other software asset. You can also use the SDKs; there are AWS SDKs for all the common languages – JavaScript, Java, .NET, Python, Ruby, and so forth – and you get access to all the APIs as a result. CloudFormation provides support for most AWS resources, so from an encryption standpoint you’ll see in some of these examples that there might be a property like a KMS key, or server-side encryption (SSE), that’s just a boolean property you set to true, and then you’re on your way. It’s great to have that checkbox in the console, but if you want a repeatable, testable process, AWS CloudFormation is one of the best ways to get it.


So this example is generic – it has nothing to do with encryption and everything to do with CloudFormation, just to give you a high-level view if you haven’t seen it before. You have constructs like parameters, resources, and outputs; these are some of the common sections you’ll see in a CloudFormation template. You’ll generally see multiple parameters that let you customize the behavior of the stack you create from the template, and then multiple AWS resources. A typical CloudFormation template has hundreds of lines of configuration code. It’s declarative in nature, so CloudFormation determines how to apply the changes, and you can set dependencies and things like that. Templates can get far more complex than the simple example you see here, but the idea is that you can define all your AWS resources in code using the CloudFormation language.


From a development standpoint, especially when it comes to client-side encryption, AWS provides the Encryption SDK. It supports a number of different programming languages like Java, C, and Python, and provides CLI support so you can use it with practically any language. The other service developers often come across exists because you frequently need to store secrets somewhere: the username and password for a database, API keys, and so forth. AWS Secrets Manager is a fully managed service that allows you to create secrets, generate random secrets, and provide cross-account access, and it integrates with a number of different services. For example, it integrates with DynamoDB, RDS, and Redshift automatically, so you can generate a username and password randomly, never even see the secrets, and rotate them automatically using Secrets Manager. We’ll touch on this a little later when it comes to management as well.

And so this is an example of using the (AWS) Encryption SDK. It’s a Python example in which we take some plaintext source data, encrypt it into ciphertext, and then decrypt it as well. One thing to note: when people think about encryption, they often think back two decades to when it carried a 20 to 40% performance hit. With KMS there is no such hit, because KMS performs the encryption, so you don’t pay the cost on the compute side, the database, or whatever service you happen to be using. Keep that in mind, because people often believe there’s a performance degradation, and that’s not the case.

So we’ve talked about encrypting on the client side. When we send data over the wire (i.e., in transit), we want the ability to secure the connection between the end user and the endpoint they’re hitting, and AWS provides a number of services for that. One is AWS Certificate Manager, or ACM, which allows you to generate public certificates as part of a certificate authority. It also rotates those certificates for you, so you don’t get into the bind of having a certificate expire and taking your website down – and if you’ve ever been through that, you know the troubles that ensue. ACM manages a lot for you; the public certificates are free, and you can attach up to 25 certificates to an Application Load Balancer. Another service that supports in-transit encryption and is really useful is CloudFront, a CDN that provides performance benefits like caching. It also gives you access to AWS Shield for volumetric DDoS protection, which you get built in with CloudFront. In fact, if you take a look at the Security Pillar of the Well-Architected Framework, AWS goes over some examples in its GitHub repositories and quickstarts, with CloudFormation code that uses CloudFront with Certificate Manager, and you can get all of that set up in less than an hour, so definitely have a look.

This is a simple ACM example, in CloudFormation, in which we generate a certificate against our top-level domain, example.com. You can see it’s pretty simple to set up. Assuming you created the domain using Route 53 and set up your DNS, you attach the certificate to your Load Balancer or CloudFront distribution, and then you’re off and running with this CloudFormation code.

The endpoint could be any number of things, but in this encryption-in-transit example we’re hitting a website. When we go to the website, we see the lock icon and can look at more information as well. We know that a certificate authority has verified that the connection between the end user and this website is encrypted, so it’s secure in transit.

So how do we encrypt at rest? A number of those 117 services help you encrypt your data. One is Amazon EBS, the Elastic Block Store. In fact, EBS recently added the ability to encrypt your volumes by default, so all the volumes you create with EBS are encrypted, or you can select them one by one. Basically, it’s pretty much a checkbox for all of these: you have a KMS key and you associate it with the service, or with a particular resource in that service. Others are RDS and DynamoDB – we’ll look at a DynamoDB example in a moment – and with Amazon S3 for object storage you can encrypt on the server side as well. The underpinning of all this, as I mentioned, is the Key Management Service, which we’ll get to in a moment; it allows you to create, manage, and rotate keys. What we’re doing is creating keys and attaching a key ID to the resources of these particular services so that we’re able to encrypt our data at rest.

Here’s an example of encrypting data at rest in CloudFormation for DynamoDB. We have a DynamoDB table resource in CloudFormation with various other properties and attributes we’re specifying, but the one of interest is the SSESpecification property: a boolean we turn on, so that for this particular table defined in CloudFormation we have encryption at rest enabled. As we discussed, you could use a checkbox in the console, but this lets you make encryption part of the software development process and codify it.

And then you see the results of that in the DynamoDB console. 

The underpinning of everything I’m talking about, from an encryption standpoint, is the Key Management Service, or KMS. With KMS you create and manage keys known as customer master keys (CMKs). There are also some built-in AWS-managed keys for specific services, such as S3 and others. But you can also create your own CMKs, and these customer master keys allow you to generate data keys; with those data keys you can then encrypt the plaintext for any of the data used by services that have KMS support. You also get fine-grained access through something known as a key policy, and you’ll see an example of that. The other capability you have is automatic rotation: you can check a box (or automate it, of course) in KMS to say you want annual automatic rotation of a particular key. For fine-grained access to the KMS service itself, we use the Identity and Access Management service – IAM. The other service is AWS CloudHSM, a cloud-based hardware security module. It provides asymmetric encryption and is single-tenant. KMS, by contrast, provides symmetric encryption, which means you use the same key to encrypt and decrypt; asymmetric encryption uses different keys for encryption and decryption. Both are FIPS 140-2 compliant – that’s the NIST standard they adhere to – but KMS is validated at FIPS 140-2 Level 2 and is multi-tenant, while CloudHSM, with its single-tenant hardware security module, gives you that extra level at Level 3.

We discussed this before, but here’s Secrets Manager, which enables you to create secrets for anything you need to keep stateful and protected. You can use Secrets Manager to rotate these secrets, generate random secrets, and grant cross-account access to them as well.

Here’s an example of defining a KMS key in CloudFormation. I’m able to enable key rotation; we talked about the checkbox in the console, but in this case we’re defining it as code. Enabling key rotation means the customer master key I’m creating here will automatically rotate once a year. I can also set the pending window for deletion: if I want to delete this key, I can set anywhere from 7 to 30 days – 30 being the default – between the time I disable the key and when it actually gets deleted. That window matters, because once a key is gone, it’s gone; you can’t use it. If you created a key, attached it to a resource, and that key ultimately gets deleted, you’re never going to get it back. So it gives you a window to make sure, and we’ll talk about how you find the uses of your KMS keys with CloudTrail a little later. The rest of this template is the key policy I talked about, which gives fine-grained permissions on the key itself (whereas Identity and Access Management gives you fine-grained permissions on the use of KMS). The first statement indicates which IAM user has access, the second policy statement allows administration of the key, and lastly the use of the key is defined as well. You can see how you’re able to define the principal and which actions they can perform; there are typically a number of other actions you’d define in your CloudFormation template.
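To make the key-policy structure concrete, here’s a small Python sketch that assembles a policy document with the three statements just described – root account access, key administration, and key use. The statement IDs, role ARNs, and action lists are illustrative, not the exact ones in the template shown.

```python
def build_key_policy(account_id, admin_role_arn, user_role_arn):
    """Assemble a minimal KMS key policy: root account access,
    key administration for one principal, key use for another."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "EnableRootAccess",
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{account_id}:root"},
                "Action": "kms:*",
                "Resource": "*",
            },
            {
                "Sid": "AllowKeyAdministration",
                "Effect": "Allow",
                "Principal": {"AWS": admin_role_arn},
                # Administrative actions: manage, but not use, the key.
                "Action": ["kms:Create*", "kms:Describe*", "kms:Enable*",
                           "kms:Disable*", "kms:ScheduleKeyDeletion"],
                "Resource": "*",
            },
            {
                "Sid": "AllowKeyUse",
                "Effect": "Allow",
                "Principal": {"AWS": user_role_arn},
                # Usage actions: encrypt/decrypt and generate data keys.
                "Action": ["kms:Encrypt", "kms:Decrypt",
                           "kms:GenerateDataKey*"],
                "Resource": "*",
            },
        ],
    }
```

The resulting dictionary is what you would serialize into the `KeyPolicy` property of the CloudFormation key resource.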

And then you see that in the Key Management (Service) console. And so if I went over to say the key rotation tab, I would see a checkbox there that would indicate that it’s automatically being rotated. 

So, I mentioned how we can statically check whether encryption will be enabled, at least from an automation standpoint, through CloudFormation and tools like cfn_nag. But from a runtime perspective, how do you detect that encryption may have been turned off? You have a directive that everything must be encrypted, but maybe it never got automated, or maybe someone turned it off after the resource went into production. You can put detective controls in place using something like AWS Config Rules, which notice state changes. In the case of encryption, imagine someone turned off, or never enabled, encryption for DynamoDB: Config notices that change, and you can configure it to run some kind of remediation workflow. It might Slack the developer and tell them how to codify the fix, automatically enable encryption, disable the resource altogether, or at least warn them that they won’t be able to use the resource if it isn’t encrypted. There are lots of different ways you can do this, but ultimately these Config Rules are backed by AWS Lambda, so you can write your own custom rules. There are also 86 managed rules that Config Rules comes with, of which six are encryption-related; if you want to extend that to all the AWS services that KMS supports, you can do so through custom rules. Some other services are relevant as well: CloudWatch Event Rules give you a near-real-time stream of events so you can perform actions based on them, and Inspector also helps from a detection standpoint.

And so here’s an example of defining a Config Rule. We’re provisioning it in AWS CloudFormation, using one of the managed Config Rules for encryption. This one says that any CloudTrail trail that gets created must have encryption enabled. If it doesn’t, the rule lets us know, and then we might have a workflow process, as I talked about before, for auto-remediation or notification, or a pointer to a knowledge base – however we decide is the best way to inform developers and be responsive to that overall control directive.

And then we see the Config Rules dashboard, which lets us know that we’re not compliant with the particular rule we’ve set up.

On the networking side, AWS announced encryption across the VPC for its new Nitro instances, and you can also encrypt traffic between your data center and AWS using the AWS VPN. You can define that as code as well.

The last thing I want to talk about is logging. AWS CloudTrail logs all API calls – not just those related to security, compliance, and encryption, but every call you make to the AWS API. When it comes to encryption, what CloudTrail helps us with is the use of, say, the Key Management Service or CloudHSM: it records when particular keys are used. This becomes useful if you want to disable a key and need to know which users are using it and how they’re accessing it. You can also encrypt the CloudTrail trails themselves; you saw the detection check for that with AWS Config Rules.

And then here’s how we do that in CloudFormation. We’re defining the AWS CloudTrail Trail resource, and it has a KMS key ID. You might imagine that elsewhere in this CloudFormation template we’ve automated the provisioning of the KMS key and of the KMS alias (and that’s what we’re referring to here); the alias points back to the key, and we’re able to attach that key ID to this trail. We can also enable things like log file validation to ensure that nothing got modified. This is how we encrypt the trail, stay in compliance with the encryption directive, and stay in compliance – operationally – with the Config Rule we’ve set up to run.

And so this is a JSON payload from CloudTrail. We can see the KMS API action that was called – a decrypt – at what time it happened, against which key, against which EBS volume, and from which resource. We can also see the user that made the request, which helps us troubleshoot in a case where we need to disable a key: it gives us time to hunt down the uses of that key before we actually take that action.
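As a sketch of that hunt, here’s a hypothetical helper that filters a batch of CloudTrail records down to the KMS calls that touched one particular key. The record shape mirrors the kind of payload described above; the function name and output fields are illustrative.

```python
def key_usage_events(records, key_arn):
    """Filter CloudTrail records down to KMS calls that touched
    one particular key, so its users can be tracked down before
    the key is disabled or deleted."""
    hits = []
    for record in records:
        # Only KMS API calls are of interest here.
        if record.get("eventSource") != "kms.amazonaws.com":
            continue
        # CloudTrail lists the resources (e.g. the key ARN) the call touched.
        arns = [res.get("ARN") for res in record.get("resources", [])]
        if key_arn in arns:
            hits.append({
                "action": record["eventName"],
                "user": record["userIdentity"]["arn"],
                "time": record["eventTime"],
            })
    return hits
```

Run over the records delivered to your CloudTrail S3 bucket, this produces a short list of who called Encrypt/Decrypt against the key and when.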

Overall, these are the takeaways. Don’t write the crypto yourself – AWS provides AES 256-bit GCM encryption, so you definitely don’t need to write it yourself. If you want to look at the third-party attestations in terms of SOC compliance, the FIPS 140-2 standard, PCI, and so forth, you can use AWS Artifact – if your auditors are looking for that and you have that requirement. With this, you have the level of trust of knowing that a third party has examined how the service works within the AWS data centers. The other thing we went over is how encryption becomes part of the software development lifecycle using CloudFormation (you can use other tools as well): you can build in static analysis checks to ensure that encryption is in place prior to launching resources as part of your software systems and infrastructure, and you can automate all of it in a deployment pipeline. You can get encryption in transit through the use of CloudFront and AWS Certificate Manager – and with CloudFront you can integrate AWS Shield for DDoS protection. Of course, KMS is the underpinning of all this. KMS allows us to create and delete keys, grant access to them with fine-grained permissions, and rotate them, with the assurance that key material doesn’t leave the hardware security module on which it runs. You can also use Secrets Manager to store secrets like usernames and passwords – things you need to keep stateful and encrypted; it performs rotation for you and lets you generate random secrets. Likewise, ACM performs certificate rotation as well.
We also run detective controls for runtime encryption checks using AWS Config Rules or CloudWatch Event Rules, so that once something is in use (whether preproduction or production) we can run those checks to ensure that we’re always in compliance. We can use CloudTrail – and encrypt the CloudTrail logs themselves – but also monitor key usage so we know how keys are being used and what actions we might need to take before, say, deleting a key. And finally, when it comes to internal or external audits you need to go through: if you’re able to build all of this into your end-to-end software development lifecycle, the whole process gets easier and you’re always in compliance with the directives you have in place, and with any of the compliance regimes out there – both for security of the cloud, which AWS provides, and for security in the cloud, through the way you use these services as part of your overall software development lifecycle.

Thanks very much. You can reach me on Twitter or by email, and if you have any questions, feel free to reach out to us at Stelligent.

The post Dance like Nobody’s Watching; Encrypt like Everyone Is appeared first on Stelligent.

from Blog – Stelligent

AWS re:Inforce: Novelties + Key Insights

AWS re:Inforce: Novelties + Key Insights

Are you a cloud security expert or enthusiast? Were you at the first-ever security-focused AWS conference in Boston? If your answers are Yes and No respectively, I have just one more question for you: where were you?

The first-ever AWS re:Inforce was definitely a success by all measures (not to mention all the free t-shirts I got). It highlighted all the security components you need to properly secure your account, infrastructure, and application in AWS.

Here are my key takeaways that will highlight features to help you better secure your workload.

10 Security Pillars of AWS

Access Layer

Who has access to your account and what can they do?
  1. Federated Access
  2. Programmatic Key Rotation
  3. Enforce Multi-Factor Authentication
  4. Disable Root Account Programmatic Access
  5. Utilize IAM Groups to grant permissions
  6. Cognito – Identity management for your apps
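Item 2 above, programmatic key rotation, can be sketched as a check over the access-key metadata that IAM’s `list_access_keys` API returns. The field names match that API, but the helper, its name, and the 90-day threshold are illustrative; a real job would then call IAM to create a new key and deactivate the old one.

```python
from datetime import datetime, timedelta, timezone


def keys_needing_rotation(access_keys, max_age_days=90):
    """Flag IAM access keys older than the rotation threshold.

    Each entry mirrors IAM list_access_keys metadata:
    {"AccessKeyId": str, "CreateDate": datetime}.
    """
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    return [k["AccessKeyId"] for k in access_keys if k["CreateDate"] < cutoff]
```

Scheduled via CloudWatch Events, a Lambda wrapping this check can enforce rotation across every user in the account.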

Account Layer

Is my account exposed or compromised?
  1. Amazon GuardDuty to detect intrusion
  2. AWS Config to monitor changes to Account
  3. AWS Trusted Advisor to audit security best practices
  4. AWS Organizations to manage multiple accounts
  5. AWS Control Tower to secure and enforce security standards across accounts

Network Layer

Is my network properly secured?
  1. Network ACLs to control VPC incoming and outgoing traffic
  2. VPC to isolate cloud resources
  3. AWS Shield for DDoS protection
  4. Web Application Firewall (WAF): Filter malicious web traffic
  5. PrivateLink: Securely access services hosted on AWS
  6. Firewall Manager: Manage WAF rules across accounts

Compute Layer

Can my compute infrastructure be hacked for bitcoin mining?
  1. AWS Systems Manager for patching
  2. AMI Hardening using CIS Standards
  3. Security Groups to limit port access
  4. AWS Inspector to identify security vulnerabilities
  5. AWS CloudFront to limit exposure of your origin servers
  6. Application Load Balancers to limit direct traffic to your app servers

Application Layer

Can my application be compromised or brought down by hackers?
  1. AWS Shield and Shield Advanced for DDoS protection
  2. AWS X-Ray for application request tracing
  3. AWS Cloudwatch for application logs
  4. Application runtime monitoring – Contrast, etc.
  5. AWS Inspector to identify application vulnerabilities

Pipeline Layer

Am I enforcing security standards in my build and deploy systems?
  1. Infrastructure code analysis with cfn_nag
  2. Application code analysis – Spotbugs, Fortify
  3. Dependency vulnerability checks – OWASP Dependency-Check
  4. Docker image scanning (if using docker) – Twistlock, Anchore CLI

Storage Layer

Always encrypt everything!
  1. KMS encryption for EBS volumes
  2. Server-Side Encryption for S3 Buckets
  3. RDS Encryption

Data Layer

Is my data safe? Am I leaking secrets?
  1. AWS Secrets Manager to rotate and manage secrets
  2. Amazon Macie to discover and classify data
  3. Regular Data backups and replication across regions
  4. Data Integrity Checks
  5. Client-side encryption

Transport Layer

Am I securely moving my data?
  1. Enforce SSL/TLS Encryption of all traffic
  2. AWS Certificate Manager to generate SSL Certificates
  3. ACM Private CA to create and deploy private certificates

Operation Layer

Are my engineers ready for security threats and breaches?
  1. Use PlayBooks and Runbooks to plan and prepare for security threats and breaches
  2. Utilize Cloud Native services when possible to leverage AWS best security practices

Other Noteworthy Mentions

Nitro Innovation

Nitro allows microservices concepts to be applied to hardware. This enables faster development and deployment of new instance types, while creating higher throughput and stability. Some security features include:

  • Utilizes the Nitro controller as the root of trust
  • Hardware acceleration of encryption
  • Firmware is cryptographically validated
  • Encryption keys are secured in Nitro devices
  • No SSH, hence, no human access

Nitro with FireCracker

This is most notably being used for running serverless workloads (Lambda) by enabling the sharing of hardware infrastructure between multiple accounts. The security features of Nitro make this possible. Some features include:

  • Minimal device model reduces memory footprint and attack surface area
  • Launches user-space code in <125 ms; up to 150 microVMs per second per host
  • Low memory overhead with a high density of VMs on each server

AWS Control Tower

The easiest way to set up and govern a secure, compliant multi-account AWS environment. Features include:

  • Prescriptive guidance on IAM, Landing Zones
  • Workflows to provision compliant accounts
  • Set up AWS with multi-account structure
  • Pre-configured architectures


That’s all folks! I’m looking forward to AWS re:Inforce 2020 in Houston. Until then, Stay Secured My Friends!

The post AWS re:Inforce: Novelties + Key Insights appeared first on Stelligent.


AWS CodePipeline Approval Gate Tracking

AWS CodePipeline Approval Gate Tracking

With the pursuit of DevOps automation and CI/CD (Continuous Integration/Continuous Delivery), many companies are now migrating their applications onto the AWS cloud to take advantage of the service capabilities AWS has to offer. AWS provides native tools to help achieve CI/CD, and one of the core services it provides for that is AWS CodePipeline, a service that allows a user to build a CI/CD pipeline for the automated build, test, and deployment of applications.

A common practice in using CodePipeline for CI/CD is to be able to automatically deploy applications into multiple lower environments before reaching production. These lower environments for deployed applications could be used for development, testing, business validation, and other use cases. As a CodePipeline progresses through its stages, it is often required by businesses that there are manual approval gates in between the deployments to further environments.

Each time a CodePipeline reaches one of these manual approval gates, a human is required to log into the console and either approve (allow the pipeline to continue) or reject (stop the pipeline from continuing) the gate. Oftentimes different teams or divisions of a business are responsible for their own application environments and, as a result, are also responsible for either allowing or rejecting a pipeline to continue deployment into their environment via the relevant manual approval gate.

A problem that a business may run into is figuring out a way to easily keep track of who is approving or rejecting which approval gates and in which pipelines. With potentially hundreds of pipelines deployed in an account, it may be very difficult to keep track of and record approval gate actions through manual processes. For auditing situations, this can create a cumbersome problem, as there may eventually be a need to provide evidence of who approved or rejected a specific pipeline on a certain date and the reasoning behind the result.

So how can we keep a long term record of CodePipeline manual approval gate actions in an automated, scalable, and organized fashion? Through the use of AWS CloudTrail, AWS Lambda, AWS CloudWatch Events, AWS S3, and AWS SNS we can create a solution that provides this type of record keeping.

Each time someone approves/rejects an approval gate within a CodePipeline, that API call is logged in CloudTrail under the event name “PutApprovalResult”. Through the use of an AWS CloudWatch event rule, we can configure that rule to listen for that specific CloudTrail API action and trigger a Lambda function to perform a multitude of tasks. This is what that CloudTrail event looks like inside the AWS console.

    "eventVersion": "1.05",
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "AAAABBBCCC111222333:newuser",
        "arn": "arn:aws:sts::12345678912:assumed-role/IamOrg/newuser",
        "accountId": "12345678912",
        "accessKeyId": "1111122222333334444455555",
        "sessionContext": {
            "attributes": {
                "mfaAuthenticated": "true",
                "creationDate": "2019-05-23T15:02:42Z"
            "sessionIssuer": {
                "type": "Role",
                "principalId": "1234567093756383847",
                "arn": "arn:aws:iam::12345678912:role/OrganizationAccountAccessRole",
                "accountId": "12345678912",
                "userName": "newuser"
    "eventTime": "2019-05-23T16:01:25Z",
    "eventSource": "codepipeline.amazonaws.com",
    "eventName": "PutApprovalResult",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "",
    "userAgent": "aws-internal/3 aws-sdk-java/1.11.550 Linux/4.9.137-0.1.ac.218.74.329.metal1.x86_64 OpenJDK_64-Bit_Server_VM/25.212-b03 java/1.8.0_212 vendor/Oracle_Corporation",
    "requestParameters": {
        "pipelineName": "testing-pipeline",
        "stageName": "qa-approval",
        "actionName": "qa-approval",
        "result": {
            "summary": "I approve",
            "status": "Approved"
        "token": "123123123-abcabcabc-123123123-abcabc"
    "responseElements": {
        "approvedAt": "May 23, 2019 4:01:25 PM"
    "requestID": "12345678-123a-123b-123c-123456789abc",
    "eventID": "12345678-123a-123b-123c-123456789abc",
    "eventType": "AwsApiCall",
    "recipientAccountId": "12345678912"

When that CloudWatch event rule is triggered, the Lambda function that it executes can be configured to perform multiple tasks including:

  • Capture the CloudTrail event log data from that “PutApprovalResult” API call and log it into the Lambda function’s CloudWatch log group.
  • Create a dated text file entry in an S3 bucket containing useful and unique information about the pipeline manual approval gate action.
  • Send out an email notification containing unique information about the pipeline manual approval gate action.
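The tasks above might be sketched as a Lambda handler like the following. This is a hedged outline, not the published solution’s code: it assumes the CloudWatch Events envelope carries the CloudTrail record under `"detail"` with the shape shown earlier, the helper names are illustrative, and the boto3 S3/SNS calls are left as comments.

```python
import json


def build_s3_key(detail, event_time):
    """Build a dated S3 object key like
    pipeline_name/year/month/day/gate_name_timestamp.txt"""
    params = detail["requestParameters"]
    date_part, time_part = event_time.split("T")
    year, month, day = date_part.split("-")
    return "{}/{}/{}/{}/{}_{}.txt".format(
        params["pipelineName"], year, month, day,
        params["actionName"], time_part.rstrip("Z").replace(":", "-"))


def summarize_approval(detail):
    """Pull the fields worth recording out of a PutApprovalResult event."""
    params = detail["requestParameters"]
    return {
        "pipeline": params["pipelineName"],
        "stage": params["stageName"],
        "action": params["actionName"],
        "status": params["result"]["status"],
        "summary": params["result"]["summary"],
        "approver": detail["userIdentity"]["arn"],
        "time": detail["eventTime"],
    }


def lambda_handler(event, context):
    # CloudWatch Events wraps the CloudTrail record under "detail".
    detail = event["detail"]
    record = summarize_approval(detail)
    key = build_s3_key(detail, detail["eventTime"])
    # The full solution would persist and notify, e.g.:
    #   boto3.client("s3").put_object(Bucket=..., Key=key,
    #                                 Body=json.dumps(record))
    #   boto3.client("sns").publish(TopicArn=..., Message=json.dumps(record))
    print(json.dumps(record))  # also lands in the CloudWatch log group
    return key
```

For the sample event shown earlier, this yields an object key of `testing-pipeline/2019/05/23/qa-approval_16-01-25.txt`.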

The CloudWatch Event Rule provides a way to narrow down and capture the specific CloudTrail event named “PutApprovalResult”. Below is a snippet of this event rule defined in AWS CloudFormation.

    Type: AWS::Events::Rule
    Properties:
      Description: Event Rule that tracks whenever someone approves/rejects an approval gate in a pipeline
      EventPattern:
        {
          "source": [
            "aws.codepipeline"
          ],
          "detail-type": [
            "AWS API Call via CloudTrail"
          ],
          "detail": {
            "eventSource": [
              "codepipeline.amazonaws.com"
            ],
            "eventName": [
              "PutApprovalResult"
            ]
          }
        }

The Lambda Function provides the automation and scalability needed to perform this type of approval gate tracking at any scale. The SNS topic provides the ability to send out email alerts whenever someone approves or rejects a manual approval gate in any pipeline.

The recorded text file entries in the S3 bucket provide the long-term, durable storage solution for keeping track of CodePipeline manual approval gate results. To ensure an easy way to go back and discover those results, it is best to organize the entries in an appropriate manner, such as “pipeline_name/year/month/day/gate_name_timed_entry.txt”. An example of a recording could look like this:


Below is a diagram of a solution that can provide the features described above.

The source code and CloudFormation template for a fully built out implementation of this solution can be found here: codepipeline-approval-gate-tracking.

To deploy this solution right now, click the Launch Stack button below.

The post AWS CodePipeline Approval Gate Tracking appeared first on Stelligent.

from Blog – Stelligent

Extending cfn_nag with custom rules

Extending cfn_nag with custom rules

Stelligent cfn_nag is an open source command-line tool that performs static analysis of AWS CloudFormation templates. The tool runs as a part of your pre-flight checks in your automated delivery pipeline and can be used to prevent a CloudFormation update from occurring that would put you in a compromised state. The core gem provides over 50 custom rules at the time of writing this blog post. These rules cover a wide range of AWS resources and are geared towards keeping your AWS account and resources secure.

The typical open source contribution model allows the community to propose additions to the core gem. This tends to be the most desirable outcome: chances are that if you find something useful, someone else will find it useful too. However, there are instances where custom rules have company- or project-specific logic that may not make sense to put into the cfn_nag core gem. To accomplish this, we recommend wrapping the cfn_nag core gem with a wrapper gem that contains your custom rules.

This article will walk you through the process necessary to create a wrapper gem. We have published an example wrapper gem which is a great starting point.

Adding custom rules with a gem wrapper

The following file structure is the bare minimum required for a cfn_nag wrapper gem. In this example the name of the gem is cfn-nag-custom-rules-example and it provides one custom rule called ExampleCustomRule. You will execute cfn_nag (or cfn_nag_scan) with your wrapper’s executable bin/cfn_nag_custom.

|- Gemfile
|- bin
|    \- cfn_nag_custom
|- cfn-nag-custom-rules-example.gemspec
|- lib
|    \- cfn-nag-custom-rules-example.rb
\- rules
     \- ExampleCustomRule.rb

The first of the important files is the Gemfile which is boilerplate.


# frozen_string_literal: true

source 'https://rubygems.org'

gemspec


After that is the executable ruby script wrapper used to load up your custom rules on top of the core rules. It will pass through all arguments to the underlying cfn_nag (or cfn_nag_scan) command as you see fit.


#!/usr/bin/env ruby

args = *ARGV
path = Gem.loaded_specs['cfn-nag-custom-rules-example'].full_gem_path
command = "cfn_nag -r #{path}/lib/rules #{args.join(' ')}"

# Run cfn_nag with the custom rules directory included
exec(command)

Up next is the gemspec. There is nothing to note here outside of requiring the core gem as a dependency. Feel free to pin it any way you would like but we would recommend not always grabbing the latest version.


# frozen_string_literal: true

Gem::Specification.new do |s|
  s.name          = 'cfn-nag-custom-rules-example'
  s.license       = 'MIT'
  s.version       = '0.0.1'
  s.bindir        = 'bin'
  s.executables   = %w[cfn_nag_custom]
  s.authors       = ['Eric Kascic']
  s.summary       = 'Example CFN Nag Wrapper'
  s.description   = 'Wrapper to show how to define custom rules with cfn_nag'
  s.homepage      = 'https://github.com/stelligent/cfn_nag'
  s.files         = Dir.glob('lib/**/*.rb')

  s.require_paths << 'lib'
  s.required_ruby_version = '>= 2.2'

  s.add_development_dependency('rspec', '~> 3.4')

  s.add_runtime_dependency('cfn-nag', '>= 0.3.73')
end

The lib/cfn-nag-custom-rules-example.rb is just a blank Ruby file required as part of how gems work and are loaded. Finally, we have our example custom rule. Any file in lib/rules that ends in Rule.rb will be loaded as a custom rule in cfn_nag. The example rule here enforces that all S3 buckets be named “foo”. Please note that custom rules have a rule id that starts with C for custom rule. Rule types can be one of the following.

  • Violation::FAILING_VIOLATION – Will result in a failure
  • Violation::WARNING – Informational message. Only causes a failure if --fail_on_warnings is set to true


# frozen_string_literal: true

require 'cfn-nag/violation'
require 'cfn-nag/custom_rules/base'

class ExampleCustomRule < BaseRule
  def rule_text
    'S3 buckets should always be named "foo"'
  end

  def rule_type
    Violation::FAILING_VIOLATION
  end

  def rule_id
    'C1' # Custom Rule #1
  end

  def audit_impl(cfn_model)
    resources = cfn_model.resources_by_type('AWS::S3::Bucket')

    violating_buckets = resources.select do |bucket|
      bucket.bucketName != 'foo'
    end

    # Return the logical resource ids of the violating buckets
    violating_buckets.map(&:logical_resource_id)
  end
end

At this point, you can build, install, and execute your custom rules.

gem build cfn-nag-custom-rules-example.gemspec
gem install cfn-nag-custom-rules-example-0.0.1.gem
cfn_nag_custom buckets_with_insecure_acl.json

This results in:

{
  "failure_count": 3,
  "violations": [
    {
      "id": "C1",
      "type": "FAIL",
      "message": "S3 buckets should always be named \"foo\"",
      "logical_resource_ids": [...]
    },
    {
      "id": "W31",
      "type": "WARN",
      "message": "S3 Bucket likely should not have a public read acl",
      "logical_resource_ids": [...]
    },
    {
      "id": "F14",
      "type": "FAIL",
      "message": "S3 Bucket should not have a public read-write acl",
      "logical_resource_ids": [...]
    }
  ]
}

As you can see, it evaluated core cfn_nag rules as well as your custom rule.

Additional Resources

The post Extending cfn_nag with custom rules appeared first on Stelligent.

from Blog – Stelligent

Getting Started With The AWS IoT Platform

Getting Started With The AWS IoT Platform

The AWS IoT platform consists of many products and services: Greengrass, IoT Core, Amazon FreeRTOS, and Device Defender to name a few. It can be difficult to know where to start when piecing together each of the offerings to create an IoT solution. This guide covers automating Amazon FreeRTOS builds and development as well as some of the IoT Core services.

A good place to start is with selecting a development device. Devices can take time to ship. Selecting a device before doing any other work will ensure that you have actual hardware to test on as soon as you are ready. This guide will use the NXP IoT module and an NXP LPC Link2 debugger.

Any hardware from the list of Amazon FreeRTOS supported devices can be automated, but the setup for each device will vary dramatically. If you plan on following the rest of this guide, it is recommended that you get the NXP IoT module. The debugger, while optional, is recommended.

The debugger makes flashing the device very simple. It also allows you to attach to the device and set breakpoints as well as step through the code if you have issues. The Link2 debugger will also make flashing the final Release build easier.

Getting Started With Amazon FreeRTOS

Amazon has done a great job providing documentation on getting started with any of the recommended hardware. Before you can automate building the firmware you’ll want to follow the Amazon guide here. The basic setup covered in that guide for the NXP module is:

  • Choose an IDE
  • Download Amazon FreeRTOS
  • Download NXP SDK’s for the device
  • Run a demo application on the device
  • Test the MQTT connection using IoT Core Test

Make sure to use the MCUXpresso steps. MCUXpresso is an IDE by NXP, the same manufacturer as the MCU and debugger.

The getting started guide from Amazon does not cover setting up a release build. After completing the guide, you can create a Release build by copying the provided Debug build:

  • Right-click on the project in the IDE.
  • Select Build Configurations > Manage.
  • Add a new configuration and name it Release.
  • Copy the settings from the Debug build, then click OK.
  • Select the Release configuration and create a new build.

Make sure your Release build finishes without issues before continuing.

Automating Our MCUXpresso Build

Automating the Release build early will save a lot of headache later. The ability to reproduce a build becomes harder as a project grows and dependencies are added. Automating the build is straightforward with CodePipeline and CodeBuild. Use the Launch Stack button below to create a new CloudFormation stack containing a CodePipeline pipeline with a CodeCommit source stage and CodeBuild project using a custom Docker image.

There are a few options to choose when launching the stack. It is recommended you stick with the defaults and leave all other options empty when getting started.

The CloudFormation stack creates an EC2 Container Registry (ECR) for Docker images in addition to the pipeline and CodeBuild project.

Turn your project into a git repository. Push the contents to your private CodeCommit repository created by CloudFormation. Read the Amazon documentation here for detailed instructions on using Git and CodeCommit.

Warning: your project contains a certificate to connect to AWS IoT Core and may also have your wifi SSID and password. If you choose GitHub you should use a private repository only.

There are other source control options in the CloudFormation template. CodeCommit gives you an easy way to get started with private repositories. A private repository is a good way to ensure you do not expose any secrets. CodeCommit is the default source in the stack.

Create A Docker Image

Return to the getting started guide from Amazon and download the MCUXpresso IDE for Linux as well as the SDK. These two files are used in a custom image for AWS CodeBuild.

  1. Create a new folder on your computer
  2. Add the IDE .deb.bin file
  3. Add the SDK .zip file
  4. Download and modify the Dockerfile
  5. Change lines 5 and 6 to the SDK and IDE version you are using

View the code on Gist.

This Dockerfile will set up all the dependencies required to run MCUXpresso in a Docker container. It is possible to use just gcc-arm to make a Release build, which would reduce the size of the image considerably. However, driving the IDE with a few simple CLI flags produces a working build using the same tools you are already using. This should ensure the builds are consistent and work for both development and production releases.

Build the Docker image locally:

docker build -t nxp-lpc54018-iot-module .

Push Docker Image To ECR

To use the Docker image in CodeBuild it needs to be hosted in a registry. The CloudFormation stack that was launched earlier created a registry for you. Get the value of ECRImageName in the Outputs tab of the CloudFormation stack. Next, use the AWS CLI to log in to the registry and push the Docker image into it.

Login to the registry:

$(aws ecr get-login --no-include-email --region us-west-2)

Tag the local image:

docker tag nxp-lpc54018-iot-module:latest 000000000000.dkr.ecr.us-west-2.amazonaws.com/nxp-lpc54018-iot-module:latest

Push the image into the registry:

docker push 000000000000.dkr.ecr.us-west-2.amazonaws.com/nxp-lpc54018-iot-module:latest

Replace 000000000000.dkr.ecr.us-west-2.amazonaws.com/nxp-lpc54018-iot-module with the value of ECRImageName.

The image is a few GB in size and will take some time to upload.

Trigger The Pipeline

CodePipeline handles all the orchestration for building code. It will take a source from a repository (S3, CodeCommit, or GitHub), build the code, and produce an artifact (build).

CodePipeline can do more than this though. Actions like testing, approvals, security, and using custom environments are all possible. The MCUXpresso Docker image you built and pushed to ECR earlier is the custom build environment. This was set up automatically through CloudFormation.

Make a change to the local IoT project files. Add a new line to a single file or create a new empty file to test this process. Add, commit, and push your code to CodeCommit. Head over to the CodePipeline pipeline and the source stage should start to process.

Test The Build

Once the CodeBuild project has completed you can download the build artifacts and test them on the local device. Open the CodeBuild project and click on the latest successful build. Click on the link to the Output Artifacts in the build status box. The link takes you to S3, where you can download the zip file CodeBuild created. Download the file and unzip it. In the archive is a file named aws_demos.axf. AXF is an ARM executable file that normally needs to be converted to run on the NXP device. However, you can use MCUXpresso to deploy directly to the device if you have the LPC Link2 debugger.

In the editor there is a button that looks like a book that opens the GUI flash tool. Click the GUI flash tool icon to open the probe discovery dialog box. Select the LPC Link2 debugger, then click OK. The next screen has a lot of settings and options available; fortunately, you only need to worry about a couple of them. In the Target Operations section there is a subsection titled Options. Click the File System button to choose the file to program. Make sure the format to use is set to axf. Click Run to start the process of loading the axf file onto the board.

Once the build is flashed to the device a couple of new dialogs will open in the editor. Do not close these. Once the dialogs are closed the debugger will stop and the board will no longer send messages to IoT Core.

Open the AWS IoT Core test console. Enter freertos/demos/echo in the subscription topic, then click Subscribe to topic. After a few seconds you should start receiving ACK test messages from the device.

This article goes into more detail about flashing the device.

Next Steps

Here are some suggestions for more work that would improve this pipeline. The very first thing you should do is remove the secrets. Keeping secrets in source control is a huge security risk. The secrets are in plain text. If the repository code accidentally enters a public repository you could leak secrets to anyone. CodeBuild supports secrets from the SSM Parameter Store. After you secure the secrets you could add some unit tests.

Unit testing the code will improve the quality of the code. Also, unit testing will give you confidence when introducing new changes. Unit tests prevent bugs and help avoid regressions. And, you will gain increased stability of the deployed application. An automated pipeline is the perfect place to run unit tests.

After adding unit tests you might consider adding a buildspec file to your repository. The CloudFormation stack that was launched defines the CodeBuild build steps inline. Separate concerns by moving those steps into a buildspec file in your repository. This puts the build configuration under version control. Changes to your code may require new build steps, tests, or linting, and the configuration for how to build logically belongs next to the code being built.
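
As a sketch of what that might look like (the parameter names and build command are placeholders, not from the original stack), a minimal buildspec.yml that also pulls secrets from SSM Parameter Store could be:

```yaml
version: 0.2

env:
  parameter-store:
    # Placeholder SSM parameter names -- store your real secrets there
    WIFI_SSID: "/iot/dev/wifi-ssid"
    WIFI_PASSWORD: "/iot/dev/wifi-password"

phases:
  build:
    commands:
      # Placeholder for the headless MCUXpresso build invocation
      - ./build.sh Release

artifacts:
  files:
    - '**/aws_demos.axf'
```

CodeBuild resolves the parameter-store entries at build time and exposes them as environment variables, so the values never need to live in the repository.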

One more thing to consider is switching to gcc-arm for builds. This is a minor optimization for the build process. However, the Docker image with MCUXpresso is ~4.5 GB. Updating, building, and pushing changes to an image that large takes time. If you can reduce the size of the image you will reduce the cycle time of updates to that image.

Further Reading



The post Getting Started With The AWS IoT Platform appeared first on Stelligent.

from Blog – Stelligent

DevOps on AWS Radio: DevOps Culture with Jeff Gallimore (Episode 24)

DevOps on AWS Radio: DevOps Culture with Jeff Gallimore (Episode 24)

In this episode, we chat with Excella Co-Founder and Partner Jeff Gallimore about all things DevOps culture. In this episode we take a departure from our technical deep dives to explore culture: how to measure culture, culture typology, psychological safety, how continuous delivery impacts culture, how culture affects performance, and more! We even get into a discussion around real-world experiences such as a joint venture called NUMMI. Listen here:

Here are the show notes:

DevOps on AWS News

Episode Topics

  1. Jeff Gallimore Intro & Background
    1. The Excella Story – Founding/ Co-Founding
    2. How Excella helps customers 
    3. AWS, Cloud, DevOps 
  2. Culture and safety – why is it important, how to measure it, how to change it, burn out, etc.
    1. Culture 
    2. CALMS framework (Key aspects of DevOps) 
      1. Culture
      2. Automation
      3. Lean
      4. Measurement
      5. Sharing
    3. “Culture eats strategy for breakfast” – Peter Drucker
    4. In Gartner’s 2018 CIO Survey, “46% of respondents named culture as the biggest barrier to scaling digital transformation”. 
  3. The State of DevOps 2015
    1. Dr. Ron Westrum’s Culture Typology  – Three Cultures Model
      1. Pathological
      2. Bureaucratic
      3. Generative
    2. Implementing Westrum’s Three Cultures Model in real-world examples
  4. Continuous Delivery and Culture
    1. Mindset shift
    2. Technical practices
  5. Psychological safety
    1. Westrum’s Culture Typology
    2. Google re:Work Study
    3. Psychological safety
      1. “If I make a mistake on our team, it is not held against me”. 
    4. Accountability
    5. Just Culture – Sidney Dekker
      1. “Human Error”
  6. Amazon S3 Outage Incident
    1. AWS publishes their after action report
  7. Culture and performance 
    1. Burn out
  8. NUMMI Story 
    1. A joint venture between Toyota and GM
    2. Two main changes; culture and leadership
    3. Stopping the line
    4. Andon Cord
      1. The measure of Andon Cord pulls per day
      2. Virtual Andon Cord at Excella
  9. Coding – Clojure

Additional Resources

About DevOps on AWS Radio

On DevOps on AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery on the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps on AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics

The post DevOps on AWS Radio: DevOps Culture with Jeff Gallimore (Episode 24) appeared first on Stelligent.

from Blog – Stelligent

Value Stream Mapping with Mock Pipeline

Value Stream Mapping with Mock Pipeline

Value stream mapping (VSM) is a technique for modeling process workflows. In software development, one of the key reasons for creating a VSM is determining the bottlenecks slowing down the delivery of value to end users. While VSM is used in many different industries (mostly related to physical goods), the topic of this post is how to create a VSM assuming familiarity with software delivery but not with value stream mapping.

Some organizations skip the step of mapping their current state. The most common reason is that they believe they clearly understand where their bottlenecks exist (e.g. “I already know it’s bad so why do we need to go through an exercise telling us what we already know”). Another reason is that – while often not admitted – they feel they don’t fully grasp how to create a value stream map or feel like it’s considerable effort without the commensurate return on investment. Another common complaint is that while the problems exist in other systems or organizations, their team might be working on a greenfield system so it’s unnecessary – in their opinion – to know the current state of the processes for other teams.

The common thread with this reluctance usually comes down to a cognitive bias that one’s own view accurately depicts the entire value stream. What’s more, when going through a transformative effort that requires some initial investment, you’ll need to be capable of providing a consistent, validated depiction before and after the improvements are applied in order to demonstrate the impact of the transformation.

In using VSM across your teams, you can reduce the time spent arguing over “facts” (i.e. others’ perspectives on the value stream). You don’t need to be an expert in value stream mapping to be effective. Following Pareto’s 80/20 principle is an effective guide for focusing on the 20% that matters. Moreover, creating a mock pipeline better models software delivery value streams than a generic VSM.

In this post, you’ll learn the steps in creating a mock deployment pipeline using AWS CodePipeline and inline AWS CloudFormation. This mock deployment pipeline will represent a VSM using an open source tool we have (creatively!) called mock-pipeline. By utilizing CloudFormation, your VSM is defined as versioned code making it easy to iterate rapidly on changes to your VSM based upon feedback from other team members.


Broadly speaking, the idea of DevOps is about getting different functions (often, these are different teams at first) to work together toward common goals of accelerating speed while increasing stability (i.e. faster with higher quality). These accelerants typically get implemented through organizational, process, culture, and tooling improvements. In order to improve, you must know where you currently are. Otherwise, you might be improving the wrong things. It’s like trying to improve your health without basic metrics like your blood pressure, blood cholesterol, fasting blood glucose, or body mass index. The purpose of value stream mapping is to get basic knowledge of the current state so you know what and how to fix it. Moreover, if you can get a real-time view into your value stream and its key metrics (deployment frequency, lead time for changes, MTTR, and change failure rate), you’re in a much better position to effect change.

There are two primary approaches for measuring the lead time – either from origination of an idea until it gets delivered to users or from the time an engineer commits code to version control until it’s delivered to end users. Since it’s more consistent to measure from code commit to production, we’re choosing this approach.

Value Stream Mapping Terms

There’s some conflict among industry experts on the definitions of basic Lean terms so, unless otherwise noted, I’m using the definitions from the excellent book, Value Stream Mapping: How to Visualize Work and Align Leadership for Organizational Transformation. The most important thing is to use consistent terminology among team members.

  • Process Time – “Typically expressed in minutes or hours, process time represents the hands-on “touch time” to do the work. It also includes “talk time” that may be regularly required to clarify or obtain additional information related to a task (including meetings), as well as “read and think time” if the process involves review or analysis [Source].”
  • Lead time (LT) – “also referred to as throughput time, response time, and turnaround time—is the elapsed time from the moment work is made available to an individual, work team, or department until it has been completed and made available to the next person or team in the value stream. Lead time is often expressed in hours, days, or even weeks or months [Source].” There are metrics within lead time (such as: work in process (WIP), batch size, queue time, and wait time) that help diagnose the source of bottlenecks in the process. Note that queue time (the time it takes for a person, signal, or thing to be attended to – which includes the time before work that adds value to a product is performed) takes about 90 percent of total lead time in most production organizations [1]
  • Percent Complete and Accurate (%C&A) – “obtained by asking downstream customers what percentage of the time they receive work that’s “usable as is,” meaning that they can do their work without having to correct the information that was provided, add missing information that should have been supplied, or clarify information that should have and could have been clearer” [Source].
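
These measures compose across a value stream: total lead time is the sum of each step’s lead time, the activity ratio is total process time divided by total lead time, and the rolled %C&A is the product of the individual step percentages. A small Ruby sketch with hypothetical step data:

```ruby
# Each step in the value stream: process time and lead time in hours,
# and %C&A expressed as a fraction between 0 and 1.
Step = Struct.new(:process_time, :lead_time, :pct_ca)

steps = [
  Step.new(1.0, 8.0, 0.9),   # hypothetical: code review
  Step.new(0.5, 24.0, 0.8),  # hypothetical: manual QA hand-off
  Step.new(0.25, 2.0, 1.0)   # hypothetical: automated deploy
]

total_lead_time = steps.sum(&:lead_time)                       # 34.0 hours
activity_ratio  = steps.sum(&:process_time) / total_lead_time  # ~0.05 (mostly queue time)
rolled_pct_ca   = steps.map(&:pct_ca).reduce(:*)               # ~0.72 (rework compounds)
```

Note how small the activity ratio is relative to lead time; this matches the observation above that queue time dominates most production organizations.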

In my post, Measuring DevOps Success with Four Key Metrics, I summarized the four software delivery metrics as described in the book, Accelerate:

  • Deployment frequency – the number of times in which software is deployed to production or to an app store. This also provides a proxy for batch size.
  • Lead time for changes – “the time it takes to go from code committed to code successfully running in production”. This is a key number you can obtain by VSM.
  • Time to restore service – the average time it takes to restore service.
  • Change failure rate – how often deployment failures occur in production that require immediate remedy (particularly, rollbacks). This measure has a strong correlation to the percentage complete and accurate (i.e. “rework”).
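
Of these, lead time for changes falls straight out of a value stream map measured from commit to production. A Ruby sketch with hypothetical timestamps, summarizing per-change lead times with a median:

```ruby
require 'time'

# Hypothetical (commit time, production deploy time) pairs.
changes = [
  ['2019-04-01T09:00:00Z', '2019-04-01T11:00:00Z'],  # 2 hours
  ['2019-04-02T09:00:00Z', '2019-04-03T09:00:00Z'],  # 24 hours
  ['2019-04-04T09:00:00Z', '2019-04-04T10:00:00Z']   # 1 hour
]

lead_times_hours = changes.map do |committed, deployed|
  (Time.parse(deployed) - Time.parse(committed)) / 3600.0
end

# Median of an odd-length list: the middle element after sorting.
median_lead_time = lead_times_hours.sort[lead_times_hours.size / 2] # => 2.0
```

A median is often more useful than a mean here because a single slow change (the 24-hour one above) would otherwise dominate the summary.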

The act of value stream mapping while considering the four key DevOps metrics will help focus the effort on measuring and then improving speed and stability. You can think of value stream mapping as the technique used to determine the four DevOps metrics.

Mock Pipeline

Mock Pipeline is an open source tool for modeling value stream maps regardless of your tech stack, cloud provider, or data center. With Mock Pipeline, you can define your value stream map as code in order to visualize all the steps in your commit to production lifecycle. While it uses AWS services/tools such as AWS CloudFormation and AWS CodePipeline, it can model any technology platform.

Fork the Mock Pipeline Repo

These instructions assume you’re using AWS Cloud9. Adapt the instructions if you’re using a different IDE.

If you don’t have a GitHub account, create a free one by going to GitHub Signup. Make a note of the userid you created (we’ll refer to it as YOURGITHUBUSERID).

Login to your GitHub account.

Go to the mock-pipeline GitHub repository.

Click the Fork button. A message will display “Where do you want to fork this to?“.

Click on the button that displays Fork to YOURGITHUBUSERID.

From your Cloud 9 terminal, clone the newly forked repo (replacing YOURGITHUBUSERID in the example):

git clone https://github.com/YOURGITHUBUSERID/mock-pipeline.git
cd mock-pipeline
sudo su
sudo curl -s https://getmu.io/install.sh | sh

Note: The mock pipeline tool uses an open source framework called mu which generates CloudFormation templates that provision AWS resources.

Deploy Value Stream as a Pipeline

Make modifications to your local mu.yml to change the CodePipeline action names. For example, precede several of the action names with your initials or first name. You’re doing this to ensure the changes get deployed.

Save the changes locally and commit them to your remote repository.

git commit -am "initial value stream" && git push

Run the mu pipeline upsert:

mu pipeline up -t GITHUBTOKEN

Your GITHUBTOKEN will look something like this: 2bdg4jdreaacc7gh7809543d4hg90EXAMPLE. To get or generate a token go to GitHub’s Token Settings.

After a few of the CloudFormation stacks have launched, go to the CodePipeline console and look for a pipeline with something like mock-pipeline in its name. Select this pipeline and ensure the local changes you made are visible in the pipeline.

Redeploy Changes

In this section, you will modify the action names and the order. In particular, you want to alter the model to change the order and name of InfrastructureAnalysis and ProvisionEnvironment actions so that the static analysis runs prior to provisioning the environments. When the two are shown running side by side, it represents actions running in parallel. To do this, you need to terminate the current pipeline. First, you need to get a list of service pipelines managed by mu by running this command:

mu pipeline list

Then, use the proper service_name obtained from the list command in the following command to terminate the pipeline.

mu pipeline terminate [<service_name>]

Wait several minutes for the CloudFormation stacks to finish terminating.

Now, you can make modifications to your local mu.yml to change the CodePipeline action order and names. An example is shown in the image below.

Once you’ve made changes, commit them to your remote repository.

git commit -am "modify action order in acceptance stage" && git push

Run the mu pipeline upsert again.

mu pipeline up -t GITHUBTOKEN

After a few of the CloudFormation stacks have launched, once again, go to the CodePipeline console and look for a pipeline with something like mock-pipeline in its name. Select this pipeline and ensure the local changes you made are visible in the pipeline.


You can use value stream mapping to obtain the four key software delivery metrics but just like with your health, knowing these metrics is only part of the battle. The other and crucial part is in improving them by incorporating capabilities into daily practices. In the Accelerate book, the authors describe 22 capabilities listed below on which to focus improvements based on the metrics.

Continuous Delivery Capabilities

  • Version control
  • Deployment automation
  • Continuous integration
  • Trunk-based development
  • Test automation
  • Test data management
  • Shift left on security (DevSecOps)
  • Continuous delivery (CD)

Architecture Capabilities

  • Loosely coupled architecture
  • Empowered teams

Product and Process Capabilities

  • Customer feedback
  • Value stream
  • Working in small batches
  • Team experimentation

Lean Management and Monitoring Capabilities

  • Change approval processes
  • Monitoring
  • Proactive notification
  • WIP limits
  • Visualizing work

Cultural Capabilities

  • Westrum organizational culture
  • Supporting learning
  • Collaboration among teams
  • Job satisfaction
  • Transformational leadership

For example, continuous delivery predicts lower change fail rates and less time spent on rework or unplanned work, including break/fix work, emergency software deployments, patches, etc. Moreover, keeping system and application configuration in version control was more highly correlated with software delivery performance than keeping application code in version control. Teams using short-lived branches (integration times less than a day) combined with short merging and integration periods (less than a day) do better in terms of software delivery performance than teams using longer-lived branches. [Source] In other words, by incorporating or improving one or more of these capabilities, you will likely improve one or more of the four metrics, which is correlated with better outcomes based on the data analysis.



In this post, we covered how to use a managed deployment pipeline workflow service (i.e. CodePipeline) to efficiently model a value stream map in order to assess the current state and accelerate speed and confidence in delivering software to end users in production.

The post Value Stream Mapping with Mock Pipeline appeared first on Stelligent.

from Blog – Stelligent