
AWS CodePipeline Approval Gate Tracking

With the pursuit of DevOps automation and CI/CD (Continuous Integration/Continuous Delivery), many companies are migrating their applications to the AWS cloud to take advantage of the service capabilities AWS has to offer. AWS provides native tools to help achieve CI/CD, and one of the core services it offers for this is AWS CodePipeline. CodePipeline is a service that allows a user to build a CI/CD pipeline for the automated build, test, and deployment of applications.

A common practice when using CodePipeline for CI/CD is to automatically deploy applications into multiple lower environments before reaching production. These lower environments could be used for development, testing, business validation, and other use cases. As a pipeline progresses through its stages, businesses often require manual approval gates between the deployments to successive environments.

Each time a pipeline reaches one of these manual approval gates, a human is required to log into the console and either approve (allow the pipeline to continue) or reject (stop the pipeline from continuing) the gate. Oftentimes different teams or divisions of a business are responsible for their own application environments and, as a result, are also responsible for allowing or rejecting a pipeline to continue deployment into their environment via the relevant manual approval gate.

A problem a business may run into is figuring out a way to easily keep track of who is approving or rejecting which approval gates in which pipelines. With potentially hundreds of pipelines deployed in an account, it can be very difficult to track and record approval gate actions through manual processes. For auditing purposes, this creates a cumbersome problem, as there may eventually be a need to provide evidence of why a specific pipeline was approved or rejected on a certain date and the reasoning behind the result.

So how can we keep a long-term record of CodePipeline manual approval gate actions in an automated, scalable, and organized fashion? Through the use of AWS CloudTrail, AWS Lambda, AWS CloudWatch Events, Amazon S3, and Amazon SNS, we can create a solution that provides this type of record keeping.

Each time someone approves or rejects an approval gate within a pipeline, that API call is logged in CloudTrail under the event name “PutApprovalResult”. Through the use of a CloudWatch Events rule, we can listen for that specific CloudTrail API action and trigger a Lambda function to perform a multitude of tasks. This is what that CloudTrail event looks like inside the AWS console:


{
    "eventVersion": "1.05",
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "AAAABBBCCC111222333:newuser",
        "arn": "arn:aws:sts::12345678912:assumed-role/IamOrg/newuser",
        "accountId": "12345678912",
        "accessKeyId": "1111122222333334444455555",
        "sessionContext": {
            "attributes": {
                "mfaAuthenticated": "true",
                "creationDate": "2019-05-23T15:02:42Z"
            },
            "sessionIssuer": {
                "type": "Role",
                "principalId": "1234567093756383847",
                "arn": "arn:aws:iam::12345678912:role/OrganizationAccountAccessRole",
                "accountId": "12345678912",
                "userName": "newuser"
            }
        }
    },
    "eventTime": "2019-05-23T16:01:25Z",
    "eventSource": "codepipeline.amazonaws.com",
    "eventName": "PutApprovalResult",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "1.1.1.1",
    "userAgent": "aws-internal/3 aws-sdk-java/1.11.550 Linux/4.9.137-0.1.ac.218.74.329.metal1.x86_64 OpenJDK_64-Bit_Server_VM/25.212-b03 java/1.8.0_212 vendor/Oracle_Corporation",
    "requestParameters": {
        "pipelineName": "testing-pipeline",
        "stageName": "qa-approval",
        "actionName": "qa-approval",
        "result": {
            "summary": "I approve",
            "status": "Approved"
        },
        "token": "123123123-abcabcabc-123123123-abcabc"
    },
    "responseElements": {
        "approvedAt": "May 23, 2019 4:01:25 PM"
    },
    "requestID": "12345678-123a-123b-123c-123456789abc",
    "eventID": "12345678-123a-123b-123c-123456789abc",
    "eventType": "AwsApiCall",
    "recipientAccountId": "12345678912"
}

When that CloudWatch event rule is triggered, the Lambda function that it executes can be configured to perform multiple tasks including:

  • Capture the CloudTrail event log data from the “PutApprovalResult” API call and log it into the Lambda function’s CloudWatch log group.
  • Create a dated text file entry in an S3 bucket containing useful and unique information about the pipeline manual approval gate action.
  • Send out an email notification containing unique information about the pipeline manual approval gate action.

The CloudWatch Event Rule provides a way to narrow down and capture the specific CloudTrail event named “PutApprovalResult”. Below is a snippet of this event rule defined in AWS CloudFormation.

  ApprovalGateEventRule:
    Type: AWS::Events::Rule
    Properties: 
      Description: Event Rule that tracks whenever someone approves/rejects an approval gate in a pipeline
      EventPattern: 
        {
          "source": [
            "aws.codepipeline"
          ],
          "detail-type": [
            "AWS API Call via CloudTrail"
          ],
          "detail": {
            "eventSource": [
              "codepipeline.amazonaws.com"
            ],
            "eventName": [
              "PutApprovalResult"
            ]
          }
        }

The Lambda Function provides the automation and scalability needed to perform this type of approval gate tracking at any scale. The SNS topic provides the ability to send out email alerts whenever someone approves or rejects a manual approval gate in any pipeline.

The recorded text file entries in the S3 bucket provide the long-term, durable storage needed to keep track of CodePipeline manual approval gate results. To make it easy to go back and discover those results, it is best to organize the entries in an appropriate manner, such as by “pipeline_name/year/month/day/gate_name_timed_entry.txt”. An example of a recorded entry could look like this:

PipelineApprovalGateActions/testing-pipeline/2019/05/23/dev-approval-APPROVED-11:50:45-AM.txt
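
For illustration, here is a minimal sketch of what the Lambda function behind this solution could look like. It assumes hypothetical environment variables for the bucket and SNS topic and uses a simplified key format; the full implementation lives in the repository linked below.

import json
import os
import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")

# Hypothetical configuration values; the real template would pass these in.
BUCKET = os.environ.get("TRACKING_BUCKET", "my-approval-tracking-bucket")
TOPIC_ARN = os.environ.get("TOPIC_ARN", "arn:aws:sns:us-east-1:111122223333:approval-gate-topic")


def handler(event, context):
    """Triggered by the CloudWatch Event Rule for PutApprovalResult calls."""
    detail = event["detail"]
    params = detail["requestParameters"]

    pipeline = params["pipelineName"]
    gate = params["actionName"]
    status = params["result"]["status"]          # "Approved" or "Rejected"
    summary = params["result"]["summary"]
    approver = detail["userIdentity"]["arn"]
    event_time = detail["eventTime"]             # e.g. 2019-05-23T16:01:25Z

    # Log the full CloudTrail event into this function's CloudWatch log group.
    print(json.dumps(detail))

    # Organize entries as pipeline_name/year/month/day/gate_name...
    date, time = event_time.rstrip("Z").split("T")
    year, month, day = date.split("-")
    key = (f"PipelineApprovalGateActions/{pipeline}/{year}/{month}/{day}/"
           f"{gate}-{status.upper()}-{time}.txt")

    body = (f"Pipeline: {pipeline}\nGate: {gate}\nStatus: {status}\n"
            f"Summary: {summary}\nApprover: {approver}\nTime: {event_time}\n")
    s3.put_object(Bucket=BUCKET, Key=key, Body=body.encode("utf-8"))

    # Email notification via the SNS topic.
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject=f"{pipeline} gate {gate} was {status}",
        Message=body,
    )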

Below is a diagram of a solution that can provide the features described above.

The source code and CloudFormation template for a fully built-out implementation of this solution can be found in the codepipeline-approval-gate-tracking repository.

To deploy this solution right now, click the Launch Stack button below.


Extending cfn_nag with custom rules

Stelligent cfn_nag is an open source command-line tool that performs static analysis of AWS CloudFormation templates. The tool runs as part of your pre-flight checks in your automated delivery pipeline and can be used to prevent a CloudFormation update that would put you in a compromised state. The core gem provides more than 50 rules at the time of writing this blog post. These rules cover a wide range of AWS resources and are geared toward keeping your AWS account and resources secure.

The typical open source contribution model allows the community to propose additions to the core gem. This tends to be the most desirable outcome: chances are that if you find something useful, someone else will find it useful too. However, there are instances where custom rules contain company- or project-specific logic that may not make sense to put into the cfn_nag core gem. To accomplish this, we recommend wrapping the cfn_nag core gem with a wrapper gem that contains your custom rules.

This article will walk you through the process necessary to create a wrapper gem. We have published an example wrapper gem which is a great starting point.

Adding custom rules with a gem wrapper

The following file structure is the bare minimum required for a cfn_nag wrapper gem. In this example the name of the gem is cfn-nag-custom-rules-example and it provides one custom rule called ExampleCustomRule. You will execute cfn_nag (or cfn_nag_scan) with your wrapper’s executable bin/cfn_nag_custom.

.
|- Gemfile
|- bin
|    \- cfn_nag_custom
|- cfn-nag-custom-rules-example.gemspec
\- lib
     |- cfn-nag-custom-rules-example.rb
     \- rules
          \- ExampleCustomRule.rb

The first of the important files is the Gemfile which is boilerplate.

Gemfile:

# frozen_string_literal: true

source 'https://rubygems.org'

gemspec

After that is the executable ruby script wrapper used to load up your custom rules on top of the core rules. It will pass through all arguments to the underlying cfn_nag (or cfn_nag_scan) command as you see fit.

bin/cfn_nag_custom:

#!/usr/bin/env ruby

args = *ARGV
path = Gem.loaded_specs['cfn-nag-custom-rules-example'].full_gem_path
command = "cfn_nag -r #{path}/lib/rules #{args.join(" ")}"
system(command)

Up next is the gemspec. There is nothing to note here outside of requiring the core gem as a dependency. Feel free to pin the version any way you would like, but we recommend not always grabbing the latest version.

cfn-nag-custom-rules-example.gemspec:

# frozen_string_literal: true

Gem::Specification.new do |s|
  s.name          = 'cfn-nag-custom-rules-example'
  s.license       = 'MIT'
  s.version       = '0.0.1'
  s.bindir        = 'bin'
  s.executables   = %w[cfn_nag_custom]
  s.authors       = ['Eric Kascic']
  s.summary       = 'Example CFN Nag Wrapper'
  s.description   = 'Wrapper to show how to define custom rules with cfn_nag'
  s.homepage      = 'https://github.com/stelligent/cfn_nag'
  s.files         = Dir.glob('lib/**/*.rb')

  s.require_paths << 'lib'
  s.required_ruby_version = '>= 2.2'

  s.add_development_dependency('rspec', '~> 3.4')
  s.add_development_dependency('rubocop')

  s.add_runtime_dependency('cfn-nag', '>= 0.3.73')
end

The lib/cfn-nag-custom-rules-example.rb file is just a blank Ruby file required by how gems are packaged and loaded. Finally, we have our example custom rule. Any file in lib/rules that ends in Rule.rb will be loaded as a custom rule in cfn_nag. The example rule here enforces that all S3 buckets be named “foo”. Please note that custom rules have a rule id that starts with C, for custom rule. Rule types can be one of the following:

  • Violation::FAILING_VIOLATION – Will result in a failure
  • Violation::WARNING – Informational message. Only causes a failure if --fail-on-warnings is set to true

lib/rules/ExampleCustomRule.rb:

# frozen_string_literal: true

require 'cfn-nag/custom_rules/base'

class ExampleCustomRule < BaseRule
  def rule_text
    'S3 buckets should always be named "foo"'
  end

  def rule_type
    Violation::FAILING_VIOLATION
  end

  def rule_id
    'C1' # Custom Rule #1
  end

  def audit_impl(cfn_model)
    resources = cfn_model.resources_by_type('AWS::S3::Bucket')

    violating_buckets = resources.select do |bucket|
      bucket.bucketName != 'foo'
    end

    violating_buckets.map(&:logical_resource_id)
  end
end

At this point, you can build, install, and execute your custom rules.

gem build cfn-nag-custom-rules-example.gemspec
gem install cfn-nag-custom-rules-example-0.0.1.gem
cfn_nag_custom buckets_with_insecure_acl.json

This results in:

{
  "failure_count": 3,
  "violations": [
    {
      "id": "C1",
      "type": "FAIL",
      "message": "S3 buckets should always be named \"foo\"",
      "logical_resource_ids": [
        "S3BucketRead",
        "S3BucketReadWrite"
      ]
    },
    {
      "id": "W31",
      "type": "WARN",
      "message": "S3 Bucket likely should not have a public read acl",
      "logical_resource_ids": [
        "S3BucketRead"
      ]
    },
    {
      "id": "F14",
      "type": "FAIL",
      "message": "S3 Bucket should not have a public read-write acl",
      "logical_resource_ids": [
        "S3BucketReadWrite"
      ]
    }
  ]
}

As you can see, it evaluated core cfn_nag rules as well as your custom rule.

Additional Resources


Getting Started With The AWS IoT Platform

The AWS IoT platform consists of many products and services: Greengrass, IoT Core, Amazon FreeRTOS, and Device Defender, to name a few. It can be difficult to know where to start when piecing together each of the offerings to create an IoT solution. This guide covers automating Amazon FreeRTOS building and development as well as some of the IoT Core services.

A good place to start is with selecting a development device. Devices can take time to ship. Selecting a device before doing any other work will ensure that you have actual hardware to test on as soon as you are ready. This guide will use the NXP IoT module and an NXP LPC Link2 debugger.

Any hardware from the list of Amazon FreeRTOS supported devices can be automated, but the setup for each device will vary dramatically. If you plan on following the rest of this guide, it is recommended that you get the NXP IoT module. The debugger, while optional, is recommended.

The debugger makes flashing the device very simple. It also allows you to attach to the device and set breakpoints as well as step through the code if you have issues. The Link2 debugger will also make flashing the final Release build easier.

Getting Started With Amazon FreeRTOS

Amazon has done a great job providing documentation on getting started with any of the recommended hardware. Before you can automate building the firmware you’ll want to follow the Amazon guide here. The basic setup covered in that guide for the NXP module is:

  • Choose an IDE
  • Download Amazon FreeRTOS
  • Download NXP SDK’s for the device
  • Run a demo application on the device
  • Test the MQTT connection using IoT Core Test

Make sure to use the MCUXpresso steps. MCUXpresso is an IDE by NXP, the same manufacturer as the MCU and debugger.

The getting started guide from Amazon does not cover setting up a release build. After completing the guide, you can create a Release build by copying the provided Debug build:

  • Right-click on the project in the IDE.
  • Select Build Configurations > Manage.
  • Add a new configuration and name it Release.
  • Copy the settings from the Debug build, then click OK.
  • Select the Release configuration and create a new build.

Make sure your Release build finishes without issues before continuing.

Automating Our MCUXpresso Build

Automating the Release build early will save a lot of headache later. The ability to reproduce a build becomes harder as a project grows and dependencies are added. Automating the build is straightforward with CodePipeline and CodeBuild. Use the Launch Stack button below to create a new CloudFormation stack containing a CodePipeline pipeline with a CodeCommit source stage and CodeBuild project using a custom Docker image.

There are a few options to choose when launching the stack. It is recommended you stick with the defaults and leave all other options empty when getting started.

The CloudFormation stack creates an Amazon EC2 Container Registry (ECR) repository for Docker images in addition to a pipeline and CodeBuild project.

Turn your project into a git repository. Push the contents to your private CodeCommit repository created by CloudFormation. Read the Amazon documentation here for detailed instructions on using Git and CodeCommit.

Warning: your project contains a certificate to connect to AWS IoT Core and may also have your Wi-Fi SSID and password. If you choose GitHub, you should use a private repository only.

There are other source control options in the CloudFormation template. CodeCommit gives you an easy way to get started with private repositories. A private repository is a good way to ensure you do not expose any secrets. CodeCommit is the default source in the stack.

Create A Docker Image

Download the MCUXpresso IDE and SDK from Amazon’s getting started guide again. You will need the MCUXpresso IDE for Linux as well as the SDK. These two files are used in a custom image for AWS CodeBuild.

  1. Create a new folder on your computer
  2. Add the IDE .deb.bin file
  3. Add the SDK .zip file
  4. Download and modify the Dockerfile
  5. Change lines 5 and 6 to the SDK and IDE version you are using

View the code on Gist.

This Dockerfile will set up all the dependencies required to run MCUXpresso in a Docker container. It is possible to just use gcc-arm to make a Release build, which would reduce the size of the image considerably. However, using the IDE with a few simple CLI flags will produce a working build with the same tools you are already using. This should ensure the builds are consistent and work for both development and production releases.

Build the Docker image locally:

docker build -t nxp-lpc54018-iot-module .

Push Docker Image To ECR

To use the Docker image in CodeBuild, it needs to be hosted in a registry. The CloudFormation stack that was launched earlier created a registry for you. Get the value of ECRImageName in the Outputs tab of the CloudFormation stack. Next, use the AWS CLI to log in to the registry and push the Docker image into it.

Login to the registry:

$(aws ecr get-login --no-include-email --region us-west-2)

Tag the local image:

docker tag nxp-lpc54018-iot-module:latest 000000000000.dkr.ecr.us-west-2.amazonaws.com/nxp-lpc54018-iot-module:latest

Push the image into the registry:

docker push 000000000000.dkr.ecr.us-west-2.amazonaws.com/nxp-lpc54018-iot-module:latest

Replace 000000000000.dkr.ecr.us-west-2.amazonaws.com/nxp-lpc54018-iot-module with the value of ECRImageName.
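
If you would rather look the value up programmatically than in the console, a short boto3 sketch like the one below can read ECRImageName from the stack outputs. The stack name used here is a placeholder; use whatever name you gave the stack when you launched it.

import boto3

cfn = boto3.client("cloudformation", region_name="us-west-2")

# Placeholder stack name; substitute the name you chose at launch time.
stack = cfn.describe_stacks(StackName="iot-build-pipeline")["Stacks"][0]
outputs = {o["OutputKey"]: o["OutputValue"] for o in stack.get("Outputs", [])}

print(outputs["ECRImageName"])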

The image is a few GB in size and will take some time to upload.

Trigger The Pipeline

CodePipeline handles all the orchestration for building code. It will take a source from a repository (S3, CodeCommit, or GitHub), build the code, and produce an artifact (build).

CodePipeline can do more than this, though. Actions like testing, approvals, security checks, and using custom environments are all possible. The MCUXpresso Docker image you built and pushed to ECR earlier is the custom build environment. This was set up automatically through CloudFormation earlier.

Make a change to the local IoT project files. Add a new line to a single file or create a new empty file to test this process. Add, commit, and push your code to CodeCommit. Head over to the CodePipeline pipeline and the source stage should start to process.

Test The Build

Once the CodeBuild project has completed, you can download the build artifacts and test them on the local device. Open the CodeBuild project and click on the latest successful build. Click on the link to the Output Artifacts in the build status box. The link takes you to S3, where you can download the zip file CodeBuild created. Download the file and unzip it. In the archive is a file named aws_demos.axf. AXF is an ARM executable file that normally needs to be converted to run on the NXP device. However, you can use MCUXpresso to deploy directly to the device if you have the LPC Link2 debugger.

In the editor there is a button that looks like a book that opens the GUI flash tool. Click the GUI flash tool icon to open the probe discovery dialog box. Select the LPC Link2 debugger, then click OK. The next screen has a lot of settings and options available. Fortunately, you only need to worry about a couple of them. In the Target Operations section there is a subsection titled Options. Click the File System button to choose the file to program. Make sure the format to use is set to axf. Click Run to start the process of loading the axf file onto the board.

Once the build is flashed to the device a couple of new dialogs will open in the editor. Do not close these. Once the dialogs are closed the debugger will stop and the board will no longer send messages to IoT Core.

Open the AWS IoT Core test console. Enter freertos/demos/echo as the subscription topic, then subscribe to the topic. After a few seconds you should start receiving ACK test messages from the device.
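
If you also want to push a test message from your workstation while the console subscription is open, a small boto3 sketch like this can publish to the same topic (assuming your credentials point at the same account and region as the device):

import json
import boto3

# The region should match where the thing and the MQTT test console live.
iot_data = boto3.client("iot-data", region_name="us-west-2")

iot_data.publish(
    topic="freertos/demos/echo",
    qos=0,
    payload=json.dumps({"message": "hello from my workstation"}),
)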

This article goes into more detail about flashing the device.

Next Steps

Here are some suggestions for more work that would improve this pipeline. The very first thing you should do is remove the secrets. Keeping secrets in source control is a huge security risk. The secrets are in plain text, so if the repository code accidentally ends up in a public repository you could leak secrets to anyone. CodeBuild supports secrets from the SSM Parameter Store (see the sketch below). After you secure the secrets, you could add some unit tests.
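
As a starting point for that cleanup, a sketch like the one below stores the Wi-Fi credentials as SecureString parameters that a build could later read at build time. The parameter names and values are made up for illustration.

import boto3

ssm = boto3.client("ssm", region_name="us-west-2")

# Hypothetical parameter names; pick a naming scheme that fits your project.
secrets = {
    "/iot-demo/wifi-ssid": "MyNetworkName",
    "/iot-demo/wifi-password": "not-a-real-password",
}

for name, value in secrets.items():
    # SecureString values are encrypted with the account's default KMS key.
    ssm.put_parameter(Name=name, Value=value, Type="SecureString", Overwrite=True)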

Unit testing the code will improve the quality of the code. Also, unit testing will give you confidence when introducing new changes. Unit tests prevent bugs and help avoid regressions. And, you will gain increased stability of the deployed application. An automated pipeline is the perfect place to run unit tests.

After adding unit tests, you might consider adding a buildspec file to your repository. The CloudFormation stack that was launched defines the CodeBuild build steps within the stack itself. Separate concerns by moving those build steps into a buildspec file alongside the code. This allows you to put the build configuration under version control. Changes to your code may require new build steps, tests, or linting, and the configuration for how to build logically belongs next to the code being built.

One more thing to consider is switching to gcc-arm for builds. This is a minor optimization for the build process. However, the Docker image with MCUXpresso is ~4.5 GB. Updating, building, and pushing changes to an image that large takes time. If you can reduce the size of the image, you will reduce the cycle time of updates to that image.

Further Reading

References

AWS
Hardware


DevOps on AWS Radio: DevOps Culture with Jeff Gallimore (Episode 24)

In this episode, we chat with Excella Co-Founder and Partner Jeff Gallimore about all things DevOps culture. We take a departure from our technical deep dives to explore culture: how to measure culture, culture typology, psychological safety, how continuous delivery impacts culture, how culture affects performance, and more! We even get into a discussion around real-world experiences such as a joint venture called NUMMI. Listen here:

Here are the show notes:

DevOps on AWS News

Episode Topics

  1. Jeff Gallimore Intro & Background
    1. The Excella Story – Founding/ Co-Founding
    2. How Excella helps customers 
    3. AWS, Cloud, DevOps 
  2. Culture and safety – why is it important, how to measure it, how to change it, burn out, etc.
    1. Culture 
    2. CALMS framework (Key aspects of DevOps) 
      1. Culture
      2. Automation
      3. Lean
      4. Measurement
      5. Sharing
    3. “Culture eats strategy for breakfast” – Peter Drucker
    4. In Gartner’s 2018 CIO Survey, “46% of respondents named culture as the biggest barrier to scaling digital transformation”. 
  3. The State of DevOps 2015
    1. Dr. Ron Westrum’s Culture Typology  – Three Cultures Model
      1. Pathological
      2. Bureaucratic
      3. Generative
    2. Implementing Westrum’s Three Cultures Model in real-world examples
  4. Continuous Delivery and Culture
    1. Mindset shift
    2. Technical practices
  5. Psychological safety
    1. Westrum’s Culture Typology
    2. Google re:Work Study
    3. Psychological safety
      1. “If I make a mistake on our team, it is not held against me”. 
    4. Accountability
    5. Just Culture – Sidney Dekker
      1. “Human Error”
  6. Amazon S3 Outage Incident
    1. AWS publishes their after action report
  7. Culture and performance 
    1. Burn out
  8. NUMMI Story 
    1. A joint venture between Toyota and GM
    2. Two main changes: culture and leadership
    3. Stopping the line
    4. Andon Cord
      1. The measure of Andon Cord pulls per day
      2. Virtual Andon Cord at Excella
  9. Coding – Clojure

Additional Resources

About DevOps on AWS Radio

On DevOps on AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery on the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps on AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics


Value Stream Mapping with Mock Pipeline

Value stream mapping (VSM) is a technique for modeling process workflows. In software development, one of the key reasons for creating a VSM is determining the bottlenecks slowing down the delivery of value to end users. While VSM is used in many different industries (mostly related to physical goods), the topic of this post is how to create a VSM assuming familiarity with software delivery but not with value stream mapping.

Some organizations skip the step of mapping their current state. The most common reason is that they believe they clearly understand where their bottlenecks exist (e.g. “I already know it’s bad so why do we need to go through an exercise telling us what we already know”). Another reason is that – while often not admitted – they feel they don’t fully grasp how to create a value stream map or feel like it’s considerable effort without the commensurate return on investment. Another common complaint is that while the problems exist in other systems or organizations, their team might be working on a greenfield system so it’s unnecessary – in their opinion – to know the current state of the processes for other teams.

The common thread with this reluctance usually comes down to a cognitive bias that their own view accurately depicts the overall view of the entire value stream. What’s more, when going through a transformative effort that requires some initial investment, you’ll need to be capable of providing a consistent, validated depiction before and after the improvements are applied in order to demonstrate the impact of the transformation.

In using VSM across your teams, you can reduce the time spent arguing over “facts” (i.e. others’ perspectives on the value stream). You don’t need to be an expert in value stream mapping to be effective. Following Pareto’s 80/20 principle is an effective guide for focusing on the 20% that matters. Moreover, creating a mock pipeline better models software delivery value streams than a generic VSM.

In this post, you’ll learn the steps in creating a mock deployment pipeline using AWS CodePipeline and inline AWS CloudFormation. This mock deployment pipeline will represent a VSM using an open source tool we have (creatively!) called mock-pipeline. By utilizing CloudFormation, your VSM is defined as versioned code making it easy to iterate rapidly on changes to your VSM based upon feedback from other team members.

DevOps

Broadly speaking, the idea of DevOps is about getting different functions (often, these are different teams at first) to work together toward common goals of accelerating speed while increasing stability (i.e. faster with higher quality). These accelerants typically get implemented through organizational, process, culture, and tooling improvements. In order to improve, you must know where you currently are. Otherwise, you might be improving the wrong things. It’s like trying to improve your health without basic metrics like your blood pressure, blood cholesterol, fasting blood glucose, or body mass index. The purpose of value stream mapping is to get basic knowledge of the current state so you know what and how to fix it. Moreover, if you can get a real-time view into your value stream and its key metrics (deployment frequency, lead time for changes, MTTR, and change failure rate), you’re in a much better position to effect change.

There are two primary approaches for measuring the lead time – either from origination of an idea until it gets delivered to users or from the time an engineer commits code to version control until it’s delivered to end users. Since it’s more consistent to measure from code commit to production, we’re choosing this approach.

Value Stream Mapping Terms

There’s some conflict among industry experts on the definitions of basic Lean terms so, unless otherwise noted, I’m using the definitions from the excellent book Value Stream Mapping: How to Visualize Work and Align Leadership for Organizational Transformation. The most important thing is to use consistent terminology among team members.

  • Process Time – “Typically expressed in minutes or hours, process time represents the hands-on “touch time” to do the work. It also includes “talk time” that may be regularly required to clarify or obtain additional information related to a task (including meetings), as well as “read and think time” if the process involves review or analysis [Source].”
  • Lead time (LT) – “also referred to as throughput time, response time, and turnaround time—is the elapsed time from the moment work is made available to an individual, work team, or department until it has been completed and made available to the next person or team in the value stream. Lead time is often expressed in hours, days, or even weeks or months [Source].” There are metrics within lead time (such as: work in process (WIP), batch size, queue time, and wait time) that help diagnose the source of bottlenecks in the process. Note that queue time (the time it takes for a person, signal, or thing to be attended to – which includes the time before work that adds value to a product is performed) takes about 90 percent of total lead time in most production organizations [1]
  • Percent Complete and Accurate (%C&A) – “obtained by asking downstream customers what percentage of the time they receive work that’s “usable as is,” meaning that they can do their work without having to correct the information that was provided, add missing information that should have been supplied, or clarify information that should have and could have been clearer” [Source].

In my post, Measuring DevOps Success with Four Key Metrics, I summarized the four software delivery metrics as described in the book, Accelerate:

  • Deployment frequency – the number of times in which software is deployed to production or to an app store. This also provides a proxy for batch size.
  • Lead time for changes – “the time it takes to go from code committed to code successfully running in production”. This is a key number you can obtain by VSM.
  • Time to restore service – the average time it takes to restore service.
  • Change failure rate – how often deployment failures occur in production that require immediate remedy (particularly, rollbacks). This measure has a strong correlation to the percentage complete and accurate (i.e. “rework”).

The act of value stream mapping while considering the four key DevOps metrics will help focus the effort on measuring and then improving speed and stability. You can think of value stream mapping as the technique used to determine the four DevOps metrics.
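
To make the measurements concrete, here is a small sketch, using made-up deployment records, of how lead time for changes, deployment frequency, and change failure rate could be computed from commit and deploy timestamps:

from datetime import datetime
from statistics import median

# Made-up records: when the change was committed, when it reached production,
# and whether the deployment required immediate remedy (e.g. a rollback).
deployments = [
    {"commit": "2019-05-01T09:15", "deployed": "2019-05-02T16:40", "failed": False},
    {"commit": "2019-05-03T11:00", "deployed": "2019-05-06T10:05", "failed": True},
    {"commit": "2019-05-07T14:30", "deployed": "2019-05-08T09:20", "failed": False},
]

def ts(value):
    return datetime.strptime(value, "%Y-%m-%dT%H:%M")

# Lead time for changes: code committed to code running in production.
lead_times = [(ts(d["deployed"]) - ts(d["commit"])).total_seconds() / 3600
              for d in deployments]
print(f"median lead time for changes: {median(lead_times):.1f} hours")

# Deployment frequency: deployments per calendar day observed.
days = {ts(d["deployed"]).date() for d in deployments}
print(f"deployment frequency: {len(deployments) / len(days):.2f} per day")

# Change failure rate: share of deployments that required immediate remedy.
failure_rate = sum(d["failed"] for d in deployments) / len(deployments)
print(f"change failure rate: {failure_rate:.0%}")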

Mock Pipeline

Mock Pipeline is an open source tool for modeling value stream maps regardless of your tech stack, cloud provider, or data center. With Mock Pipeline, you can define your value stream map as code in order to visualize all the steps in your commit to production lifecycle. While it uses AWS services/tools such as AWS CloudFormation and AWS CodePipeline, it can model any technology platform.

Fork the Mock Pipeline Repo

These instructions assume you’re using AWS Cloud9. Adapt the instructions if you’re using a different IDE.

If you don’t have a GitHub account, create a free one by going to GitHub Signup. Make a note of the userid you created (will refer to as YOURGITHUBUSERID)

Login to your GitHub account.

Go to the mock-pipeline GitHub repository.

Click the Fork button. A message will display “Where do you want to fork this to?”.

Click on the button that displays Fork to YOURGITHUBUSERID.

From your Cloud9 terminal, clone the newly forked repo (replacing YOURGITHUBUSERID in the example):

git clone https://github.com/YOURGITHUBUSERID/mock-pipeline.git
cd mock-pipeline
sudo su
sudo curl -s https://getmu.io/install.sh | sh
exit

Note: The mock pipeline tool uses an open source framework called mu which generates CloudFormation templates that provision AWS resources.

Deploy Value Stream as a Pipeline

Make modifications to your local mu.yml to change the CodePipeline action names. For example, precede several of the action names with your initials or first name. You’re doing this to ensure the changes get deployed.

Save the changes locally and commit them to your remote repository.

git commit -am "initial value stream" && git push

Run the mu pipeline upsert:

mu pipeline up -t GITHUBTOKEN

Your GITHUBTOKEN will look something like this: 2bdg4jdreaacc7gh7809543d4hg90EXAMPLE. To get or generate a token go to GitHub’s Token Settings.

After a few of the CloudFormation stacks have launched, go to the CodePipeline console and look for a pipeline with something like mock-pipeline in its name. Select this pipeline and ensure the local changes you made are visible in the pipeline.

Redeploy Changes

In this section, you will modify the action names and the order. In particular, you want to alter the model to change the order and name of InfrastructureAnalysis and ProvisionEnvironment actions so that the static analysis runs prior to provisioning the environments. When the two are shown running side by side, it represents actions running in parallel. To do this, you need to terminate the current pipeline. First, you need to get a list of service pipelines managed by mu by running this command:

mu pipeline list

Then, use the proper service_name obtained from the list command in the following command to terminate the pipeline:

mu pipeline terminate [<service_name>]

Wait several minutes until the CloudFormation stacks have been terminated.

Now, you can make modifications to your local mu.yml to change the CodePipeline action order and names. An example is shown in the image below.

Once you’ve made the changes, commit them to your remote repository:

git commit -am "modify action order in acceptance stage" && git push

Run the mu pipeline upsert again.

mu pipeline up -t GITHUBTOKEN

After a few of the CloudFormation stacks have launched, once again, go to the CodePipeline console and look for a pipeline with something like mock-pipeline in its name. Select this pipeline and ensure the local changes you made are visible in the pipeline.

Capabilities

You can use value stream mapping to obtain the four key software delivery metrics, but just like with your health, knowing these metrics is only part of the battle. The other, crucial part is improving them by incorporating capabilities into daily practices. In the Accelerate book, the authors describe the 24 capabilities listed below on which to focus improvements based on the metrics.

Continuous Delivery Capabilities

  • Version control
  • Deployment automation
  • Continuous integration
  • Trunk-based development
  • Test automation
  • Test data management
  • Shift left on security (DevSecOps)
  • Continuous delivery (CD)

Architecture Capabilities

  • Loosely coupled architecture
  • Empowered teams

Product and Process Capabilities

  • Customer feedback
  • Value stream
  • Working in small batches
  • Team experimentation

Lean Management and Monitoring Capabilities

  • Change approval processes
  • Monitoring
  • Proactive notification
  • WIP limits
  • Visualizing work

Cultural Capabilities

  • Westrum organizational culture
  • Supporting learning
  • Collaboration among teams
  • Job satisfaction
  • Transformational leadership

For example, continuous delivery predicts lower change failure rates and less time spent on rework or unplanned work, including break/fix work, emergency software deployments, and patches. Moreover, keeping system and application configuration in version control was more highly correlated with software delivery performance than keeping application code in version control. Teams using short-lived branches (integration times less than a day) combined with short merging and integration periods (less than a day) do better in terms of software delivery performance than teams using longer-lived branches. [Source] In other words, by incorporating or improving one or more of these capabilities, you will likely improve one or more of the four metrics, which is correlated with better outcomes based on the data analysis.

Resources

Summary

In this post, we covered how to use a managed deployment pipeline workflow service (i.e. CodePipeline) to efficiently model a value stream map in order to assess the current state and accelerate speed and confidence in delivering software to end users in production.


DevOps on AWS Radio: AWS Serverless Adoption with Tom McLaughlin (Episode 23)

In this episode, Paul Duvall covers recent DevOps on AWS news along with a discussion with Tom McLaughlin, founder of the consultancy ServerlessOps. The two dive deep into all things serverless, including use cases, the serverless adoption curve, organization structures, serverless security, and more! Listen here:

Here are the show notes:

DevOps on AWS News

Episode Topics

  1. Tom McLaughlin intro & background
  2. Basics of serverless
    1. AWS 4 points
      1. No servers, vms, or containers to provision or manage
      2. Pay Per Use Model
      3. Scales with usage
      4. Availability and fault tolerance built-in
  3. Use cases (ops cases)
    1. Anything and everything I want to automate
    2. Periodic (cron) jobs
    3. Compliance checks
    4. “Information direction”
  4. How/when do you see Serverless being adopted by enterprises? Where are we on the adoptive curve?
    1. Hybrid adoption
      1. One of many approaches within the Org
      2. Hockey stick v. rocket ship (Kubernetes)
  5. Startups
  6. Deployment pipelines for Serverless web applications?
  7. How does the operations function change when developing Serverless apps?
  8. How do you expect team/org structures to change when it comes to Serverless?
    1. “Product pods”, “feature teams”
    2. Multi-disciplinary teams
      1. PM, design, dev, and ops
  9. Serverless and Security
  10. FinDev/ Simon Wardley
  11. Serverless – what are good/not so good use cases for it?
  12. How can people learn more about you?
    1. Ebook!
    2. Newsletter
    3. Twitter: 
      1. @tmclaughbos
      2. @serverlessopsIO
    4. ServerlessDays Atlanta

Additional Resources

About DevOps on AWS Radio

On DevOps on AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery on the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners into our roundtables and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps on AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics
