Automatically Remediate Noncompliant AWS Resources using Lambda


While enterprises can rapidly scale their infrastructure in the cloud, there’s a corresponding increase in demand for scalable mechanisms that meet the security and compliance requirements set by corporate policies, auditors, security teams, and others.

For example, we can easily and rapidly launch hundreds of resources – such as EC2 instances – in the cloud, but we also need approaches for managing the security and compliance of these resources and the surrounding infrastructure. It’s not enough to passively monitor noncompliant resources; you need to automatically fix the configuration that made them noncompliant.

Using a collection of AWS services, you can detect non-compliant resources and automatically remediate these resources to maintain compliance without human intervention.

In this post, you’ll learn how to automatically remediate noncompliant AWS resources using services such as AWS Config Rules, Amazon CloudWatch Event Rules, and AWS Lambda. You’ll get step-by-step instructions for configuring automated remediation using the AWS Console.

The diagram below shows the key AWS resources and relationships you’ll be creating.

Let’s get started!

Create an S3 Bucket for CloudTrail

In this section, you’ll create an Amazon S3 bucket for use with CloudTrail. If you’ve already established CloudTrail, this section is optional. Here are the steps:

  1. Go to the S3 console
  2. Click the Create bucket button
  3. Enter ccoa-cloudtrail-ACCOUNTID in the Bucket name field (replacing ACCOUNTID with your account id)
  4. Click Next on the Configure Options screen
  5. Click Next on the Set Permissions screen
  6. Click Create bucket on the Review screen

Create a CloudTrail Trail

In this section, you’ll create a trail for AWS CloudTrail. If you’ve already established CloudTrail, this section is optional. Here are the steps:

  1. Go to the CloudTrail console
  2. Click the Create trail button
  3. Enter ccoa-cloudtrail in the Trail name field
  4. Choose the checkbox next to Select all S3 buckets in your account in the Data events section
  5. Choose the No radio button for the Create a new S3 bucket field in the Storage location section.
  6. Choose the S3 bucket you just created from the S3 bucket dropdown.
  7. Click the Create button

Create an AWS Config Recorder

In this section, you’ll configure the settings for AWS Config, which includes turning on the Config recorder along with a delivery channel. If you’ve already configured AWS Config, this section is optional. Here are the steps:

  1. Go to the AWS Config console
  2. If it’s your first time using Config, click the Get Started button
  3. Select the Include global resources (e.g., AWS IAM resources) checkbox
  4. In the Amazon SNS topic section, select the Stream configuration changes and notifications to an Amazon SNS topic checkbox
  5. Choose the Create a topic radio button in the Amazon SNS topic section
  6. In the Amazon S3 bucket section, select the Create a bucket radio button
  7. In the AWS Config role section, select the Use an existing AWS Config service-linked role radio button
  8. Click the Next button
  9. Click the Skip button on the AWS Config rules page
  10. Click the Confirm button on the Review page

Create an S3 Bucket in Violation of Compliance Rules

In this section, you’ll create an S3 bucket that allows people to put files into the bucket. We’re doing this for demonstration purposes since you should not grant any kind of public access to your S3 bucket. Here are the steps:

  1. Go to the S3 console
  2. Click the Create bucket button
  3. Enter ccoa-s3-write-violation-ACCOUNTID in the Bucket name field (replacing ACCOUNTID with your account id)
  4. Click Next on the Configure Options screen
  5. Unselect the Block all public access checkbox and click Next on the Set Permissions screen
  6. Click Create bucket on the Review screen
  7. Select the ccoa-s3-write-violation-ACCOUNTID bucket and choose the Permissions tab
  8. Click on Bucket Policy and paste the contents from below into the Bucket policy editor text area (replace both MYBUCKETNAME values with the ccoa-s3-write-violation-ACCOUNTID bucket you just created)
  9. Click the Save button

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:Abort*",
        "s3:DeleteObject",
        "s3:GetBucket*",
        "s3:GetObject",
        "s3:List*",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::MYBUCKETNAME",
        "arn:aws:s3:::MYBUCKETNAME/*"
      ]
    }
  ]
}

You’ll receive this message: You have provided public access to this bucket. We highly recommend that you never grant any kind of public access to your S3 bucket.
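To see why this bucket policy is a violation, here’s a sketch (illustration only, not part of the tutorial) of the kind of check the s3-bucket-public-write-prohibited managed rule performs: it flags policies whose Allow statements grant write actions to any principal. The isPublicWrite helper is hypothetical, and the real rule also considers ACLs and Block Public Access settings.

```javascript
// Hypothetical sketch of the check behind s3-bucket-public-write-prohibited:
// flag a bucket policy when any Allow statement grants a write action to
// Principal "*". The real managed rule is more thorough than this.
function isPublicWrite(policy) {
  return policy.Statement.some(function(stmt) {
    var principal = stmt.Principal;
    var anyone = principal === "*" || (principal && principal.AWS === "*");
    var actions = [].concat(stmt.Action);
    var writes = actions.some(function(a) {
      return a === "s3:PutObject" || a === "s3:DeleteObject" || a === "s3:*";
    });
    return stmt.Effect === "Allow" && anyone && writes;
  });
}

// The violating policy above grants s3:PutObject (and more) to everyone.
var violatingPolicy = {
  Version: "2012-10-17",
  Statement: [{
    Effect: "Allow",
    Principal: "*",
    Action: ["s3:Abort*", "s3:DeleteObject", "s3:GetBucket*",
             "s3:GetObject", "s3:List*", "s3:PutObject"],
    Resource: ["arn:aws:s3:::MYBUCKETNAME", "arn:aws:s3:::MYBUCKETNAME/*"]
  }]
};

console.log(isPublicWrite(violatingPolicy)); // true
```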

Create an IAM Policy and Role for Lambda

In this section, you’ll create an IAM Policy and Role that establish the permissions the Lambda function will use. Here are the steps:

  1. Go to the IAM console
  2. Click on Policies
  3. Click Create policy
  4. Click the JSON tab
  5. Copy and replace the contents below into the JSON text area
  6. Click the Review policy button
  7. Enter ccoa-s3-write-policy in the *Name field
  8. Click the Create policy button
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:DeleteBucketPolicy",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": "*"
        }
    ]
}
  9. Click on Roles
  10. Click the Create role button
  11. Click Lambda from the Choose the service that will use this role section
  12. Click the Next: Permissions button
  13. Enter ccoa-s3-write-policy in the Filter policies search field
  14. Select the checkbox next to ccoa-s3-write-policy and click on the Next: Tags button
  15. Click the Next: Review button
  16. Enter ccoa-s3-write-role in the Role name field
  17. Click the Create role button
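For reference, when you choose Lambda as the service for the role, the console generates the role’s trust policy for you. It is the standard document that allows the Lambda service to assume the role:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```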

Create a Lambda Function to Auto-remediate S3 Buckets

In this section, you’ll create a Lambda function that is written in Node.js and performs the automatic remediation by deleting the S3 Bucket Policy associated with the bucket. Here are the steps:

  1. Go to the Lambda console
  2. Click the Create function button
  3. Keep the Author from scratch radio button selected and enter ccoa-s3-write-remediation in the Function name field
  4. Choose Node.js 10.x for the Runtime
  5. Under Permissions, expand Choose or create an execution role
  6. Under Execution role, choose Use an existing role
  7. In the Existing role dropdown, choose ccoa-s3-write-role
  8. Click the Create function button
  9. Scroll to the Function code section and within the index.js pane, copy and replace the code from below
var AWS = require('aws-sdk');

exports.handler = function(event) {
  console.log("request:", JSON.stringify(event, undefined, 2));

  var s3 = new AWS.S3({apiVersion: '2006-03-01'});
  var evaluations = event['detail']['requestParameters']['evaluations'];
  console.log("evaluations:", JSON.stringify(evaluations, null, 2));

  // Delete the bucket policy from every bucket reported as NON_COMPLIANT
  for (var i = 0, len = evaluations.length; i < len; i++) {
    if (evaluations[i]["complianceType"] == "NON_COMPLIANT") {
      console.log(evaluations[i]["complianceResourceId"]);
      var params = {
        Bucket: evaluations[i]["complianceResourceId"]
      };

      s3.deleteBucketPolicy(params, function(err, data) {
        if (err) console.log(err, err.stack); // an error occurred
        else     console.log(data);           // successful response
      });
    }
  }
};
  10. Click the Save button
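Before wiring everything together, you can sanity-check the event handling locally with plain Node.js. The sketch below (not part of the tutorial; the sample event is abbreviated and its values are hypothetical) extracts noncompliant bucket names the same way the handler’s loop does:

```javascript
// Mirror the handler's loop: pull bucket names out of NON_COMPLIANT evaluations.
function noncompliantBuckets(event) {
  var evaluations = event.detail.requestParameters.evaluations;
  return evaluations
    .filter(function(e) { return e.complianceType === "NON_COMPLIANT"; })
    .map(function(e) { return e.complianceResourceId; });
}

// Abbreviated, hypothetical sample of the event delivered to the function.
var sampleEvent = {
  detail: {
    requestParameters: {
      evaluations: [
        { complianceType: "NON_COMPLIANT",
          complianceResourceId: "ccoa-s3-write-violation-123456789012" },
        { complianceType: "COMPLIANT",
          complianceResourceId: "some-compliant-bucket" }
      ]
    }
  }
};

console.log(noncompliantBuckets(sampleEvent));
// [ 'ccoa-s3-write-violation-123456789012' ]
```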

Create an AWS Config Rule

In this section, you’ll create an AWS Config Rule that uses a Managed Config Rule to detect when there are S3 buckets that allow public writes. The Managed Config Rule runs a Lambda function to detect when S3 buckets are not in compliance. Here are the steps:

  1. Go to the Config console
  2. Click Rules
  3. Click the Add rule button
  4. In the filter box, type s3-bucket-public-write-prohibited
  5. Choose the s3-bucket-public-write-prohibited rule
  6. Click on the Remediation action dropdown within the Choose remediation action section
  7. Choose the AWS-PublishSNSNotification remediation in the dropdown
  8. Click Yes in the Auto remediation field
  9. In the Parameters field, enter arn:aws:iam::ACCOUNTID:role/aws-service-role/ssm.amazonaws.com/AWSServiceRoleForAmazonSSM in the AutomationAssumeRole field (replacing ACCOUNTID with your AWS account id)
  10. In the Parameters field, enter s3-bucket-public-write-prohibited violated in the Message field
  11. In the Parameters field, enter arn:aws:sns:us-east-1:ACCOUNTID:ccoa-awsconfig-ACCOUNTID in the TopicArn field (replacing ACCOUNTID with your AWS account id)
  12. Click the Save button

Create a CloudWatch Event Rule

In this section, you’ll create an Amazon CloudWatch Event Rule which monitors when the S3_BUCKET_PUBLIC_WRITE_PROHIBITED Config Rule is deemed noncompliant. Here are the steps:

  1. Go to the CloudWatch console
  2. Click on Rules
  3. Click the Create rule button
  4. Choose Event pattern in the Event Source section
  5. In the Event Pattern Preview section, click Edit
  6. Copy the contents from below and replace in the Event pattern text area
  7. Click the Save button
  8. Click the Add target button
  9. Choose Lambda function
  10. Select the ccoa-s3-write-remediation function you’d previously created.
  11. Click the Configure details button
  12. Enter ccoa-s3-write-cwe in the Name field
  13. Click the Create rule button

 

{
  "source":[
    "aws.config"
  ],
  "detail":{
    "requestParameters":{
      "evaluations":{
        "complianceType":[
          "NON_COMPLIANT"
        ]
      }
    },
    "additionalEventData":{
      "managedRuleIdentifier":[
        "S3_BUCKET_PUBLIC_WRITE_PROHIBITED"
      ]
    }
  }
}
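To see how this pattern behaves, here’s a simplified matcher covering only the fields used above (illustrative only; real CloudWatch event pattern matching is more general than this sketch): the event must come from aws.config, report at least one NON_COMPLIANT evaluation, and reference the S3_BUCKET_PUBLIC_WRITE_PROHIBITED managed rule.

```javascript
// Simplified version of the matching CloudWatch applies with the pattern above.
function matchesPattern(event) {
  var detail = event.detail || {};
  var evals = (detail.requestParameters || {}).evaluations || [];
  var ruleIds = [].concat((detail.additionalEventData || {}).managedRuleIdentifier || []);
  return event.source === "aws.config" &&
         evals.some(function(e) { return e.complianceType === "NON_COMPLIANT"; }) &&
         ruleIds.indexOf("S3_BUCKET_PUBLIC_WRITE_PROHIBITED") !== -1;
}

// Hypothetical event carrying the fields the pattern cares about.
var sampleEvent = {
  source: "aws.config",
  detail: {
    requestParameters: {
      evaluations: [{ complianceType: "NON_COMPLIANT",
                      complianceResourceId: "my-bucket" }]
    },
    additionalEventData: {
      managedRuleIdentifier: "S3_BUCKET_PUBLIC_WRITE_PROHIBITED"
    }
  }
};

console.log(matchesPattern(sampleEvent)); // true
```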

View Config Rules

In this section, you’ll verify that the Config Rule has been triggered and that the S3 bucket resource has been automatically remediated:

  1. Go to the Config console
  2. Click on Rules
  3. Select the s3-bucket-public-write-prohibited rule
  4. Click the Re-evaluate button
  5. Go back to Rules in the Config console
  6. Go to the S3 console, choose the ccoa-s3-write-violation-ACCOUNTID bucket, and verify that the bucket policy has been removed
  7. Go back to Rules in the Config console and confirm that the s3-bucket-public-write-prohibited rule is Compliant

Summary

In this post, you learned how to set up a robust automated compliance and remediation infrastructure for noncompliant AWS resources using services such as S3, AWS Config & Config Rules, Amazon CloudWatch Event Rules, AWS Lambda, IAM, and others. By leveraging this approach, your AWS infrastructure can rapidly scale resources while ensuring these resources are always in compliance without humans needing to manually intervene.

This general approach can be replicated for many other types of security and compliance checks using managed and custom config rules along with custom remediations. This way your compliance remains in lockstep with the rest of your AWS infrastructure.

The post Automatically Remediate Noncompliant AWS Resources using Lambda appeared first on Stelligent.

from Blog – Stelligent

IT Modernization and DevOps News Week in Review



GitLab Commit, held in New York last week, brought us news that GitLab completed a $268 million Series E round of fundraising. The company reports that it is now valued at $2.75 billion and that it plans to invest the cash infusion in its DevOps platform offerings — including monitoring, security, and planning.

In addition, the firm announced GitLab 12.3, which underscores that point: it includes a WAF built into the GitLab SDLC platform for monitoring and reporting of security concerns related to Kubernetes clusters, along with new analytics features and enhanced compliance capabilities.

To stay up-to-date on DevOps best practices, cloud security, and IT Modernization, subscribe to our blog here:
Subscribe to the Flux7 Blog

DevOps News

  • GitHub announced that they have integrated the Checks API with GitHub Pages, allowing operators to easily understand why a GitHub Pages build failed. And, as Pages is now a GitHub App, users are able to see build status via the Checks interface.
  • And in other Git news, Semmle revealed that it is joining GitHub. According to the companies, security researchers use Semmle to find vulnerabilities in code with simple declarative queries, which they then share with the Semmle community to improve the safety of code in other codebases.
  • At FutureStack in New York last week, New Relic announced the “industry’s first observability platform that is open, connected and programmable, enabling companies to create more perfect software.” The new capabilities include New Relic Logs, New Relic Traces, New Relic Metrics, and New Relic AI. In addition, the company unveiled the ability for customers and partners to build new applications via programming on the New Relic One Platform.
  • Kubernetes has delivered Kubernetes 1.16, which it reports consists of 31 enhancements including custom resources, a metrics registry, and significant changes to the Kubernetes API.

AWS News

  • Amazon has unveiled a new Step Functions feature, support for dynamic parallelism, in all regions where Step Functions is offered. According to AWS, this was probably the most requested feature for Step Functions, as it unblocks new use cases and can help optimize existing ones. Specifically, Step Functions now supports a new Map state type for dynamic parallelism.
  • Heavy CloudFormation users will be happy to see that Amazon has expanded its capabilities; now operators can use CloudFormation templates to configure and provision additional features for Amazon EC2, Amazon ECS, Amazon ElastiCache, Amazon ES, and more. You can see the full list of new capabilities here.
  • AWS has brought to market a new Amazon WorkSpaces feature that restores a WorkSpace to its last known healthy state, allowing you to easily recover from inaccessible WorkSpaces caused by incompatible 3rd-party updates.
  • AWS continues to evolve its IoT solution set with AWS IoT Greengrass 1.9.3. Now available, it adds support for ARMv6 and new machine learning inference capabilities.
  • AWS introduced in preview the NoSQL Workbench for Amazon DynamoDB. The application to help operators design and visualize data models, run queries on data, and generate code for applications is free, client-side, and available for Windows and macOS.
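For context on the Step Functions item above, a Map state runs a sub-workflow over each element of an input array. A minimal Amazon States Language sketch (the state names and items path are hypothetical):

```
{
  "StartAt": "ProcessAllItems",
  "States": {
    "ProcessAllItems": {
      "Type": "Map",
      "ItemsPath": "$.items",
      "MaxConcurrency": 0,
      "Iterator": {
        "StartAt": "ProcessItem",
        "States": {
          "ProcessItem": { "Type": "Pass", "End": true }
        }
      },
      "End": true
    }
  }
}
```

A MaxConcurrency of 0 lets Step Functions run iterations as concurrently as it can; a positive value caps the parallelism.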

Flux7 News
Flux7 has several upcoming educational opportunities. Please join us at:

  • Our October 9, 2019 Webinar, DevOps as a Foundation for Digital Transformation. This free 1-hour webinar from GigaOm Research brings together experts in DevOps, featuring GigaOm analyst Jon Collins and a special guest from Flux7, CEO and co-founder Aater Suleman. The discussion will focus on how to scale DevOps efforts beyond the pilot and deliver a real foundation for innovation and digital transformation.
  • The High-Performance Computing Immersion Day on October 11, 2019, in Houston, TX where attendees will gain in-depth, hands-on training with services such as Batch, Parallel Cluster, Elastic Fabric Adapter (EFA), FSX for Lustre, and more in an introductory session. Register Here Today.
  • The AWS Container Services workshop, October 17, 2019 in San Antonio, TX. Designed for infrastructure administrators, developers, and architects, this introductory session provides a mix of classroom training and hands-on labs. Register Here.

Subscribe to the Flux7 Blog

Written by Flux7 Labs

Flux7 is the only Sherpa on the DevOps journey that assesses, designs, and teaches while implementing a holistic solution for its enterprise customers, thus giving its clients the skills needed to manage and expand on the technology moving forward. Not a reseller or an MSP, Flux7 recommendations are 100% focused on customer requirements and creating the most efficient infrastructure possible that automates operations, streamlines and enhances development, and supports specific business goals.

from Flux7 DevOps Blog

DevOps Foundation for Digital Transformation: Live GigaOm Webinar


Join us on October 9th at Noon from the comfort of your desk as we bring you a free 1-hour webinar on how to scale DevOps efforts beyond the pilot and deliver a real foundation for innovation and digital transformation. Hosted by GigaOm Research analyst Jon Collins and Aater Suleman, CEO and co-founder of DevOps consulting firm Flux7, the discussion will share how to effectively create a DevOps foundation and scale for success.

Specifically, attendees to the Webinar will learn:

  • Causes underlying some of the key challenges to scaling DevOps today
  • A starting baseline for achieving the benefits of an enterprise DevOps implementation
  • How to link DevOps improvements with digital transformation goals
  • Trade-offs between technical, process automation and skills improvements
  • Steps to delivering on the potential of DevOps and enterprise agility
  • How to make a real difference to their organizations, drawing from first-hand, in-the-field experience across multiple transformation projects

Register now to join GigaOm and Flux7 for this free expert webinar.

We all know the strategy — transform the enterprise to use digital technologies and deliver significantly increased levels of customer engagement and new business value through innovation. Key to this is DevOps effectiveness, that is, how fast an organization can take new ideas, translate them into software and deploy them into a live environment.

But many organizations struggle to get beyond the starting blocks, coming up against a legion of challenges from skills to existing systems and platforms. Innovation speed and efficiency suffer, costs rise and the potential value does not materialize. So, what to do? Join our Webinar and learn new skills for scaling DevOps efforts beyond the pilot to deliver a real foundation for innovation and digital transformation.

Join us and GigaOm as we explore how to scale a strong DevOps foundation across the enterprise to achieve key business benefits. Interested in additional reading before the presentation? Enjoy these resources on AWS DevOps, DevOps automation and Agile DevOps and be sure to subscribe to our DevOps blog below to stay on top of the latest trends and industry news.

from Flux7 DevOps Blog


IT Modernization and DevOps News Week in Review


With the HPC User Forum this past week, we saw several High-Performance Computing (HPC) related news announcements. Starting off, Hyperion, who established the Forum in 1999, shared that HPC in the cloud is gaining traction, with new major growth areas coming from AI/ML/DL, big data analytics, and non-traditional HPC users from the enterprise space.

Univa, in turn, introduced Navops Launch 2.0. The newest version of its platform focuses on simplifying the migration of enterprise HPC workloads to the cloud. It also announced the expansion of its Navops Launch HPC cloud-automation platform to now support the Slurm workload scheduler. And, HPE announced ML Ops, a container-based solution that supports ML workflows and lifecycles across on-premises, public cloud and hybrid cloud environments.

To stay up-to-date on DevOps best practices, cloud use cases like HPC, and IT Modernization, subscribe to our blog here:
Subscribe to the Flux7 Blog

DevOps News

  • HashiCorp announced the beta version of Clustering for HashiCorp Terraform Enterprise. According to a blog announcement, the new Clustering functionality enables users to easily install and manage a scalable cluster that can meet their performance and availability requirements. The clustering capability in Terraform Enterprise includes the ability to scale to meet workload demand, enhanced availability and an easier installation and management process.
  • HashiCorp is partnering with VMware to support the Service Mesh Federation Specification. A new service mesh integration between Consul Enterprise and NSX-SM will allow traffic to flow securely beyond the boundary of each individual mesh, enabling flexibility and interoperability.
  • While we’re discussing service mesh, Kong announced a new open source project called Kuma. In a press release, Kuma is described as a universal control plane that addresses the limitations of first-generation service mesh technologies by enabling seamless management of any service on the network. Kuma runs on any platform – including Kubernetes, containers, virtual machines, bare metal, and other legacy environments.
  • In other news, ScyllaDB announced a new project — Alternator. The firm describes the open-source software in a press release as enabling application- and API-level compatibility between Scylla and Amazon’s NoSQL cloud database, Amazon DynamoDB, allowing DynamoDB users to migrate to an open-source database that runs anywhere — on any cloud platform, on-premises, on bare-metal, virtual machines or Kubernetes.

AWS News

  • First introduced at re:Invent last year, AWS just announced GA of Amazon Quantum Ledger Database (QLDB). QLDB is a ledger database that is intended as a system of record for stored data. According to Amazon, it maintains a complete, immutable history of all committed changes to the data that cannot be updated, altered, or deleted. The QLDB API allows you to cryptographically verify that the history is accurate and legitimate, making it ideal for finance, ecommerce, manufacturing, and more.
  • To gain a better understanding of network flow and avoid the legwork typically associated with it, Amazon has introduced additional metadata that can now be included in Flow Log records. Amazon notes that enriched Flow Logs allow operators to simplify their scripts or remove the need for post-processing altogether, by reducing the number of computations or look-ups required to extract meaningful information from the log data. For example, operators can choose to add metadata such as vpc-id, subnet-id, instance-id, or tcp-flags.
  • AWS Service Catalog introduced the ability to get visibility of portfolio and product budgets with integration to AWS Budgets. The newly added feature means that users can now create budgets, associate them with portfolios and products, and track spend against them.
  • Having worked recently on a QuickSight project for a customer, our DevOps consultants enjoyed these two articles: how to Federate Amazon QuickSight access with Okta for single sign-on to QuickSight, and Create advanced insights using Level Aware Aggregations in Amazon QuickSight, which illustrates several examples of how to perform calculations on data to derive advanced and meaningful insights.

Flux7 News

  • Read Flux7’s newest article, Flux7 Case Study: Technology’s Role in the Agile Enterprise, in which we share our journey to becoming an Agile Enterprise. In this story of how we at Flux7 have moved through the process, this article shares how we have adopted specific supporting technologies to further our agile goals.
  • Join us at Flux7 as we and AWS Present a High Performance Computing Immersion Day on October 11, 2019, in Houston, TX. Attendees to the hands-on training session will learn about services such as Batch, Parallel Cluster, Elastic Fabric Adapter (EFA), FSX for Lustre, and more in an introductory session. Register Here Today.

Subscribe to the Flux7 Blog

from Flux7 DevOps Blog

Join Flux7 and AWS for an In-Depth Container Workshop


Join Flux7 and AWS Solutions Architects as they present an in-depth, one-day workshop on AWS Container Services. Designed for infrastructure administrators, developers, and architects, this introductory session provides a mix of classroom training and hands-on labs. Attendees will learn about AWS services with a focus on using Kubernetes. Container technologies like Docker and Kubernetes help organizations meet the needs of both IT operations and development teams, as they enable service delivery processes to be consolidated, reducing coordination and hand-offs. This, in turn, drives greater IT and developer productivity.

Understanding the options for cloud containers and how they can be used throughout application and software development lifecycles can increase agility and shorten time to results. Join our workshop to help learn more about the container environment and how you can help your organization use containers to improve agility.

When: October 17, 2019
Where: Norris Centers – San Antonio
618 NW Loop 410, Suite 207
San Antonio, TX 78216

Find additional information and register here.

Attendees will: 

  • Gain an understanding of containers in the AWS environment, the EKS architecture, networking and storage on EKS, and logging and monitoring for containers in AWS.
  • Learn how to leverage AWS container services within their organization.
  • Get hands-on experience with AWS services like EKS and ECR.

Whether you have no prior experience with containers or some time under your belt, join us and AWS as we explore containers in the AWS environment, learning more about the architecture in-depth. Can’t make it to the workshop, but interested in how container technology is impacting business, insights on new trends in DevOps, AWS and more? Subscribe to our DevOps blog below and stay on top of the latest trends and industry news.

Subscribe to the Flux7 Blog
 


from Flux7 DevOps Blog

Flux7 and AWS Present High Performance Computing Immersion Day



from Flux7 DevOps Blog

Flux7 Case Study: Technology’s Role in the Agile Enterprise


The transition to becoming an Agile Enterprise is one that touches every part of the organization — from strategy to structure and process to technology. In our journey to share the story of how we at Flux7 have moved through the process, today we will discuss how we have adopted specific supporting technologies to further our agile goals. (In case you missed them, check out our first two articles on choosing a Flatarchy and our OKR journey.)

While achieving an Agile Enterprise must be rooted in the business and must be accompanied by an agile culture (more on that in our next article in the series), a technology platform that supports agility can be a key lever to successful Agile transformation.

At Flux7, this means both technologies that support the communication and learning that let teams be agile, and agile technology automation. Flux7 uses a variety of tools, each with its own specialty for helping us communicate, collaborate and stay transparent. We’ll first take a look at each of these tools and the role it plays, and then we’ll share a couple of ways in which some of these tools come together to create agility.

Agile Communication

As a 100% remote organization, communication is vital to corporate success. As a result, we use several tools to communicate, share files, documents, ideas and more.

  • Slack enables us to communicate in near real-time sharing files, updates, links and so much more. Slack is a go-to resource for everything from quick questions to team updates and accolades.
  • OfficeVibe allows employees to communicate feedback to the organization anonymously. At Flux7 we take feedback gathered from the OfficeVibe LeoBot very seriously and aim for top scores as a measure of our success in creating a thriving culture.
  • Gmail is used for less real-time communication needs and for communicating with external parties (though we also use Slack channels with our customers); Google Calendar communicates availability; and Google Meet is used widely for internal and external meetings.

Agile Collaboration

Working closely together from a distance may sound antithetical, but with the help of several tools, our teams are able to collaborate effectively, boosting efficiency and productivity. Our favored tools for collaboration are:

  • Trello helps us collaborate on OKRs and customer engagements and is where our teams are able to visualize, plan, organize and facilitate short term and long term tasks.
  • Google Drive allows us to collaborate in real-time as our documents are automatically saved so that nothing can ever be lost. In fact, Flux7 has a main door to Google Drive called the Flux7 Library, which is where all of our non-personnel resources and documents are stored. This is just one way we ensure resources are at employees’ fingertips, helping us to stay transparent, agile and innovative.
  • Zapier automates workflows for us. For example, we make extensive use of its Trello PowerUps to automate things like creating new Trello cards from Gmail messages or updating Trello cards with new Jira issues.
  • GitHub Repositories host and track changes to our code-related project files and GitHub’s Wiki tools allow us to host documentation for others to use and contribute. In fact, the Flux7 Wiki page is hosted in a Git Wiki. The Flux7 Wiki is home to a wide variety of resources — from a Flux7 Glossary to book reviews, PeopleOps tools and more.
  • HubSpot is a marketing automation and CRM solution where sales and marketing teams communicate and collaborate on everything from new sales leads to sharing sales collateral.

Agile Metrics and Measurements

At Flux7 our mantra is to experiment more, fail cheap, and measure results accurately. Helping us to measure accurately are:

  • Google Analytics gives Flux7 valuable detail about our website visitors, with clear insights into what they care most about. HubSpot analytics also gives us website data; as our CRM, when this data is paired with sales pipeline activity data, it gives us an incredibly rich view of the customer journey, helping us hone business strategy.
  • Slack analytics give Flux7 insight into how the team uses Slack. For example, how many messages were sent over the last 30 days, where conversations are happening and more.

Agile Management & More

Continuous learning and growth are central to Flux7’s culture and values of innovation, humbleness, and transparency. As such, we also have the technology to facilitate ongoing learning with:

  • The Flux7 internal e-book library where employees can check out e-books and audiobooks for ongoing education. Flux7 utilizes Overdrive to secure our online Internal Library. Topics range from Marketing and Business to IT and DevOps Instructional Resources. (For more on our Library, please refer to our blog: Flux7 Library Drives Culture of Learning, Sharing)
  • Flux7 also uses BambooHR to store peer feedback; anyone can initiate feedback by asking another peer to provide it. The feedback is stored in BambooHR and only the recipient can see it and turn the feedback into actionable results. BambooHR also contains important files like team assignments, who is on vacation, and recorded All-Hands meetings.
  • We use Okta for single sign-on, LastPass for password management, HubStaff for tracking time on projects, QuickBooks for finance, and more.
Bringing It All Together

IT automation is core to all we do at Flux7 and is instrumental in bringing together many of these tools. To give you an example, forecasting data from HubSpot is automatically sent to Slack with a Zapier integration that allows us to automatically see just-in-time forecasting data. We can share newly closed deals with the broader Flux7 team over Slack this way, too.
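Zapier handles this integration without any code; under the hood, though, an integration like the HubSpot-to-Slack forecast update amounts to posting a small JSON payload to a Slack incoming webhook. A minimal sketch of that idea in Python follows — the deal record, field names, and webhook URL are illustrative assumptions, not Flux7’s actual configuration:

```python
import json

# Illustrative forecast record, shaped roughly like a CRM deal object.
deal = {"name": "Acme Corp", "stage": "Closed Won", "amount": 48000}

# Slack incoming webhooks accept a JSON body with a "text" field.
payload = json.dumps({
    "text": f"Deal update: {deal['name']} moved to {deal['stage']} (${deal['amount']:,})"
})

# With a real webhook URL configured in Slack, the post would look like:
#   from urllib.request import Request, urlopen
#   urlopen(Request("https://hooks.slack.com/services/<placeholder>",
#                   data=payload.encode(),
#                   headers={"Content-Type": "application/json"}))
print(payload)
```

A tool like Zapier simply wires the CRM event to this kind of HTTP call, so no one on the team has to maintain the glue code.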

We have also integrated Git with Trello such that change notifications are sent as updates to the appropriate Trello card(s), keeping the right team members updated. Trello, in turn, notifies all relevant team members of the updated card information, automatically keeping all team members updated.

At Flux7, we believe in the value of cloud computing and removing levels of management from the process altogether. In fact, we are almost entirely serverless as a company — with only one server, for our website — which allows us to focus less on IT tasks like managing servers and more on delivering value to our customers and employees.

While there are many elements to becoming an Agile Enterprise, technology plays a pivotal role in communication, collaboration, and productivity. As the pace of the market continues to accelerate, agility can only be driven through flexible technologies that help us better anticipate and react to change. Don’t miss the fourth article in our series on the role of culture in building an Agile Enterprise. Subscribe to our blog below and get it direct to your inbox.

Subscribe to the Flux7 Blog

from Flux7 DevOps Blog

IT Modernization and DevOps News Week in Review


Last week, AWS CTO Werner Vogels spelled out the company’s path to the cloud business in an interview with WSJ CIO Today. Vogels shares how Amazon’s early IT challenges led to Amazon Web Services, which has become a revenue engine for Amazon; according to the company, AWS generated $8.4 billion in sales last quarter. A fascinating read if you’re on your own cloud evolution.

To stay up-to-date on DevOps best practices, CI/CD and IT Modernization, subscribe to our blog here:
Subscribe to the Flux7 Blog

DevOps News

  • A couple weeks ago we brought you news of Splunk’s SignalFX acquisition. Splunk this week continued its buying spree with an announcement that it will acquire Omnition. Described as a stealth-mode operation, Omnition will give Splunk distributed tracing capabilities for microservice application monitoring.
  • According to a blog announcement, Atlassian is making several updates to its cloud platform including:
    • New premium plans for many of our cloud products with advanced features and support
    • New free plans of Jira Software, Confluence, Jira Service Desk, and Jira Core
    • Discounted prices for academic and non-profit customers
    • New tools that strengthen control, security, and trust
  • Pulumi announced GA of version 1.0 of its Infrastructure as Code platform. Pulumi 1.0 includes new capabilities to help development and operations teams overcome organizational silos and grow productivity, reliability and security with familiar programming languages and open-source tools and frameworks.

AWS News

  • AWS launched a major improvement to how AWS Lambda functions work with Amazon VPC networks, changing the way functions connect to VPCs. AWS now leverages Hyperplane to provide NAT capabilities from the Lambda VPC to customer VPCs. The new mapping improves function startup performance and makes more efficient use of elastic network interfaces. The update is rolling out to all existing and new VPC functions at no additional cost.
  • Amazon EKS made two announcements. The service now allows operators to assign IAM permissions to Kubernetes service accounts, providing pod-level access control when running clusters with multiple co-located services. And Amazon EKS now supports Kubernetes version 1.14.6 for all clusters.
  • Amazon announced that AWS Config now provides automatic remediation. Operators can associate remediation actions with AWS Config rules, so remediation is carried out automatically, without manual intervention.
  • Amazon introduced a price reduction for the EFS Infrequent Access storage class. Last week AWS dropped its storage prices for EFS IA by 44%; according to a blog announcement, EFS IA storage prices now start at $0.025/GB-month.
  • Amazon QuickSight has new features for improved asset organization; enhanced anomaly detection capabilities; a new Word Cloud chart type to represent categorical fields; and has added support for Favorites that allow you to bookmark your dashboards and analyses.
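The AWS Config automatic-remediation item above maps to a single API call: `PutRemediationConfigurations`, which ties a remediation action to a Config rule. A hedged sketch in Python with boto3 — the rule name, SSM document, and role ARN below are illustrative assumptions, not values from the announcement:

```python
# Sketch: associate an SSM-document remediation action with an AWS Config rule,
# so noncompliant resources are fixed without manual intervention.
remediation = {
    "ConfigRuleName": "s3-bucket-public-read-prohibited",   # assumed rule name
    "TargetType": "SSM_DOCUMENT",
    "TargetId": "AWS-DisableS3BucketPublicReadWrite",       # assumed SSM document
    "Automatic": True,                                      # remediate automatically
    "MaximumAutomaticAttempts": 3,
    "RetryAttemptSeconds": 60,
    "Parameters": {
        "AutomationAssumeRole": {
            # Placeholder role ARN the automation would assume.
            "StaticValue": {"Values": ["arn:aws:iam::123456789012:role/remediation-role"]}
        },
        # RESOURCE_ID tells Config to pass in the noncompliant resource's ID.
        "S3BucketName": {"ResourceValue": {"Value": "RESOURCE_ID"}},
    },
}

# With AWS credentials configured, the association would be created via:
#   import boto3
#   boto3.client("config").put_remediation_configurations(
#       RemediationConfigurations=[remediation])
print(remediation["ConfigRuleName"], remediation["Automatic"])
```

Setting `"Automatic": True` is what distinguishes this from the older manual-remediation flow, where an operator had to trigger the action from the console.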

Flux7 News

  • Read Flux7’s newest article, The CIO’s Role in the Making of an Agile Enterprise. Lead, follow or get out of the way. A phrase famously coined by George Patton and later by Chrysler’s Lee Iacocca, it is also an apt prescription for the challenges CIOs face as organizations look to become agile enterprises. With the path to agility often running directly through IT, the CIO stands front and center in an organization’s transition to enterprise-wide agility. Read how CIOs can make the most of this situation.
  • For CIOs and technology leaders looking to dive deeper into leading the transition to an Agile Enterprise, Flux7 has also published a new paper on How CIOs Can Prepare an IT Platform for the Agile Enterprise. Download it today to learn how an agile enterprise architecture that supports agility with IT automation and DevOps best practices can be a key lever to helping IT engage with and improve the business.

Download the Paper Today


from Flux7 DevOps Blog

The CIO’s Role in the Making of an Agile Enterprise



This article originally appeared on Forbes.

Lead, follow or get out of the way. A phrase famously coined by George Patton and later by Chrysler’s Lee Iacocca, it is also an apt prescription for the challenges CIOs face as organizations look to become agile enterprises. With the path to agility often running directly through IT (as it enables those digital projects the CEO sees as driving competitiveness), the CIO stands front and center in an organization’s transition to enterprise-wide agility.

As the first area to adopt change with the aim of helping the business become an agile enterprise, CIOs have three options:

1. Follow the organization’s lead. These CIOs generally follow hesitantly, resisting change where possible as change is seen as a threat.

2. Get out of the way by moving to a different organization altogether where change does not threaten business as usual.

3. Lead by adopting an agile, servant-leader mindset. These CIOs recognize the incredible opportunity in front of them to guide their company’s digital transformation.

A Seat At The Table

To effectively lead the agile enterprise, CIOs need a seat at the table. Digital transformation can help carve this path in two important ways. First, it can create greater unity between IT and business units — as well as between the CIO and executives who lead these departments. By partnering with other executives to successfully achieve alignment, IT can more directly deliver results to business goals and objectives. As a result, this helps the company gain critical market advantages and exponentially grows the value of the CIO in the process. With a hat tip to Mark Schwartz and his book, A Seat at the Table, we see how the CIO can be a critical part of the value creation engine that helps them attain a seat at the table.

Second, helping the enterprise effectively navigate in today’s VUCA (volatile, uncertain, complex and ambiguous) world contributes dividends to the broader organizational value chain. Digital transformation can help the CIO create business results like reduced project risk and costs, increased software quality and predictability, and, ultimately, faster delivery of products and services to market.

Agile Elements For Success

An agile organization requires both an agile culture and agile technology automation that supports business goals, striking a balance between stability and flexibility. CIOs may operate in a market where they have fallen behind, or they may simply seek to navigate changes happening today. In either case, they need a platform that can quickly adapt to future market forces.

Let’s examine these two elements and how CIOs can embrace them, leading by example:

• Agile culture: McKinsey found in a recent survey that, “The greatest enablers of — or barriers to — a successful agile transformation are leadership and culture.” Seventy-six percent of executives McKinsey surveyed found that transforming the culture was their No. 1 challenge during an agile transformation.

CIOs play an incredibly important leadership role here, setting the right tone from the start. According to a PricewaterhouseCoopers survey, 73% of respondents felt that tone from the top influences people in a positive way, and 61% reported that it both enhances the quality of decision making and supports the overall value of the brand. Further reinforcing an agile culture is an agile team, which the CIO should hand-pick.

CIOs should look to create a cross-departmental team with individuals who exhibit traits like cooperation, humility, openness and self-motivation. The team should be empowered to create value for customers (both internal and external) in tight alignment with the business goals. The team should hold frequent reviews with emphasis given to measurable outputs that illustrate business (vs. technical) value.

• Agile IT Platform: As noted earlier, the path to greater agility often begins with digitization projects. A technology platform to support these efforts, and that can expand to address still ambiguous future needs, can be a critical tool for helping IT engage with and improve the business. (It’s worth noting here that a lack of such a platform can hinder business agility, cause shadow IT and undermine an agile culture, which can prevent a company from becoming an agile enterprise.)

The CIO must be holistically engaged when it comes to building an IT Platform due to the strategic importance of IT automation in supporting the agile enterprise. For it is through automation that the team will replace the significant overhead caused by manual jobs with time to spend on strategic, business-impacting work. While automation can be applied in many different forms, CIOs should ultimately look to build an agile IT platform that delivers business values such as bringing products to market faster; growing governance, security and compliance; powering better customer experiences and making it simpler to explore greenfield opportunities.

For example, I had the opportunity to speak with a CIO at a large insurance company. While consumers and shareholders alike view it as an industry leader, the CIO readily sees that it needs to make a digital transition, given forces in the market. The company’s business model makes heavy use of insurance agents to quote and sell coverage. In this model, it takes about a week to sign a new contract. Yet consumers are increasingly moving online to get insurance quotes and even to sign up for coverage — all without the help of a single agent. The CIO knows he needs to digitize and adapt to changing customer behavior, as well as competitors who are digitizing their efforts. The CIO’s leadership in driving digital transformation, and an evolution toward an agile enterprise, is giving the business an avenue to recover lost ground and expand their future competitiveness.

Agile change most often starts with digital transformation, which directly implicates the CIO in a company’s successful transition to an agile enterprise. This hyper-critical role in ensuring agile success may be the biggest career opportunity many CIOs have seen. By aligning IT agility with business goals, leading by example, and setting the right technical stage, CIOs can help their companies effectively navigate the pace of market change, driving unparalleled responsiveness through an agile IT culture and flexible IT platform.

Download the Paper Today

from Flux7 DevOps Blog