Tag: EC2

Digital Transformation Boosts Manufacturing Agility, Competitiveness

Few industries face the level of global competition that manufacturing does. To compete and realize the promise of Industry 4.0, manufacturers are increasingly embracing digital transformation. Evolving the business — from the manufacturing floor to the sales office — is a holistic effort that requires a smart IT roadmap strategy and effective execution. In today’s blog, we’re taking a look at the digital transformation journeys of several manufacturers and how they have benefited their productivity, efficiency and, ultimately, their strategic market position.

Drive Scientific Innovation with DevOps Automation
While the current shortage of digital talent in manufacturing is “very high”, according to research by The Manufacturing Institute, DevOps automation increases employee efficiency by creating a platform that enables researchers, engineers, and scientists to focus on their core work. And so it was that the Infrastructure Engineering team at Toyota Research Institute decided to support its researchers and engineers by making it easier for them to utilize the power of the cloud with automation.

Working with the Flux7 DevOps consulting team, they implemented DevOps methods, processes, and automation to reduce tactical, manual IT operations activities. Researchers and engineers can use a self-serve portal to quickly and easily provision the AWS assets they need to test new ideas, making them more productive because they no longer have to wait for the infrastructure team to spin up resources.
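
What might such a self-serve portal do behind the scenes? Here is a minimal sketch in Python, assuming provisioning is done by launching a pre-approved CloudFormation template; the template URL, stack naming, and parameter names are hypothetical illustrations, not TRI’s actual implementation.

import boto3

cfn = boto3.client("cloudformation")

def provision_sandbox(researcher: str, template_url: str) -> str:
    """Launch an isolated, security-vetted sandbox stack for a researcher."""
    response = cfn.create_stack(
        StackName=f"sandbox-{researcher}",
        TemplateURL=template_url,  # pre-approved, security-hardened template
        Parameters=[{"ParameterKey": "Owner", "ParameterValue": researcher}],
        TimeoutInMinutes=30,  # fail fast if provisioning stalls
    )
    return response["StackId"]

Because each sandbox is a single stack, destroying it to start over is one delete_stack call.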

Having a secure cloud sandbox environment enables them to try new ideas, fail fast, destroy the sandbox if needed, and start over, enabling researchers to innovate at velocity and at scale. According to Mike Garrison, the technical lead for Infrastructure Engineering at TRI, as quoted in DevOps.com, “Modern cloud infrastructure and DevOps automation are empowering us to quickly remove any barriers that get in the way, allowing the team to do their best work, advance research quickly, push boundaries and transform the industry.”

Similarly, Flux7 worked with a large US manufacturer to adopt elastic high-performance computing (HPC) that facilitates the company’s scientific simulations for various aspects of designing new machinery. These HPC simulations were hosted in the company’s traditional data center, yet scaling them to meet dynamic demand required extensive planning and a great deal of capital expense. Moving its HPC simulations to the cloud meant that it could innovate for the future faster, with capacity that scales to dynamic demand, while greatly reducing internal resource overhead and costs.

IoT for Industry 4.0

Linking IoT devices with the cloud and analytics infrastructure can unlock critical real-time data that enables preventive maintenance and extends system productivity. This kind of data can help staff proactively address issues before they occur thus creating greater system uptime, overall equipment effectiveness, and a greater ROI for capital equipment. For a large equipment manufacturer looking to gather data from its geographically dispersed machines, Flux7 helped set up an AWS IoT infrastructure.

The two teams modernized and migrated several applications to the cloud, connecting them with a new AWS Data Lake. (AWS recently announced AWS Lake Formation to help with this process; check out our blog on it here.) The new system collects important data from the field, processing it to make predictions helpful to its customers’ operations. The data also helps with machine maintenance schedules, ensuring that machines are serviced appropriately, thus increasing uptime. Moreover, processes that previously took days were reduced to 15 minutes, freeing developer time for strategic work while creating a new revenue stream for the manufacturer.
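
The ingestion side of a pipeline like this is conceptually simple. A minimal sketch, assuming telemetry is pushed to a Kinesis Data Firehose delivery stream that lands in the S3 data lake; the stream name and payload shape are hypothetical.

import json
import boto3

firehose = boto3.client("firehose")

def ship_telemetry(machine_id: str, temperature_c: float, vibration_hz: float) -> None:
    """Send one machine telemetry record toward the S3 data lake."""
    record = {
        "machine_id": machine_id,
        "temperature_c": temperature_c,
        "vibration_hz": vibration_hz,
    }
    firehose.put_record(
        DeliveryStreamName="machine-telemetry",  # hypothetical stream name
        Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
    )

ship_telemetry("press-042", 71.5, 13.2)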

Set the Foundation for the Agile Enterprise

While becoming an Agile Enterprise will help manufacturers realize the promise of Industry 4.0, digital transformation is a journey that requires a smart roadmap and solid execution. Flux7 partnered with a Fortune 500 manufacturer in its Agile Enterprise evolution. The company reached out to AWS Premier Consulting Partner Flux7 to help it embark on a digital transformation that would eventually work its way through the company’s various departments — from enterprise architecture to application development and security — and business units, such as embedded systems and credit services.

The transformation started with a limited Amazon cloud migration and moved on to include:

  • IoT and an AWS Data Lake
  • EU data privacy regulatory compliance
  • Serverless monitoring and notification, with a goal to use advanced automation to alert operations and information security teams of any known issues surfacing in the account, or violations of the corporate security standard
  • Advanced automation to simplify maintenance and improve security and compliance
  • Amazon VPC automation for faster onboarding

The outcome has been a complete agile adoption of Flux7’s Enterprise DevOps Framework for greater security, cost efficiencies, and reliability. Enabled by solutions that connect its equipment and customer communities, the digital transformation effectively supports the company’s ultimate goal to create an unrivaled experience for its customers and partners.

From smart production to smart logistics and even smart product design and smarter sales and marketing efforts, a technology-driven transformation will help manufacturers achieve greater fault-tolerance, productivity, and ultimately revenue.

For additional manufacturing use case stories:

Subscribe to the Flux7 Blog

from Flux7 DevOps Blog

Build A Best Practice AWS Data Lake Faster with AWS Lake Formation


The world’s first gigabyte hard drive was the size of a refrigerator — and that wasn’t all that long ago. Clearly, technology has evolved, and so have our data storage and analysis needs. With data serving a key role in helping companies unearth intelligence that can provide a competitive advantage, solutions that allow organizations to end data silos and help create actionable business outcomes from intelligent data analysis are gaining traction. 

According to the 2018 Big Data Trends and Challenges report by Dimensional Research, the number of firms with an average data lake size over 100 terabytes grew from 36% in 2017 to 44% in 2018, a trend that’s sure to continue, especially as cloud providers like AWS introduce services such as the newly announced AWS Lake Formation that streamline the process of creating and managing a data lake. As such, in today’s blog, we’re going to take a look at the new AWS Lake Formation service and share our take on its features, benefits, and things we’d like to see in the next version of the service.

What is AWS Lake Formation?

AWS Lake Formation is the newest service from AWS. It is designed to streamline the process of building a data lake in AWS, creating a full solution in just days. At a high level, AWS Lake Formation provides best practice templates and workflows for creating data lakes that are secure, compliant and operate effectively. The overall goal is to provide a solution that is well architected to identify, ingest, clean and transform data while enforcing appropriate security policies to enable firms to focus on gaining new insights, rather than building data lake infrastructure.

Before the release of AWS Lake Formation, organizations needed to take several steps to build their data lake. Not only was the process time-consuming, but several points in the process proved difficult for the average operator. For example, users needed to set up their own Amazon S3 storage; deploy AWS Glue to prepare the data for analysis through the automated extract, transform and load (ETL) process; configure and enforce security policies; ensure compliance; and more. Each part of the process offered room for missteps, making the overall data lake setup challenging and, for many, a month-plus-long process.
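
To make that concrete, here is a minimal sketch of just the first two of those manual steps (raw storage plus an AWS Glue crawler to populate the Data Catalog); the bucket, role, and database names are hypothetical, and a real build would add ETL jobs, security policies, and auditing on top.

import boto3

s3 = boto3.client("s3")
glue = boto3.client("glue")

# Step 1: storage for the raw zone of the lake.
s3.create_bucket(Bucket="example-data-lake-raw")

# Step 2: a crawler to discover schemas and fill the Glue Data Catalog.
glue.create_crawler(
    Name="raw-zone-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",  # hypothetical role
    DatabaseName="data_lake_raw",
    Targets={"S3Targets": [{"Path": "s3://example-data-lake-raw/"}]},
)
glue.start_crawler(Name="raw-zone-crawler")

Security policies, compliance checks, and ETL jobs each needed similar hand-wiring, which is exactly the burden Lake Formation aims to remove.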

AWS Data Lake Benefits

AWS has solved many of these challenges with AWS Lake Formation, which offers three key areas of benefit plus one feature that we think is a neat, supporting addition.

  1. Templates – The new AWS Lake Formation provides templates for a number of common tasks. We are most excited about the templates for AWS Glue, which is important as this is an area where many organizations find they need to loop in AWS engineering for best practice help. Glue templates show that AWS really is listening to its customers and providing guidance where they need it most. In addition, our AWS consulting team was really happy to see templates that simplify the import of data and templates for the management of long-running cron jobs. These reusable templates will streamline each part of the data lake process.
  2. Cloud Security Solutions – Data is the lifeblood of an organization and for many companies, it is the foundation of their IP. As a result, sound security (and compliance) must be a key consideration for any data lake solution. AWS is definitely singing from that hymn book with AWS Lake Formation as they have created opportunities for security at the most granular of levels — not just securing the S3 bucket, but the data catalog as well. For example, at the data catalog level, you could specify which columns of data a Lambda function can read, or revoke a user’s permissions to a specific database; see the sketch after this list. (AWS notes that row-level tagging will be in a future version of the solution.)
  3. Machine Learning Transformations – AWS provides algorithms for its customers to create their own machine learning solutions. AWS cites record de-duplication as a use case here, illustrating how ML can help clean and update data. However, we see this feature as being particularly interesting to firms in industries like pharmaceuticals where a company could, for example, use it to mine and predictively match chemical patterns to patients or in the oil and gas industry where ML can be applied to learn from field-based data points to maximize oil production.
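
Here is a minimal sketch of the column-level control described in item 2, granting a principal SELECT on just two columns of a catalog table; the database, table, column, and role names are hypothetical.

import boto3

lakeformation = boto3.client("lakeformation")

lakeformation.grant_permissions(
    Principal={
        "DataLakePrincipalIdentifier": "arn:aws:iam::123456789012:role/report-lambda"
    },
    Resource={
        "TableWithColumns": {
            "DatabaseName": "sales",
            "Name": "orders",
            "ColumnNames": ["order_id", "order_total"],  # only these columns
        }
    },
    Permissions=["SELECT"],
)

A matching revoke_permissions call takes the same shape, which is what makes revoking a user’s access to a specific database a one-liner.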

Also neat, but not marquee-stealing, is the AWS Lake Formation feature that allows users to add metadata and tag data catalog objects. For developers, in particular, this is a nice-to-have feature as it will allow them to more easily search all this data. Separately, we also like that AWS Lake Formation users will only pay for the underlying services used and that there are no additional charges.  

Ready to Swim?

One feature we’d like to see in an upcoming release of Lake Formation is integration with directory services like AD. This will help further streamline the process of controlling data access to ensure permissions are revoked when, for example, an employee leaves the organization or changes workgroups. 

Moreover, while AWS Lake Formation greatly streamlines the process of building a data lake, being able to create your own templates moving forward may still remain a challenge for some organizations. At Flux7, we teach organizations how to build, manage and maintain templates for this — and many other AWS solutions — and can help your team ensure your templates incorporate Well Architected best practice standards on an ongoing basis.

Ready to dive into your own AWS data lake solution? Check out our AWS Data Lake solution case study on how a healthcare provider addressed its rapid data expansion and data complexity with AWS and Flux7 DevOps consulting, enabling it to quickly analyze information and make important data connections. Impact your time to market, customer experience and market position today with our AWS database services.

Subscribe to the Flux7 Blog

from Flux7 DevOps Blog

Introducing the capacity-optimized allocation strategy for Amazon EC2 Spot Instances

AWS announces the new capacity-optimized allocation strategy for Amazon EC2 Auto Scaling and EC2 Fleet. This new strategy automatically makes the most efficient use of spare capacity while still taking advantage of the steep discounts offered by Spot Instances. It’s a new way for you to gain easy access to extra EC2 compute capacity in the AWS Cloud.

This post compares how the capacity-optimized allocation strategy deploys capacity compared to the current lowest-price allocation strategy.

Overview

Spot Instances are spare EC2 compute capacity in the AWS Cloud available to you at savings of up to 90% off compared to On-Demand prices. The only difference between On-Demand Instances and Spot Instances is that Spot Instances can be interrupted by EC2 with two minutes of notification when EC2 needs the capacity back.

When making requests for Spot Instances, customers can take advantage of allocation strategies within services such as EC2 Auto Scaling and EC2 Fleet. The allocation strategy determines how the Spot portion of your request is fulfilled from the possible Spot Instance pools you provide in the configuration.

The existing allocation strategy available in EC2 Auto Scaling and EC2 Fleet is called “lowest-price” (with an option to diversify across N pools). This strategy allocates capacity strictly based on the lowest-priced Spot Instance pool or pools. The “diversified” allocation strategy (available in EC2 Fleet but not in EC2 Auto Scaling) spreads your Spot Instances across all the Spot Instance pools you’ve specified as evenly as possible.

As the AWS global infrastructure has grown over time in terms of geographic Regions and Availability Zones, as well as the raw number of EC2 instance families and types, so has the amount of spare EC2 capacity. Therefore, it is important that customers have access to tools to help them utilize spare EC2 capacity optimally. The new capacity-optimized strategy for both EC2 Auto Scaling and EC2 Fleet provisions Spot Instances from the most-available Spot Instance pools by analyzing capacity metrics.

Walkthrough

To illustrate how the capacity-optimized allocation strategy deploys capacity compared to the existing lowest-price allocation strategy, here are examples of Auto Scaling group configurations and use cases for each strategy.

Lowest-price (diversified over N pools) allocation strategy

The lowest-price allocation strategy deploys Spot Instances from the pools with the lowest price in each Availability Zone. This strategy has an optional modifier SpotInstancePools that provides the ability to diversify over the N lowest-priced pools in each Availability Zone.

Spot pricing changes slowly over time based on long-term trends in supply and demand, but capacity fluctuates in real time. The lowest-price strategy does not account for pool capacity depth as it deploys Spot Instances.

As a result, the lowest-price allocation strategy is a good choice for workloads with a low cost of interruption that want the lowest possible prices, such as:

  • Time-insensitive workloads
  • Extremely transient workloads
  • Workloads that are easily check-pointed and restarted

Example

The following example configuration shows how capacity could be allocated in an Auto Scaling group using the lowest-price allocation strategy diversified over two pools:

{
  "AutoScalingGroupName": "runningAmazonEC2WorkloadsAtScale",
  "MixedInstancesPolicy": {
    "LaunchTemplate": {
      "LaunchTemplateSpecification": {
        "LaunchTemplateName": "my-launch-template",
        "Version": "$Latest"
      },
      "Overrides": [
        {
          "InstanceType": "c3.large"
        },
        {
          "InstanceType": "c4.large"
        },
        {
          "InstanceType": "c5.large"
        }
      ]
    },
    "InstancesDistribution": {
      "OnDemandPercentageAboveBaseCapacity": 0,
      "SpotAllocationStrategy": "lowest-price",
      "SpotInstancePools": 2
    }
  },
  "MinSize": 10,
  "MaxSize": 100,
  "DesiredCapacity": 60,
  "HealthCheckType": "EC2",
  "VPCZoneIdentifier": "subnet-a1234567890123456,subnet-b1234567890123456,subnet-c1234567890123456"
}

In this configuration, you request 60 Spot Instances because DesiredCapacity is set to 60 and OnDemandPercentageAboveBaseCapacity is set to 0. The example follows Spot best practices and is flexible across c3.large, c4.large, and c5.large in us-east-1a, us-east-1b, and us-east-1c (mapped according to the subnets in VPCZoneIdentifier). The Spot allocation strategy is set to lowest-price over two SpotInstancePools.
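
The JSON above maps one-to-one onto the EC2 Auto Scaling API, so the same configuration can be applied programmatically; a minimal boto3 sketch, assuming the launch template "my-launch-template" and the subnets already exist.

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="runningAmazonEC2WorkloadsAtScale",
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateName": "my-launch-template",
                "Version": "$Latest",
            },
            # Diversify across three instance types for deeper Spot pools.
            "Overrides": [
                {"InstanceType": "c3.large"},
                {"InstanceType": "c4.large"},
                {"InstanceType": "c5.large"},
            ],
        },
        "InstancesDistribution": {
            "OnDemandPercentageAboveBaseCapacity": 0,  # all Spot
            "SpotAllocationStrategy": "lowest-price",
            "SpotInstancePools": 2,
        },
    },
    MinSize=10,
    MaxSize=100,
    DesiredCapacity=60,
    HealthCheckType="EC2",
    VPCZoneIdentifier="subnet-a1234567890123456,subnet-b1234567890123456,subnet-c1234567890123456",
)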

First, EC2 Auto Scaling tries to make sure that it balances the requested capacity across all the Availability Zones provided in the request. To do so, it splits the target capacity request of 60 across the three zones. Then, the lowest-price allocation strategy allocates the Spot Instance launches to the lowest-priced pool per zone.

Using the example Spot prices shown in the following table, the resulting allocation is:

  • 20 Spot Instances from us-east-1a (10 c3.large, 10 c4.large)
  • 20 Spot Instances from us-east-1b (10 c3.large, 10 c4.large)
  • 20 Spot Instances from us-east-1c (10 c3.large, 10 c4.large)

Availability Zone | Instance type | Spot Instances allocated | Spot price
us-east-1a        | c3.large      | 10                       | $0.0294
us-east-1a        | c4.large      | 10                       | $0.0308
us-east-1a        | c5.large      | 0                        | $0.0408
us-east-1b        | c3.large      | 10                       | $0.0294
us-east-1b        | c4.large      | 10                       | $0.0308
us-east-1b        | c5.large      | 0                        | $0.0387
us-east-1c        | c3.large      | 10                       | $0.0294
us-east-1c        | c4.large      | 10                       | $0.0331
us-east-1c        | c5.large      | 0                        | $0.0353

The cost for this Auto Scaling group is $1.83/hour. Of course, the Spot Instances are allocated according to the lowest price and are not optimized for capacity. The Auto Scaling group could experience higher interruptions if the lowest-priced Spot Instance pools are not as deep as others, since upon interruption the Auto Scaling group will attempt to re-provision instances into the lowest-priced Spot Instance pools.
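
Workloads that accept this tradeoff can soften it by watching for the two-minute interruption notice and checkpointing. A minimal sketch that polls the instance metadata service from the instance itself (IMDSv1 shown for brevity); the checkpoint step is a hypothetical stand-in for your own logic.

import time

import requests

METADATA_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

def wait_for_interruption_notice(poll_seconds: int = 5) -> dict:
    """Block until EC2 schedules an interruption, then return its details."""
    while True:
        response = requests.get(METADATA_URL, timeout=2)
        if response.status_code == 200:
            return response.json()  # e.g. {"action": "terminate", "time": "..."}
        time.sleep(poll_seconds)    # a 404 means no interruption is scheduled

notice = wait_for_interruption_notice()
print(f"Interruption at {notice['time']}; checkpointing work now...")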

Capacity-optimized allocation strategy

There is a price associated with interruptions, restarting work, and checkpointing. While the overall hourly cost of the capacity-optimized allocation strategy might be slightly higher, the possibility of having fewer interruptions can lower the overall cost of your workload.

The effectiveness of the capacity-optimized allocation strategy depends on following Spot best practices by being flexible and providing as many instance types and Availability Zones (Spot Instance pools) as possible in the configuration. It is also important to understand that as capacity demands change, the allocations provided by this strategy also change over time.

Remember that Spot pricing changes slowly over time based on long-term trends in supply and demand, but capacity fluctuates in real time. The capacity-optimized strategy does account for pool capacity depth as it deploys Spot Instances, but it does not account for Spot prices.

As a result, the capacity-optimized allocation strategy is a good choice for workloads with a high cost of interruption, such as:

  • Big data and analytics
  • Image and media rendering
  • Machine learning
  • High performance computing

Example

The following example configuration shows how capacity could be allocated in an Auto Scaling group using the capacity-optimized allocation strategy:

{
  "AutoScalingGroupName": "runningAmazonEC2WorkloadsAtScale",
  "MixedInstancesPolicy": {
    "LaunchTemplate": {
      "LaunchTemplateSpecification": {
        "LaunchTemplateName": "my-launch-template",
        "Version": "$Latest"
      },
      "Overrides": [
        {
          "InstanceType": "c3.large"
        },
        {
          "InstanceType": "c4.large"
        },
        {
          "InstanceType": "c5.large"
        }
      ]
    },
    "InstancesDistribution": {
      "OnDemandPercentageAboveBaseCapacity": 0,
      "SpotAllocationStrategy": "capacity-optimized"
    }
  },
  "MinSize": 10,
  "MaxSize": 100,
  "DesiredCapacity": 60,
  "HealthCheckType": "EC2",
  "VPCZoneIdentifier": "subnet-a1234567890123456,subnet-b1234567890123456,subnet-c1234567890123456"
}

In this configuration, you request 60 Spot Instances because DesiredCapacity is set to 60 and OnDemandPercentageAboveBaseCapacity is set to 0. The example follows Spot best practices (especially critical when using the capacity-optimized allocation strategy) and is flexible across c3.large, c4.large, and c5.large in us-east-1a, us-east-1b, and us-east-1c (mapped according to the subnets in VPCZoneIdentifier). The Spot allocation strategy is set to capacity-optimized.

First, EC2 Auto Scaling tries to make sure that the requested capacity is evenly balanced across all the Availability Zones provided in the request. To do so, it splits the target capacity request of 60 across the three zones. Then, the capacity-optimized allocation strategy optimizes the Spot Instance launches by analyzing capacity metrics per instance type per zone. This is because this strategy effectively optimizes by capacity instead of by the lowest price (hence its name).

Using the example Spot prices shown in the following table, the resulting allocation is:

  • 20 Spot Instances from us-east-1a (20 c4.large)
  • 20 Spot Instances from us-east-1b (20 c3.large)
  • 20 Spot Instances from us-east-1c (20 c5.large)

Availability Zone | Instance type | Spot Instances allocated | Spot price
us-east-1a        | c3.large      | 0                        | $0.0294
us-east-1a        | c4.large      | 20                       | $0.0308
us-east-1a        | c5.large      | 0                        | $0.0408
us-east-1b        | c3.large      | 20                       | $0.0294
us-east-1b        | c4.large      | 0                        | $0.0308
us-east-1b        | c5.large      | 0                        | $0.0387
us-east-1c        | c3.large      | 0                        | $0.0294
us-east-1c        | c4.large      | 0                        | $0.0308
us-east-1c        | c5.large      | 20                       | $0.0353

The cost for this Auto Scaling group is $1.91/hour, only 5% more than the lowest-priced example above. However, notice the distribution of the Spot Instances is different. This is because the capacity-optimized allocation strategy determined this was the most efficient distribution from an available capacity perspective.

Conclusion

Consider using the new capacity-optimized allocation strategy to make the most efficient use of spare capacity, automatically deploying into the most-available Spot Instance pools while still taking advantage of the steep discounts provided by Spot Instances.

This allocation strategy may be especially useful for workloads with a high cost of interruption, including:

  • Big data and analytics
  • Image and media rendering
  • Machine learning
  • High performance computing

No matter which allocation strategy you choose, you still enjoy the steep discounts provided by Spot Instances. These discounts are possible thanks to the stable Spot pricing made available with the new Spot pricing model.

Chad Schmutzer is a Principal Developer Advocate for the EC2 Spot team. Follow him on Twitter to get the latest updates on saving at scale with Spot Instances, to provide feedback, or just to say hi.

from AWS Compute Blog

IT Modernization and DevOps News Week in Review


Container security was top of mind this week as Kubernetes announced the results of its first security audit. The review looked at Kubernetes 1.13.4 and found 37 vulnerability issues, including five high-severity issues and 17 medium-severity issues. We are happy to report that fixes for these issues have already been deployed.

Container security was also top of mind for McAfee, which said this week it has acquired NanoSec, a California container security startup. This came as the Cloud Security Alliance introduced its Egregious Eleven, the most salient threats, risks and vulnerabilities in cloud environments identified in its Fourth Annual Top Threats survey. Two key themes that emerged this year are a maturation in the understanding of the cloud and respondents’ desire to address security issues higher up the technology stack that are the result of senior management decisions. While you can check out the report yourself, the top concerns are: Data Breaches; Misconfiguration and Inadequate Change Control; Lack of Cloud Security Architecture and Strategy; and Insufficient Identity, Credential, Access and Key Management.

To stay up-to-date on DevOps security, CI/CD and IT Modernization, subscribe to our blog here:

Subscribe to the Flux7 Blog

DevOps News

  • This past week HashiCorp released an official Helm Chart for Vault. Operators can reduce the complexity of running Vault on Kubernetes with the new Helm Chart, as it provides a repeatable deployment process in less time; for example, HashiCorp reports that using the Helm Chart allows operators to start a Vault cluster running on Kubernetes in just minutes. The Helm Chart allows you to run Vault directly on Kubernetes, so in addition to the native integrations provided by Vault itself, any other tool built for Kubernetes can choose to leverage Vault. Note that a Helm Chart for Vault Enterprise will be available in the future.
  • In response to feedback, GitHub is bringing CI/CD support to GitHub Actions. Available November 13, the new support will allow users to easily automate how they build, test, and deploy projects across platforms — Linux, macOS, and Windows — in containers or virtual machines, and across languages and frameworks such as Node.js, Python, Java, PHP, Ruby, C/C++, .NET, Android, and iOS. GitHub Actions is an API that orchestrates workflows, based on any event, while GitHub manages the execution, provides rich feedback and secures every step along the way. 
  • Jenkins monitoring got a boost this week as Instana announced the addition of Jenkins monitoring to its automatic Application Performance Management (APM) solution, part of its focus on adding performance management for systems in other steps of the application delivery process. According to Peter Abrams, the company’s COO and co-founder, “A common theme amongst Instana customers is the need to deliver and deploy quality applications faster, and Jenkins is a critical component of that delivery process.” The new capabilities include performance visibility of individual builds and deployments, and health monitoring of the Jenkins tool stack.

AWS News 

  • The long-awaited AWS Lake Formation is now generally available. Introduced at re:Invent last fall, Lake Formation makes it easy to ingest, clean, catalog, transform, and secure data, making it available for analytics and machine learning. Operators work from a central console to manage their data lake and are able to configure the right access permissions and secure access to metadata in the Glue Data Catalog and data stored in S3 using a single set of granular data access policies defined in Lake Formation. AWS Lake Formation notably works with data already in S3, allowing operators to easily register their existing data with Lake Formation.
  • In related news, it was announced that Amazon Redshift Spectrum now supports column-level access control for data stored in Amazon S3 and managed by AWS Lake Formation. This column-level access control helps limit access to only specific columns of a table rather than allowing access to all columns of a table, a key part of data governance and security needs of many enterprises.
  • Our AWS Consulting team enjoyed these two AWS blogs. The first, Auto-populate instance details by integrating AWS Config with your ServiceNow CMDB, shares how to ensure CMDB accuracy by integrating AWS Config and ServiceNow so that an AWS Config notification creates a server record in the CMDB, and then walks through testing the setup.
  • Focused on security by design, we are always interested in how to securely share keys. Therefore, this blog, How to deploy CloudHSM to securely share your keys with your SaaS provider caught our attention. In it, Vinod Madabushi shares two options for deploying and managing a CloudHSM cluster to secure keys, while still allowing trusted third-party SaaS providers to securely access the HSM cluster to perform cryptographic operations.  
  • Amazon announced that operators can now use AWS PrivateLink in the AWS GovCloud (US-East) Region. Already available in several other regions, AWS PrivateLink allows operators to privately access services hosted on AWS without using public IPs and without requiring the traffic to traverse the internet.

Flux7 News

  • Read our latest AWS Case Study, the story of how Flux7 DevOps consultants teamed with a global retailer to create a platform for scalable innovation. To accelerate its cloud migration and standardize its development efforts, the joint client-Flux7 team identified a solution: a DevOps Dashboard that would automatically apply the company’s various standards as cloud infrastructure is deployed. 
  • For CIOs and technology leaders looking to lead the transition to an Agile Enterprise, Flux7 has published a new paper on How CIOs Can Prepare an IT Platform for the Agile Enterprise. Download it today to learn how a technology platform that supports agility with IT automation and DevOps best practices can be a key lever to helping IT engage with and improve the business. 

Download the Paper Today

Written by Flux7 Labs

Flux7 is the only Sherpa on the DevOps journey that assesses, designs, and teaches while implementing a holistic solution for its enterprise customers, thus giving its clients the skills needed to manage and expand on the technology moving forward. Not a reseller or an MSP, Flux7 recommendations are 100% focused on customer requirements and creating the most efficient infrastructure possible that automates operations, streamlines and enhances development, and supports specific business goals.

from Flux7 DevOps Blog

ICYMI: Serverless Q2 2019

This post is courtesy of Moheeb Zara, Senior Developer Advocate – AWS Serverless

Welcome to the sixth edition of the AWS Serverless ICYMI (in case you missed it) quarterly recap. Every quarter, we share all of the most recent product launches, feature enhancements, blog posts, webinars, Twitch live streams, and other interesting things that you might have missed!

In case you missed our last ICYMI, check out what happened last quarter here.

April - June 2019

Amazon EventBridge

Before we dive in to all that happened in Q2, we’re excited about this quarter’s launch of Amazon EventBridge, the serverless event bus that connects application data from your own apps, SaaS applications, and AWS services. This allows you to create powerful event-driven serverless applications using a variety of event sources.

Our very own AWS Solutions Architect, Mike Deck, sat down with AWS Serverless Hero Jeremy Daly and recorded a podcast on Amazon EventBridge. It’s a worthy listen if you’re interested in exploring all the features offered by this launch.

Now, back to Q2, here’s what’s new.

AWS Lambda

Lambda Monitoring

Amazon CloudWatch Logs Insights now allows you to see statistics from recent invocations of your Lambda functions in the Lambda monitoring tab.
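
Those invocation statistics come from the REPORT line Lambda writes to its log group; here is a minimal sketch of pulling the same numbers yourself with Logs Insights via boto3, assuming a hypothetical log group name.

import time

import boto3

logs = boto3.client("logs")

query_id = logs.start_query(
    logGroupName="/aws/lambda/my-function",  # hypothetical function
    startTime=int(time.time()) - 3600,       # the last hour
    endTime=int(time.time()),
    queryString='filter @type = "REPORT" '
                "| stats avg(@duration), max(@maxMemoryUsed) by bin(5m)",
)["queryId"]

# Logs Insights queries run asynchronously; poll until complete.
results = logs.get_query_results(queryId=query_id)
while results["status"] in ("Scheduled", "Running"):
    time.sleep(1)
    results = logs.get_query_results(queryId=query_id)
print(results["results"])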

Additionally, as of June, you can monitor the Lambda@Edge functions associated with your Amazon CloudFront distributions directly from your Amazon CloudFront console. This includes a revamped monitoring dashboard for CloudFront distributions and Lambda@Edge functions.

AWS Step Functions

AWS Step Functions now supports workflow execution events, which help in the building and monitoring of event-driven serverless workflows. Execution event notifications can be delivered automatically to Amazon CloudWatch Events/Amazon EventBridge when an execution starts or completes, allowing services such as AWS Lambda, Amazon SNS, Amazon Kinesis, or AWS Step Functions to respond to these events.

Additionally, you can use callback patterns to automate workflows for applications with human activities and custom integrations with third-party services. You can create callback patterns in minutes, with less code to write and maintain; they run without servers or infrastructure to manage, and they scale reliably.
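
In a callback pattern, a state started with the waitForTaskToken service integration pauses until your code returns the task token. A minimal sketch of the completing side, assuming the token reached your application through SQS or a similar channel.

import json

import boto3

sfn = boto3.client("stepfunctions")

def complete_human_approval(task_token: str, approved: bool) -> None:
    """Resume a paused Step Functions execution with a human's decision."""
    if approved:
        sfn.send_task_success(
            taskToken=task_token,
            output=json.dumps({"approved": True}),
        )
    else:
        sfn.send_task_failure(
            taskToken=task_token,
            error="Rejected",
            cause="Reviewer declined the request",
        )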

Amazon API Gateway

API Gateway Tag Based Control

Amazon API Gateway now offers tag-based access control for WebSocket APIs using AWS Identity and Access Management (IAM) policies, allowing you to categorize API Gateway resources for WebSocket APIs by purpose, owner, or other criteria. With the addition of tag-based access control to WebSocket resources, you can now give permissions to WebSocket resources at various levels by creating policies based on tags. For example, you can grant full access to admins while limiting access for developers.

You can now enforce a minimum Transport Layer Security (TLS) version and cipher suites through a security policy for connecting to your Amazon API Gateway custom domain.

In addition, Amazon API Gateway now allows you to define VPC Endpoint policies, enabling you to specify which Private APIs a VPC Endpoint can connect to. This enables granular security control using VPC Endpoint policies.

AWS Amplify

Amplify CLI (part of the open source Amplify Framework) now includes support for adding and configuring AWS Lambda triggers for events when using Amazon Cognito, Amazon Simple Storage Service, and Amazon DynamoDB as event sources. This means you can set up custom authentication flows for mobile and web applications via the Amplify CLI and Amazon Cognito User Pool as an authentication provider.

Amplify Console

Amplify Console, a Git-based workflow for continuous deployment and hosting of fullstack serverless web apps, launched several updates to the build service, including SAM CLI and custom container support.

Amazon Kinesis

Amazon Kinesis Data Firehose can now utilize AWS PrivateLink to securely ingest data. AWS PrivateLink provides private connectivity between VPCs, AWS services, and on-premises applications, securely over the Amazon network. When AWS PrivateLink is used with Amazon Kinesis Data Firehose, all traffic to a Kinesis Data Firehose from a VPC flows over a private connection.
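
A minimal sketch of the VPC side of that setup, creating an interface endpoint so Firehose traffic never leaves the Amazon network; the VPC, subnet, and security group IDs are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.kinesis-firehose",
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,  # resolve the default Firehose endpoint privately
)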

You can now assign AWS resource tags to applications in Amazon Kinesis Data Analytics. These key/value tags can be used to organize and identify resources, create cost allocation reports, and control access to resources within Amazon Kinesis Data Analytics.

Amazon Kinesis Data Firehose is now available in the AWS GovCloud (US-East), Europe (Stockholm), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), and EU (London) regions.

For a complete list of where Amazon Kinesis Data Analytics is available, please see the AWS Region Table.

AWS Cloud9

Cloud9 Quick Starts

Amazon Web Services (AWS) Cloud9 integrated development environment (IDE) now has a Quick Start which deploys in the AWS cloud in about 30 minutes. This enables organizations to provide developers a powerful cloud-based IDE that can edit, run, and debug code in the browser and allow easy sharing and collaboration.

AWS Cloud9 is also now available in the EU (Frankfurt) and Asia Pacific (Tokyo) regions. For a current list of supported regions, see AWS Regions and Endpoints in the AWS documentation.

Amazon DynamoDB

You can now tag Amazon DynamoDB tables when you create them. Tags are labels you can attach to AWS resources to make them easier to manage, search, and filter.  Tagging support has also been extended to the AWS GovCloud (US) Regions.

DynamoDBMapper now supports Amazon DynamoDB transactional API calls. This support is included within the AWS SDK for Java. These transactional APIs provide developers atomic, consistent, isolated, and durable (ACID) operations to help ensure data correctness.
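
DynamoDBMapper lives in the AWS SDK for Java; for a feel of the underlying transactional API, here is a comparable sketch using Python’s boto3, with hypothetical inventory and orders tables. Both writes commit or neither does.

import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.transact_write_items(
    TransactItems=[
        {
            "Update": {
                "TableName": "inventory",
                "Key": {"sku": {"S": "widget-1"}},
                "UpdateExpression": "SET stock = stock - :one",
                "ConditionExpression": "stock >= :one",  # prevent overselling
                "ExpressionAttributeValues": {":one": {"N": "1"}},
            }
        },
        {
            "Put": {
                "TableName": "orders",
                "Item": {"orderId": {"S": "o-1001"}, "sku": {"S": "widget-1"}},
            }
        },
    ]
)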

Amazon DynamoDB now applies adaptive capacity in real time in response to changing application traffic patterns, which helps you maintain uninterrupted performance indefinitely, even for imbalanced workloads.

AWS Training and Certification has launched Amazon DynamoDB: Building NoSQL Database–Driven Applications, a new self-paced, digital course available exclusively on edX.

Amazon Aurora

Amazon Aurora Serverless MySQL 5.6 can now be accessed using the built-in Data API, enabling you to access Aurora Serverless with web services-based applications, including AWS Lambda, AWS AppSync, and AWS Cloud9. For more, check out this post.
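
The Data API lets those services talk to Aurora Serverless over HTTPS rather than managing database connections. A minimal sketch; the cluster ARN, secret ARN, and schema are hypothetical.

import boto3

rds_data = boto3.client("rds-data")

response = rds_data.execute_statement(
    resourceArn="arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-cluster",
    secretArn="arn:aws:secretsmanager:us-east-1:123456789012:secret:my-db-creds",
    database="mydb",
    sql="SELECT id, name FROM customers LIMIT 10",
)
print(response["records"])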

Sharing snapshots of Aurora Serverless DB clusters with other AWS accounts or publicly is now possible. We are also giving you the ability to copy Aurora Serverless DB cluster snapshots across AWS regions.

You can now set the minimum capacity of your Aurora Serverless DB clusters to 1 Aurora Capacity Unit (ACU). With Aurora Serverless, you specify the minimum and maximum ACUs for your Aurora Serverless DB cluster instead of provisioning and managing database instances. Each ACU is a combination of processing and memory capacity. By setting the minimum capacity to 1 ACU, you can keep your Aurora Serverless DB cluster running at a lower cost.

AWS Serverless Application Repository

The AWS Serverless Application Repository is now available in 17 regions with the addition of the AWS GovCloud (US-West) region.

Region support includes Asia Pacific (Mumbai, Singapore, Sydney, Tokyo), Canada (Central), EU (Frankfurt, Ireland, London, Paris, Stockholm), South America (São Paulo), US West (N. California, Oregon), and US East (N. Virginia, Ohio).

Amazon Cognito

Amazon Cognito has launched a new API – AdminSetUserPassword – for the Cognito User Pool service that provides a way for administrators to set temporary or permanent passwords for their end users. This functionality is available for end users even when their verified phone or email are unavailable.
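
A minimal sketch of the new API via boto3; the user pool ID, username, and password are hypothetical.

import boto3

cognito = boto3.client("cognito-idp")

cognito.admin_set_user_password(
    UserPoolId="us-east-1_EXAMPLE",
    Username="jane.doe",
    Password="CorrectHorseBatteryStaple!1",
    Permanent=True,  # False would set a temporary password instead
)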

Serverless Posts

April

May

June

Events

Events this quarter

Senior Developer Advocates for AWS Serverless spoke at several conferences this quarter. Here are some recordings worth watching!

Tech Talks

We hold several AWS Online Tech Talks covering serverless tech talks throughout the year, so look out for them in the Serverless section of the AWS Online Tech Talks page. Here are the ones from Q2.

Twitch

Twitch Series

In April, we started a 13-week deep dive into building APIs on AWS as part of our Twitch Build On series. The Building Happy Little APIs series covers the common and not-so-common use cases for APIs on AWS and the features available to customers as they look to build secure, scalable, efficient, and flexible APIs.

There are also a number of other helpful video series covering Serverless available on the AWS Twitch Channel.

Build with Serverless on Twitch

Serverless expert and AWS Specialist Solutions Architect Heitor Lessa has been hosting a weekly Twitch series since April. Join him and others as they build an end-to-end airline booking solution using serverless. The final episode airs on Wednesday, August 7th at 8:00am PT.

Here’s a recap of the last quarter:

AWS re:Invent

AWS re:Invent 2019 is around the corner! From December 2 – 6 in Las Vegas, Nevada, join tens of thousands of AWS customers to learn, share ideas, and see exciting keynote announcements. Be sure to take a look at the growing catalog of serverless sessions this year.

Register for AWS re:Invent now!

What did we do at AWS re:Invent 2018? Check out our recap here: AWS re:Invent 2018 Recap at the San Francisco Loft

AWS Serverless Heroes

We urge you to explore the efforts of our AWS Serverless Heroes Community. This is a worldwide network of AWS Serverless experts with a diverse background of experience. For example, check out this post from last month where Marcia Villalba demonstrates how to set up unit tests for serverless applications.

Still looking for more?

The Serverless landing page has lots of information. The Lambda resources page contains case studies, webinars, whitepapers, customer stories, reference architectures, and even more Getting Started tutorials.

from AWS Compute Blog

Why AWS is the best place for your Windows workloads, and how Microsoft is changing their licensing to try to awkwardly force you into Azure

This post is contributed by Sandy Carter, Vice President at AWS. It is also located on LinkedIn

Many companies today are considering how to migrate to the cloud to take advantage of the agility and innovation that the cloud brings. Having the right to choose the best provider for your business is critical.

AWS is the best cloud for running your Windows workloads and our experience running Windows applications has earned our customers’ trust. It’s been more than 11 years since AWS first made it possible for customers to run their Windows workloads on AWS—longer than Azure has even been around, and according to a report by IDC, we host nearly two times as many Windows Server instances in the cloud as Microsoft. And more and more enterprises are entrusting their Windows workloads to AWS because of its greater reliability, higher performance, and lower cost, with the number of AWS enterprise customers using AWS for Windows Server growing more than 400% in the past three years.

In fact, we are seeing a trend of customers moving from Azure to AWS. eMarketer started their digital transformation with Azure, but found performance challenges and higher costs that led them to migrate all of their workloads over to AWS. Why did they migrate? They found a better experience, stronger support, higher availability, and better performance, with 4x faster launch times and 35% lower costs compared to Azure. Ancestry, a leader in consumer genomics, went all-in on development in the cloud, moving 10 PB of data and 400 Windows-based applications in less than 9 months. They also modernized to Linux with .NET Core and leveraged advanced technologies including serverless and containers. With results like that, you can see why organizations like Sysco, Edwards Life Sciences, Expedia, and NextGen Healthcare have chosen AWS to upgrade, migrate, and modernize their Windows workloads.

If you are interested in seeing your cost savings compared with running on-premises or on Azure, send us an email at [email protected] or visit why AWS is the best cloud for Windows.

from AWS Compute Blog

Global Retailer Standardizes Hybrid Cloud with DevOps Dashboard

From luxury to grocery, the retail war continues. While some would say we’re witnessing a retail apocalypse, others contend it’s really the death of the boring middle (HT Steve Dennis). With a vision to innovate and extend its leadership in this competitive environment, our newest customer, a top 50 global retailer, approached the DevOps consulting team at Flux7. Today’s blog is the story of how Flux7 DevOps consultants teamed with the retailer to create a platform for scalable innovation.

Read More: Download the full case study 

Growing geographically and looking to support its thousands of locations with innovative new solutions, this retailer has embraced digital transformation, starting with an AWS migration. However, doing so required the move of hundreds of applications from different on-premises platforms, a task that demanded the retailer’s IT teams consistently ensure that operational, security and regulatory standards were maintained.

To standardize and accelerate its development efforts on AWS, the joint client-Flux7 team identified a solution: a DevOps Dashboard that would automatically apply the company’s various standards as cloud infrastructure is deployed.

The DevOps Dashboard

The DevOps Dashboard standardizes infrastructure creation and streamlines the process of developing applications on AWS. Developers can quickly start and/or continue development of their applications on AWS using the dashboard: they simply enter parameters into the UI and, behind the scenes, the dashboard triggers pipelines to deploy infrastructure, connects to a repository, deploys code and sets up the environment.
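
To make the trigger step concrete, here is a hypothetical sketch of what a dashboard backend might do when the form is submitted: kick off a parameterized Jenkins pipeline through Jenkins’ buildWithParameters REST endpoint. The URL, job name, credentials, and parameters are illustrative, not the retailer’s actual setup.

import requests

JENKINS_URL = "https://jenkins.example.com"

def trigger_infra_pipeline(app_name: str, platform: str, environment: str) -> None:
    """Kick off the pipeline that provisions standardized infrastructure."""
    response = requests.post(
        f"{JENKINS_URL}/job/provision-infrastructure/buildWithParameters",
        auth=("dashboard-svc", "api-token"),  # service account credentials
        params={
            "APP_NAME": app_name,
            "PLATFORM": platform,        # e.g. "ecs", "eks", or "serverless"
            "ENVIRONMENT": environment,  # "dev", "qa", or "production"
        },
        timeout=10,
    )
    response.raise_for_status()

trigger_infra_pipeline("inventory-service", "eks", "dev")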

The DevOps Dashboard also features:

  • Infrastructure provisioning defined and implemented as code
  • The ability to create ECS, EKS, and Serverless infrastructure in AWS
  • Jenkins automation to provision infrastructure and deploy sample apps to new and/or existing repositories
  • The ability to create a repository or use an existing one and implement a webhook for continuous deployment
  • A standard repository structure
  • The ability to automatically update/push the code of new sample applications to the appropriate environment (Dev/QA/Production) once placed in the repository

DevOps Dashboard Benefits

Using the DevOps Dashboard allows developers to work on the code repository while their code or application is automatically deployed to the selected environment. This allows the engineer to focus only on editing applications rather than worrying about infrastructure standard compliance. The result of this advanced DevOps automation is that developers are able to create higher quality code faster, which means that they can quickly experiment and get winning ideas to market faster.

In addition, the DevOps Dashboard increases the retailer’s development agility while increasing its consistency and standardization of cloud builds across its hybrid cloud environment. Greater standardization has resulted in less risk, greater security, and compliance as code. 

For further reading on how Flux7 helps retailers establish an agile IT platform that harnesses the power of automation to grow IT productivity: 

For ongoing case studies, DevOps news and analysis, subscribe to our blog:

Subscribe to the Flux7 Blog

from Flux7 DevOps Blog

DevOps on AWS Radio: Automating AWS IoT (Episode 25)

In this episode, we chat with Michael Neil, a DevOps Automation Engineer here at Mphasis Stelligent, about the AWS IoT platform. AWS IoT consists of many products and services, so it can be difficult to know where to start when piecing together each of the offerings to create an IoT solution. Paul Duvall and Michael Neil will give you an overview of the AWS IoT platform, guide you in how to get started with AWS IoT, teach you how to automate it, and walk through a use case using AWS IoT. Listen here:

DevOps on AWS News

Episode Topics

  1. Michael Neil Intro & Background 
  2. Overview of AWS IoT and AWS IoT Services
    1. Device software
      1. IoT Greengrass, IoT Device SDK
    2. Control services
      1. AWS IoT Core, Device Defender, AWS IoT Things Graph
    3. Data services
      1. AWS IoT Analytics, AWS IoT Events
  3. Continuous Delivery with AWS IoT
    1. How is CD different when it comes to embedded devices and AWS IoT?
    2. How do you provision devices at the edge, MCUExpresso IDE?
    3. How to do CD w/ IoT via AWS CodePipeline and AWS CodeBuild.
    4. How to do just-in-time provisioning and grant devices the right permissions.
  4. Bootstrapping Automation
    1. Bootstrapping process
    2. How we started automating via the SDK
  5. Automating and provisioning AWS IoT Services (see the provisioning sketch after this list)
    1. IoT Greengrass
    2. IoT Things
  6.  Integrations with other AWS Services 
    1. Amazon Simple Storage Service (Amazon S3)
    2. AWS Lambda
    3. Amazon Simple Queue Service (SQS)
    4. Amazon DynamoDB
    5. Amazon Kinesis Data Firehose
    6. Amazon QuickSight
  7. Amazon FreeRTOS
  8. Automobile Assembly Line Use Case 
    1. How might they employ AWS IoT?
    2. How to do Continuous Delivery?
    3. Machine Learning
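
In the spirit of the provisioning topics above, here is a minimal sketch of registering a device programmatically; the thing and policy names are hypothetical.

import boto3

iot = boto3.client("iot")

# Create the thing and a certificate, then bind them together.
iot.create_thing(thingName="assembly-line-sensor-01")
cert = iot.create_keys_and_certificate(setAsActive=True)
iot.attach_policy(policyName="sensor-policy", target=cert["certificateArn"])
iot.attach_thing_principal(
    thingName="assembly-line-sensor-01",
    principal=cert["certificateArn"],
)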

Additional Resources

About DevOps on AWS Radio

On DevOps on AWS Radio, we cover topics around applying DevOps principles and practices such as Continuous Delivery on the Amazon Web Services cloud. This is what we do at Stelligent for our customers. We’ll bring listeners in and speak with engineers who’ve recently published on our blog and we’ll also be reaching out to the wider DevOps on AWS community to get their thoughts and insights.

The overall vision of this podcast is to describe how listeners can create a one-click (or “no click”) implementation of their software systems and infrastructure in the Amazon Web Services cloud so that teams can deliver software to users whenever there’s a business need to do so. The podcast will delve into the cultural, process, tooling, and organizational changes that can make this possible including:

  • Automation of
    • Networks (e.g. VPC)
    • Compute (EC2, Containers, Serverless, etc.)
    • Storage (e.g. S3, EBS, etc.)
    • Database and Data (RDS, DynamoDB, etc.)
  • Organizational and Team Structures and Practices
  • Team and Organization Communication and Collaboration
  • Cultural Indicators
  • Version control systems and processes
  • Deployment Pipelines
    • Orchestration of software delivery workflows
    • Execution of these workflows
  • Application/service Architectures – e.g. Microservices
  • Automation of Build and deployment processes
  • Automation of testing and other verification approaches, tools and systems
  • Automation of security practices and approaches
  • Continuous Feedback systems
  • Many other Topics…

The post DevOps on AWS Radio: Automating AWS IoT (Episode 25) appeared first on Stelligent.

from Blog – Stelligent

IT Modernization and DevOps News Week in Review

At IBM’s Investor Briefing 2019, CEO Ginni Rometty addressed questions about the future of Red Hat now that the acquisition has closed. Framing what she calls Chapter Two of the cloud, she noted that Red Hat brings the vehicle: “Eighty percent is still to be moved into a hybrid cloud environment,” she said, noting further that “hybrid cloud is the destination because you can modularize apps.” The strategy moving forward is to scale Red Hat, selling more IBM services tied to Red Hat while optimizing the IBM portfolio for Red Hat OpenShift, in a move that Rometty called “middleware everywhere.”

To stay up-to-date on DevOps security, CI/CD and IT Modernization, subscribe to our blog here:

Subscribe to the Flux7 Blog

DevOps News

  • HashiCorp announced the public availability of HashiCorp Vault 1.2. According to the company, the new features are focused on supporting new architectures for automated credential and cryptographic key management at a global, highly-distributed scale. Specifically, the release includes the KMIP Server Secret Engine (Vault Enterprise only), which allows Vault to serve as a KMIP server for automating secrets management and encryption-as-a-service workflows with enterprise systems; integrated storage; identity tokens; and database static credential rotation.
  • CodeStream is now available for deployment through the Slack app store. With CodeStream, developers can more easily use Slack to discuss code; instead of cutting and pasting, developers can now share code blocks in context right from their IDE. Replies can be made in Slack or CodeStream, and in either case, they become part of the thread that is permanently linked to the code.
  • Armory announced it has raised $28M in its pursuit of additional development of Spinnaker, the firm’s open-source, multi-cloud continuous delivery platform used by developers to release quality software with greater speed and efficiency.
  • Our DevOps consulting team enjoyed this article by Mike Cohn on, Overcoming Four Common Objections to the Daily Scrum. In it, he discusses best practices for well-run daily Scrums.

AWS News

  • Operators can now use AWS CloudFormation templates to specify AWS IoT Events resources. According to the firm, this improvement enables you to use CloudFormation to deploy AWS IoT Events resources — along with the rest of your AWS infrastructure — in a secure, efficient, and repeatable way. The new capability is available now in the regions where AWS IoT Events is available.
  • Amazon has added a new Predictions category to its Amplify Framework, allowing operators to easily add and configure AI/ML use cases for their web and/or mobile applications.
  • In a move toward greater transparency, Amazon has launched the AWS CloudFormation Coverage Roadmap. In it, AWS shares its priorities for CloudFormation in four areas: features that have shipped and are production-ready; features that are on the near horizon and that you should expect to see within the next few months; longer-term features that are actively being worked on; and features being researched.
  • AWS introduced the availability of the Middle East Region, the first AWS Region in the Middle East; it is comprised of three Availability Zones.
  • Our AWS Consulting team enjoyed this AWS blog, Analyzing AWS WAF logs with Amazon ES, Amazon Athena, and Amazon QuickSight, by Aaron Franco in which he discusses how to aggregate AWS WAF logs into a central data lake repository. Check out our resource page for additional reading on AWS WAF.

Flux7 News

  • We continued our blog series about becoming an Agile Enterprise with the Flux7 case study of our OKR (Objectives and Key Results) journey, sharing lessons we learned along the way and the greater role of OKRs in an Agile Enterprise. In case you missed the first article in the series, on choosing a flatarchy organizational structure, you can read it here.
  • For CIOs and technology leaders looking to lead the transition to an Agile Enterprise, Flux7 has published a new paper on How CIOs Can Prepare an IT Platform for the Agile Enterprise. Download it today to learn how a technology platform that supports agility with IT automation and DevOps best practices can be a key lever to helping IT engage with and improve the business.

Download the Paper Today

Written by Flux7 Labs

Flux7 is the only Sherpa on the DevOps journey that assesses, designs, and teaches while implementing a holistic solution for its enterprise customers, thus giving its clients the skills needed to manage and expand on the technology moving forward. Not a reseller or an MSP, Flux7 recommendations are 100% focused on customer requirements and creating the most efficient infrastructure possible that automates operations, streamlines and enhances development, and supports specific business goals.

from Flux7 DevOps Blog

The Agile Enterprise: A Flux7 OKR Case Study



The Agile Enterprise is becoming the way successful companies operate and at Flux7 we like to lead by example. As a result, we have embraced many Agile practices across our business — from OKRs to a flatarchy (for additional background, read our blog, Flatarchies and the Agile Enterprise) — and plan to share in a short blog series how we are implementing these agile best practices, lessons we’ve learned along the way and the impacts they’ve had on our business. In today’s blog, we start by taking a look at our OKR (Objectives and Key Results) story and the greater role of OKRs in an Agile Enterprise.

Created by Intel and made popular by organizations like Amazon, Google, Microsoft, and Slack, OKRs are a goal-setting management style that is gaining traction. The goal of OKRs is to align individuals, teams and the organization as a whole around measurable results that have everyone rowing in the same direction.

Our OKR Timeline

Excited to begin, we started experimenting with OKRs in early Q4 of 2018, and our first serious attempt came as we built them for Q1 of 2019. After trying it once, we saw the shortcomings of what we had done (keep reading as we discuss lessons learned from that exercise below) and brought in an expert who could help us learn and improve. We found Dan Montgomery, founder of Agile Strategies and author of Start Less, Finish More, to be exactly what we were looking for.

Dan helped us understand both the theory behind OKRs and gave us practical how-to steps to implement OKRs across Flux7. As an organization that already uses Agile methodologies in our consulting practice, Dan showed us how we can readily apply these principles to the OKR process, growing our corporate strategic agility. With Dan’s guidance, we began implementing OKRs across the organization.

We started with an initial training session on OKRs at Flux7’s All Hands Meeting, followed by an in-depth training and project orientation session for company leads. This training was bolstered with a session with our co-founders to assess company strategy, goals and performance as well as prepare for the development of company OKRs with the leads.

With this foundation in place, we began drafting our company OKRs. While our leads helped pave the way, Dan was instrumental in reviewing drafts and providing feedback. With company OKRs in place, we next turned to team OKRs. Over the course of two weeks, our leads worked with team members to draft team OKRs based on corporate OKRs. We finalized OKRs with a workshop where we made sure everyone was in alignment for the upcoming quarter and our leads committed to integrating OKRs into weekly action planning and accomplishments moving forward.

OKR Lessons Learned

While we tried our hand at developing OKRs before we engaged with Dan, we learned a few important things through this first exercise, which were underscored by his expertise:

  1. Less can be more.
    Regardless of the team or role, we found that people erred on the side of having more OKRs than fewer. We quickly realized that Dan’s “Start Less, Finish More” mantra was spot on and that less is indeed more as fewer OKRs mean we all have a laser focus on achieving key organizational goals, minimizing distractions and forcing a real prioritization that generates greater output.

    We have a rule of thumb that no team shall have more than two objectives, and we would recommend that others have no more than three OKRs per group. In this vein, we would also recommend no more than three to five key results per objective. For example, if People Ops has an objective to grow employee success, that might be measured through employee engagement, the percent of employees who take advantage of professional development, and the percent of employees taking part in the mentorship program.

  2. Cross-dependencies must be flagged.
    While our teams quickly grokked the idea of how OKRs roll up in support of top-level business goals, we could have done a better job initially of identifying OKR cross-dependencies between teams and individuals. With one of the goals of OKRs being to improve employee engagement and teamwork, we quickly saw how imperative it is to flag any OKRs that bridge workgroups and/or individual employees. By ensuring that individuals are working in tandem and not duplicating efforts, we are able to maximize productivity.
  3. Transparency remains vital.
    Transparency has been a core value since we opened our doors in 2013, and the OKR process has served to highlight its importance in all we do. We are as transparent about OKRs as we are about everything else at Flux7; since moving to an OKR process, we have taken several steps to ensure transparency:
  • We have integrated a team-by-team discussion of OKRs into each of our monthly meetings, rotating the team members who present progress on OKRs.
  • Like everything else at Flux7, we encourage people to ask questions and spur participation by everyone.
  • We have created an OKR Trello board where team members can see progress to date on our quarterly OKRs.
  4. Translate quarterly OKRs to weekly actions.
    It is really important to map OKRs to weekly actions as they are stepping stones to reaching the broader goal. While we still have room for improvement here, we recognize that it’s important to assess our progress to goal on a weekly basis, as it allows us to more accurately track overall success and institute a course correction (when/if needed) in order to reach our ultimate OKR goal.

    Two things worth noting here: First, mapping weekly actions to goals was an easier task for some groups than others, as the nature of some groups’ work is naturally longer-horizon. Second, we highly recommend setting quarterly OKRs; this cadence allows us to be aggressive and in tune with the fast-changing pace of the market while not so fast that we’re constantly re-working OKRs.

  5. Apply learning for constant improvement.
    Another core value at Flux7 is applying learning for constant improvement. After our first quarterly OKR setting, we took a hard look at what went well and what could be improved, and we applied that learning to our second OKR setting. They say that the first pancake is always the flattest, and this proved true with our OKR process: the second set of OKRs moved much more seamlessly, thanks to insight and guidance from Dan on what we were doing well and where we could improve.

OKRs and the Agile Enterprise

The Agile Enterprise is defined by its ability to create business value as it reacts to swift market changes. OKRs support this goal by replacing traditional goal-setting (a yearly top-down exercise) with quarterly bottom-up objectives and key results. We’ve seen the benefits first-hand:

  • As employees play a key role in developing the objectives and results that they are personally responsible for, they take ownership and accountability. They are invested in achieving results.
  • With ownership comes empowerment. Our employees know we trust them to create their own OKRs, take the reins and drive the results. As Henrik Kniberg points out here, what we seek — and achieve — is aligned autonomy. The business needs alignment, which is what we get when everyone is bought in on the ultimate objectives. And teams need autonomy, which is what we get when people are empowered. The result: we can all row in the same direction very efficiently and effectively.
  • Last, with an agile-focused culture and a handful of objectives, we are all able to see clear progress toward our goals. As everyone feels like they are a part of the company’s success, employee satisfaction grows, creating a virtuous cycle of greater ownership, empowerment and, ultimately, business value to customers, partners and shareholders.

Transition is hard; it is chaotic, and it doesn’t have easy answers. Having a guide who knows how to navigate these issues is important; just as we learned from working with Dan, our customers learn from working with us that having a partner who understands how to navigate a path to the unique solutions that will work best for your enterprise is invaluable.

The Agile Enterprise extends beyond agile development or lean product management; it is a mindset that must permeate corporate strategy as well. OKRs can play an integral role in bringing agility to corporate strategy, in the process growing employee engagement, removing silos and accelerating responsiveness to quickly changing market forces. Make sure you don’t miss the series on becoming an Agile Enterprise. Subscribe to our DevOps Blog here:

Subscribe to the Flux7 Blog

from Flux7 DevOps Blog