Category: DevOps

Digital Transformation Boosts Manufacturing Agility, Competitiveness


Few industries face the level of global competition that manufacturing does. To compete and realize the promise of Industry 4.0, manufacturers are increasingly embracing digital transformation. Evolving the business, from the manufacturing floor to the sales office, is a holistic effort that calls for a smart IT roadmap and effective execution. In today's blog, we're taking a look at the digital transformation journeys of several manufacturers and how they have benefited their productivity, efficiency and, ultimately, their strategic market position.

Drive Scientific Innovation with DevOps Automation
While the current shortage of digital talent in manufacturing is "very high," according to research by The Manufacturing Institute, DevOps automation increases employee efficiency by creating a platform that lets researchers, engineers, and scientists focus on their core work. It was with this in mind that the Infrastructure Engineering team at the Toyota Research Institute (TRI) set out to support its researchers and engineers by making it easier for them to harness the power of the cloud through automation.

Working with the Flux7 DevOps consulting team, they implemented DevOps methods, processes, and automation that reduce tactical, manual IT operations work. Researchers and engineers use a self-service portal to quickly and easily provision the AWS assets they need to test new ideas, making them more productive because they no longer wait for the infrastructure team to spin up resources.

Having a secure cloud sandbox environment enables them to try new ideas, fail fast, destroy the sandbox if needed, and start over, letting researchers innovate at velocity and at scale. According to Mike Garrison, technical lead for Infrastructure Engineering at TRI: "Modern cloud infrastructure and DevOps automation are empowering us to quickly remove any barriers that get in the way, allowing the team to do their best work, advance research quickly, push boundaries and transform the industry."

Similarly, Flux7 worked with a large US manufacturer to adopt elastic high-performance computing (HPC) for the scientific simulations behind many aspects of its new machinery designs. These HPC simulations were hosted in the company's traditional data center, where scaling to meet dynamic demand took extensive planning and a great deal of capital expense. Moving its HPC simulations to the cloud means the company can innovate for the future faster, scaling with demand while greatly reducing internal resource overhead and costs.

IoT for Industry 4.0

Linking IoT devices with the cloud and analytics infrastructure can unlock critical real-time data that enables preventive maintenance and extends system productivity. This kind of data can help staff proactively address issues before they occur thus creating greater system uptime, overall equipment effectiveness, and a greater ROI for capital equipment. For a large equipment manufacturer looking to gather data from its geographically dispersed machines, Flux7 helped set up an AWS IoT infrastructure.

The two teams modernized and migrated several applications to the cloud, connecting them with a new AWS Data Lake. (AWS recently announced AWS Lake Formation to help with this process; check out our blog on it here.) The new system collects important data from the field and processes it to make predictions that help its customers' operations. The data also informs machine maintenance schedules, ensuring that machines are serviced appropriately and thus increasing uptime. Moreover, processes that previously took days were reduced to 15 minutes, freeing developer time for strategic work while creating a new revenue stream for the manufacturer.

Set the Foundation for the Agile Enterprise

While becoming an Agile Enterprise will help manufacturers realize the promise of Industry 4.0, digital transformation is a journey that requires a smart roadmap and solid execution. Flux7 partnered with a Fortune 500 manufacturer on its Agile Enterprise evolution. The company reached out to AWS Premier Consulting Partner Flux7 to help it embark on a digital transformation that would eventually work its way through the company's various departments, from enterprise architecture to application development and security, and business units, such as embedded systems and credit services.

The transformation started with a limited Amazon cloud migration and moved on to include:

  • IoT and an AWS Data Lake
  • EU data privacy regulatory compliance
  • Serverless monitoring and notification, with a goal of using advanced automation to alert operations and information security teams to any known issues surfacing in the account or violations of the corporate security standard
  • Advanced automation to simplify maintenance and improve security and compliance
  • Amazon VPC automation for faster onboarding

The outcome has been a complete agile adoption of Flux7’s Enterprise DevOps Framework for greater security, cost efficiencies, and reliability. Enabled by solutions that connect its equipment and customer communities, the digital transformation effectively supports the company’s ultimate goal to create an unrivaled experience for its customers and partners.

From smart production to smart logistics and even smart product design and smarter sales and marketing efforts, a technology-driven transformation will help manufacturers achieve greater fault-tolerance, productivity, and ultimately revenue.

For additional manufacturing use case stories:

Subscribe to the Flux7 Blog

from Flux7 DevOps Blog

Build A Best Practice AWS Data Lake Faster with AWS Lake Formation



The world’s first gigabyte hard drive was the size of a refrigerator — and that wasn’t all that long ago. Clearly, technology has evolved, and so have our data storage and analysis needs. With data serving a key role in helping companies unearth intelligence that can provide a competitive advantage, solutions that allow organizations to end data silos and help create actionable business outcomes from intelligent data analysis are gaining traction. 

According to the 2018 Big Data Trends and Challenges report by Dimensional Research, the share of firms with an average data lake size over 100 terabytes grew from 36% in 2017 to 44% in 2018. It's a trend that's sure to continue, especially as cloud providers like AWS deliver services such as the newly announced AWS Lake Formation that streamline the process of creating and managing a data lake. In today's blog, we take a look at the new AWS Lake Formation service and share our take on its features, benefits, and the things we'd like to see in the next version of the service.

What is AWS Lake Formation?

AWS Lake Formation is the newest service from AWS. It is designed to streamline the process of building a data lake in AWS, creating a full solution in just days. At a high level, AWS Lake Formation provides best practice templates and workflows for creating data lakes that are secure, compliant and operate effectively. The overall goal is to provide a solution that is well architected to identify, ingest, clean and transform data while enforcing appropriate security policies to enable firms to focus on gaining new insights, rather than building data lake infrastructure.

Before the release of AWS Lake Formation, organizations needed to take several steps to build their data lake. Not only was the process time-consuming, but several points in it proved difficult for the average operator. For example, users needed to set up their own Amazon S3 storage; deploy AWS Glue to prepare the data for analysis through the automated extract, transform and load (ETL) process; configure and enforce security policies; ensure compliance; and more. Each part of the process offered room for missteps, making the overall data lake setup challenging and, for many, a month-plus-long undertaking.

AWS Data Lake Benefits

AWS has solved many of these challenges with AWS Lake Formation, which offers three key areas of benefit plus one supporting feature that we think is neat.

  1. Templates – The new AWS Lake Formation provides templates for a number of tasks. We are most excited about the templates for AWS Glue, as this is an area where many organizations find they need to loop in AWS engineering for best-practice help. The Glue templates show that AWS really is listening to its customers and providing guidance where they need it most. In addition, our AWS consulting team was happy to see templates that simplify the import of data and templates for managing long-running cron jobs. These reusable templates will streamline each part of the data lake process.
  2. Cloud Security Solutions – Data is the lifeblood of an organization and for many companies, it is the foundation of their IP. As a result, sound security (and compliance) must be a key consideration for any data lake solution. AWS is definitely singing from that hymn book with AWS Lake Formation as they have created opportunities for security at the most granular of levels — not just securing the S3 bucket, but the data catalog as well. For example, at the data catalog level, you could specify which columns of data a Lambda function can read, or revoke a user’s permissions to a specific database. (AWS notes that row-level tagging will be in a future version of the solution.)
  3. Machine Learning Transformations – AWS provides algorithms for its customers to create their own machine learning solutions. AWS cites record de-duplication as a use case here, illustrating how ML can help clean and update data. However, we see this feature as being particularly interesting to firms in industries like pharmaceuticals where a company could, for example, use it to mine and predictively match chemical patterns to patients or in the oil and gas industry where ML can be applied to learn from field-based data points to maximize oil production.

Also neat, but not marquee-stealing, is the AWS Lake Formation feature that allows users to add metadata and tag data catalog objects. For developers, in particular, this is a nice-to-have feature as it will allow them to more easily search all this data. Separately, we also like that AWS Lake Formation users will only pay for the underlying services used and that there are no additional charges.  

Ready to Swim?

One feature we’d like to see in an upcoming release of Lake Formation is integration with directory services like Active Directory. This would help further streamline the process of controlling data access, ensuring permissions are revoked when, for example, an employee leaves the organization or changes workgroups.

Moreover, while AWS Lake Formation greatly streamlines the process of building a data lake, being able to create your own templates moving forward may still remain a challenge for some organizations. At Flux7, we teach organizations how to build, manage and maintain templates for this — and many other AWS solutions — and can help your team ensure your templates incorporate Well Architected best practice standards on an ongoing basis.

Ready to dive into your own AWS data lake solution? Check out our AWS Data Lake solution case study on how a healthcare provider addressed its rapid data expansion and data complexity with AWS and Flux7 DevOps consulting, enabling it to quickly analyze information and make important data connections. Impact your time to market, customer experience and market position today with our AWS database services.



IT Modernization and DevOps News Week in Review



Container security was top of mind this week as Kubernetes announced the results of its first security audit. The review looked at Kubernetes 1.13.4 and found 37 vulnerability issues, including five high-severity issues and 17 medium-severity issues. We are happy to report that fixes for these issues have already been deployed.

Container security was also top of mind for McAfee, which said this week it has acquired NanoSec, a California container security startup. Meanwhile, the Cloud Security Alliance introduced its Egregious Eleven, the most salient threats, risks and vulnerabilities in cloud environments identified in its Fourth Annual Top Threats survey. Two key themes that emerged this year are a maturing understanding of the cloud and respondents' desire to address security issues higher up the technology stack that result from senior management decisions. While you can check out the report yourself, the top concerns are: Data Breaches; Misconfiguration and Inadequate Change Control; Lack of Cloud Security Architecture and Strategy; and Insufficient Identity, Credential, Access and Key Management.

To stay up-to-date on DevOps security, CI/CD and IT Modernization, subscribe to our blog here:

Subscribe to the Flux7 Blog

DevOps News

  • This past week HashiCorp released an official Helm Chart for Vault. Operators can reduce the complexity of running Vault on Kubernetes with the new Helm Chart, as it provides a repeatable deployment process in less time. For example, HashiCorp reports that the Helm Chart allows operators to start a Vault cluster on Kubernetes in just minutes. The chart runs Vault directly on Kubernetes, so in addition to the native integrations provided by Vault itself, any other tool built for Kubernetes can choose to leverage Vault. Note that a Helm Chart for Vault Enterprise will be available in the future.
  • In response to feedback, GitHub is bringing CI/CD support to GitHub Actions. Available November 13, the new support will allow users to easily automate how they build, test, and deploy projects across platforms — Linux, macOS, and Windows — in containers or virtual machines, and across languages and frameworks such as Node.js, Python, Java, PHP, Ruby, C/C++, .NET, Android, and iOS. GitHub Actions is an API that orchestrates workflows, based on any event, while GitHub manages the execution, provides rich feedback and secures every step along the way. 
  • Jenkins monitoring got a boost this week as Instana announced the addition of Jenkins monitoring to its automatic Application Performance Management (APM) solution, part of its focus on adding performance management for systems in other steps of the application delivery process. According to Peter Abrams, the company's COO and co-founder, "A common theme amongst Instana customers is the need to deliver and deploy quality applications faster, and Jenkins is a critical component of that delivery process." The new capabilities include performance visibility of individual builds and deployments, and health monitoring of the Jenkins tool stack.
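The Vault Helm chart mentioned above is driven by Helm values. As a rough sketch only (the key names below are taken from the chart's server and UI configuration blocks; verify them against the current chart reference before use), a minimal values file might look like:

```yaml
# values.yaml for the Vault Helm chart (sketch; confirm key names
# against the chart's own documentation)
server:
  ha:
    enabled: true     # run an HA Vault cluster rather than a standalone server
    replicas: 3
ui:
  enabled: true       # expose the built-in Vault web UI
```

You would then install the chart with `helm install` pointing at this file, for example `helm install vault <chart> -f values.yaml` (release and chart names here are placeholders).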

AWS News 

  • The long-awaited AWS Lake Formation is now generally available. Introduced at re:Invent last fall, Lake Formation makes it easy to ingest, clean, catalog, transform, and secure data, making it available for analytics and machine learning. Operators work from a central console to manage their data lake and are able to configure the right access permissions and secure access to metadata in the Glue Data Catalog and data stored in S3 using a single set of granular data access policies defined in Lake Formation. AWS Lake Formation notably works with data already in S3, allowing operators to easily register their existing data with Lake Formation.
  • In related news, it was announced that Amazon Redshift Spectrum now supports column-level access control for data stored in Amazon S3 and managed by AWS Lake Formation. This column-level access control helps limit access to only specific columns of a table rather than allowing access to all columns of a table, a key part of data governance and security needs of many enterprises.
  • Our AWS Consulting team enjoyed these two AWS blogs. The first, Auto-populate instance details by integrating AWS Config with your ServiceNow CMDB, shows how to keep a CMDB accurate by integrating AWS Config with ServiceNow so that an AWS Config notification automatically creates a server record in the CMDB, and then walks through testing the setup.
  • Focused on security by design, we are always interested in how to securely share keys. Therefore, this blog, How to deploy CloudHSM to securely share your keys with your SaaS provider caught our attention. In it, Vinod Madabushi shares two options for deploying and managing a CloudHSM cluster to secure keys, while still allowing trusted third-party SaaS providers to securely access the HSM cluster to perform cryptographic operations.  
  • Amazon announced that operators can now use AWS PrivateLink in the AWS GovCloud (US-East) Region. Already available in several other regions, AWS PrivateLink allows operators to privately access services hosted on AWS without using public IPs and without requiring the traffic to traverse the internet.

Flux7 News

  • Read our latest AWS Case Study, the story of how Flux7 DevOps consultants teamed with a global retailer to create a platform for scalable innovation. To accelerate its cloud migration and standardize its development efforts, the joint client-Flux7 team identified a solution: a DevOps Dashboard that would automatically apply the company’s various standards as cloud infrastructure is deployed. 
  • For CIOs and technology leaders looking to lead the transition to an Agile Enterprise, Flux7 has published a new paper on How CIOs Can Prepare an IT Platform for the Agile Enterprise. Download it today to learn how a technology platform that supports agility with IT automation and DevOps best practices can be a key lever to helping IT engage with and improve the business. 

Download the Paper Today

Written by Flux7 Labs

Flux7 is the only Sherpa on the DevOps journey that assesses, designs, and teaches while implementing a holistic solution for its enterprise customers, thus giving its clients the skills needed to manage and expand on the technology moving forward. Not a reseller or an MSP, Flux7 recommendations are 100% focused on customer requirements and creating the most efficient infrastructure possible that automates operations, streamlines and enhances development, and supports specific business goals.


Global Retailer Standardizes Hybrid Cloud with DevOps Dashboard


From luxury to grocery, the retail war continues. While some would say we're witnessing a retail apocalypse, others contend it's really the death of the boring middle. (HT Steve Dennis) With a vision to innovate and extend its leadership in this competitive environment, our newest customer, a top-50 global retailer, approached the DevOps consulting team at Flux7. Today's blog is the story of how Flux7 DevOps consultants teamed with the retailer to create a platform for scalable innovation.

Read More: Download the full case study 

Growing geographically and looking to support its thousands of locations with innovative new solutions, this retailer has embraced digital transformation, starting with an AWS migration. Doing so, however, meant moving hundreds of applications from different on-premises platforms, a task that required the retailer's IT teams to consistently ensure that operational, security and regulatory standards were maintained.

To standardize and accelerate its development efforts on AWS, the joint client-Flux7 team identified a solution: a DevOps Dashboard that would automatically apply the company's various standards as cloud infrastructure is deployed.

The DevOps Dashboard

The DevOps Dashboard standardizes infrastructure creation and streamlines the process of developing applications on AWS. Developers can quickly start and/or continue development of their applications on AWS using the dashboard. Developers simply enter parameters into the UI and behind the scenes, the dashboard triggers pipelines to deploy infrastructure, connects to a repository, deploys code and sets up the environment. 

The DevOps Dashboard also features:

  • Infrastructure provisioning defined and implemented as code
  • The ability to create ECS, EKS, and serverless infrastructure in AWS
  • Jenkins automation to provision infrastructure and deploy sample apps to new and/or existing repositories
  • The ability to create a repository or use an existing one, and to implement a webhook for continuous deployment
  • A standard repository structure
  • The ability to automatically update/push the code of new sample applications to the appropriate environment (Dev/QA/Production) once placed in the repository

DevOps Dashboard Benefits

Using the DevOps Dashboard allows developers to work in the code repository while their code or application is automatically deployed to the selected environment. Engineers can focus on their applications rather than worrying about compliance with infrastructure standards. The result of this advanced DevOps automation is higher-quality code produced faster, which means teams can quickly experiment and get winning ideas to market sooner.

In addition, the DevOps Dashboard increases the retailer’s development agility while increasing its consistency and standardization of cloud builds across its hybrid cloud environment. Greater standardization has resulted in less risk, greater security, and compliance as code. 

For further reading on how Flux7 helps retailers establish an agile IT platform that harnesses the power of automation to grow IT productivity: 

For ongoing case studies, DevOps news and analysis, subscribe to our blog:

Subscribe to the Flux7 Blog


IT Modernization and DevOps News Week in Review


At IBM's Investor Briefing 2019, CEO Ginni Rometty addressed questions about the future of Red Hat now that the acquisition has closed. Framing what she calls Chapter Two of the cloud, she noted that Red Hat brings the vehicle: "Eighty percent is still to be moved into a hybrid cloud environment," she said, noting further that "hybrid cloud is the destination because you can modularize apps." The strategy moving forward is to scale Red Hat, selling more IBM services tied to Red Hat while optimizing the IBM portfolio for Red Hat OpenShift, a move Rometty called "middleware everywhere."


DevOps News

  • HashiCorp announced the public availability of HashiCorp Vault 1.2. According to the company, new features are focused on supporting new architectures for automated credential and cryptographic key management at a global, highly-distributed scale. Specifically, it includes KMIP Server Secret Engine (Vault Enterprise only) which allows Vault to serve as a KMIP Server for automating secrets management and encryption as a service workflows with enterprise systems; integrated storage; identity tokens; and database static credential rotation.
  • CodeStream is now available for deployment through the Slack app store. With CodeStream, developers can more easily use Slack to discuss code; instead of cutting and pasting, developers can now share code blocks in context right from their IDE. Replies can be made in Slack or CodeStream, and in either case, they become part of the thread that is permanently linked to the code.
  • Armory announced it has raised $28M in its pursuit of additional development of Spinnaker, the firm’s open-source, multi-cloud continuous delivery platform used by developers to release quality software with greater speed and efficiency.
  • Our DevOps consulting team enjoyed this article by Mike Cohn on, Overcoming Four Common Objections to the Daily Scrum. In it, he discusses best practices for well-run daily Scrums.

AWS News

  • Operators can now use AWS CloudFormation templates to specify AWS IoT Events resources. According to the firm, this improvement enables you to use CloudFormation to deploy AWS IoT Events resources—along with the rest of your AWS infrastructure—in a secure, efficient, and repeatable way. The new capability is available now where IoT Events are available.
  • Amazon has added a new Predictions category to its Amplify Framework, allowing operators to now easily add and configure AI/ML use cases to their web and/or mobile applications.
  • In a move toward greater transparency, Amazon has launched the AWS CloudFormation Coverage Roadmap. In it, AWS shares its priorities for CloudFormation in four areas: features that have shipped and are production-ready; features on the near horizon that you should expect to see within the next few months; longer-term features that are actively being worked on; and features being researched.
  • AWS introduced the availability of the Middle East Region, the first AWS Region in the Middle East; it is comprised of three Availability Zones.
  • Our AWS Consulting team enjoyed this AWS blog, Analyzing AWS WAF logs with Amazon ES, Amazon Athena, and Amazon QuickSight, by Aaron Franco in which he discusses how to aggregate AWS WAF logs into a central data lake repository. Check out our resource page for additional reading on AWS WAF.
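As a sketch of what the new IoT Events CloudFormation support looks like, here is a minimal template fragment declaring an IoT Events input (the resource properties are abbreviated, and the input name and JSON paths are hypothetical examples):

```yaml
# Sketch: declaring an AWS IoT Events input in a CloudFormation template.
# The input name and attribute paths below are hypothetical.
Resources:
  MotorTelemetryInput:
    Type: AWS::IoTEvents::Input
    Properties:
      InputName: motorTelemetry
      InputDefinition:
        Attributes:
          - JsonPath: motor.temperature   # fields detector models can reference
          - JsonPath: motor.pressure
```

Deploying this alongside the rest of your stack keeps IoT Events resources versioned and repeatable with your other infrastructure, as the announcement describes.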

Flux7 News

  • We continued our blog series on becoming an Agile Enterprise with a Flux7 case study of our OKR (Objectives and Key Results) journey, sharing lessons we learned along the way and the greater role of OKRs in an Agile Enterprise. In case you missed the first article in the series, on choosing a flatarchy organizational structure, you can read it here.
  • For CIOs and technology leaders looking to lead the transition to an Agile Enterprise, Flux7 has published a new paper on How CIOs Can Prepare an IT Platform for the Agile Enterprise. Download it today to learn how a technology platform that supports agility with IT automation and DevOps best practices can be a key lever to helping IT engage with and improve the business.




The Agile Enterprise: A Flux7 OKR Case Study

The Agile Enterprise: A Flux7 OKR Case Study


The Agile Enterprise is becoming the way successful companies operate and at Flux7 we like to lead by example. As a result, we have embraced many Agile practices across our business — from OKRs to a flatarchy (for additional background, read our blog, Flatarchies and the Agile Enterprise) — and plan to share in a short blog series how we are implementing these agile best practices, lessons we’ve learned along the way and the impacts they’ve had on our business. In today’s blog, we start by taking a look at our OKR (Objectives and Key Results) story and the greater role of OKRs in an Agile Enterprise.

Created at Intel and made popular by organizations like Amazon, Google, Microsoft, and Slack, OKR is a goal-setting management style that is gaining traction. The goal of OKRs is to align individuals, teams and the organization as a whole around measurable results that have everyone rowing in the same direction.

Our OKR Timeline

Excited to begin, we started experimenting with OKRs in early Q4 of 2018, and our first serious attempt came as we built them for Q1 of 2019. After trying it once, we saw the shortcomings of what we had done (keep reading for the lessons we learned from that exercise) and brought in an expert who could help us learn and improve. We found Dan Montgomery, founder of Agile Strategies and author of Start Less, Finish More, to be exactly what we were looking for.

Dan helped us understand the theory behind OKRs and gave us practical how-to steps for implementing them across Flux7. As an organization that already uses Agile methodologies in its consulting practice, we found we could readily apply those principles to the OKR process, growing our corporate strategic agility. With Dan's guidance, we began implementing OKRs across the organization.

We started with an initial training session on OKRs at Flux7’s All Hands Meeting, followed by an in-depth training and project orientation session for company leads. This training was bolstered with a session with our co-founders to assess company strategy, goals and performance as well as prepare for the development of company OKRs with the leads.

With this foundation in place, we began drafting our company OKRs. While our leads helped pave the way, Dan was instrumental in reviewing drafts and providing feedback. With company OKRs in place, we next turned to team OKRs. Over the course of two weeks, our leads worked with team members to draft team OKRs based on corporate OKRs. We finalized OKRs with a workshop where we made sure everyone was in alignment for the upcoming quarter and our leads committed to integrating OKRs into weekly action planning and accomplishments moving forward.

OKR Lessons Learned

While we tried our hand at developing OKRs before we engaged with Dan, we learned a few important things through this first exercise which were underscored by his expertise:

  1. Less can be more.
    Regardless of the team or role, we found that people erred on the side of having more OKRs rather than fewer. We quickly realized that Dan's "Start Less, Finish More" mantra was spot on: fewer OKRs mean a laser focus on achieving key organizational goals, minimizing distractions and forcing a real prioritization that generates greater output.

    We have a rule of thumb that no team shall have more than two objectives, and we would recommend that others have no more than three OKRs per group. In the same vein, we would recommend no more than three to five key results per objective. For example, if People Ops has an objective to grow employee success, it might be measured through employee engagement, the percentage of employees who take advantage of professional development, and the percentage of employees taking part in the mentorship program.

  2. Cross-dependencies must be flagged.
    While our teams quickly grokked how OKRs roll up in support of top-level business goals, we could have done a better job initially of identifying OKR cross-dependencies between teams and individuals. Since one of the goals of OKRs is to improve employee engagement and teamwork, we quickly saw how imperative it is to flag any OKRs that bridge workgroups and/or individual employees. By ensuring that individuals work in tandem and don't duplicate efforts, we are able to maximize productivity.
  3. Transparency remains vital.
    Transparency has been a core value since we opened our doors in 2013, and the OKR process has underscored its importance in all we do. We are as transparent about OKRs as we are about everything else at Flux7; since moving to an OKR process, we have taken several steps to ensure transparency:
  • Integrated a team-by-team discussion of OKRs into each of our monthly meetings, rotating which team members present progress on OKRs.
  • As with everything else at Flux7, we encourage people to ask questions, spurring participation by everyone.
  • We have created an OKR Trello board where team members can see progress to date on our quarterly OKRs.
  4. Translate quarterly OKRs to weekly actions.
    It is really important to map OKRs to weekly actions as they are stepping stones to reaching the broader goal. While we still have room for improvement here, we recognize that it’s important to assess our progress to goal on a weekly basis as it allows us to more accurately track overall success and institute a course correction (when/if needed) in order to reach our ultimate OKR goal.

    Two things worth noting here: First, mapping weekly actions to goals was an easier task for some groups than others, as the nature of work for some groups is naturally longer-horizon. Second, we highly recommend setting quarterly OKRs; this cadence allows us to be aggressive and in-tune with the fast-changing pace of the market while not so fast that we’re constantly re-working OKRs.

  5. Apply learning for constant improvement.
    Another core value at Flux7 is applying learning for constant improvement. After our first quarterly OKR setting, we took a hard look at what went well and what could be improved, and applied that learning to our second round of OKR setting. They say the first pancake always comes out lumpy, and this proved true with our OKR process: the second set of OKRs came together much more seamlessly, thanks to insight and guidance from Dan on what we were doing well and where we could improve.

    OKRs and the Agile Enterprise

    The Agile Enterprise is defined by its ability to create business value as it reacts to swift market changes. OKRs support this goal by replacing traditional goal-setting (a yearly top-down exercise) with quarterly bottom-up objectives and key results. We’ve seen the benefits first-hand:

    • As employees play a key role in developing the objectives and results that they are personally responsible for, they take ownership and accountability. They are invested in achieving results.
    • With ownership comes empowerment. Our employees know we trust them to create their own OKRs and take the reins and drive the results. As Henrik Kniberg points out here, what we seek — and achieve — is Aligned Autonomy. The business needs alignment, which is what we get when everyone is bought-in on the ultimate objectives. And teams need autonomy which is what we get when people are empowered. The result: we can all row in the same direction very efficiently and effectively.
    • Last, with an agile-focused culture and a handful of objectives, we are all able to see clear progress toward our goals. As everyone feels like they are a part of the company’s success, employee satisfaction grows which creates a virtuous cycle of greater ownership, empowerment and ultimately business value to customers, partners and shareholders.

    Transition is hard; it is chaotic, and it doesn’t have easy answers. Having a guide that knows how to navigate these issues is important; just as we learned from working with Dan, our customers learn from working with us that having a partner who understands how to navigate a path to those unique solutions that will work best for your enterprise is invaluable.

    The Agile Enterprise extends beyond agile development or lean product management; it is a mindset that must permeate corporate strategy as well. OKRs can play an integral role in bringing agility to corporate strategy, in the process growing employee engagement, removing silos and accelerating responsiveness to quickly changing market forces. Make sure you don’t miss the series on becoming an Agile Enterprise. Subscribe to our DevOps Blog here:

    Subscribe to the Flux7 Blog

    from Flux7 DevOps Blog

Evolution of Netflix Conductor: v2.0 and beyond

    By Anoop Panicker and Kishore Banala

    Conductor is a workflow orchestration engine developed and open-sourced by Netflix. If you’re new to Conductor, this earlier blogpost and the documentation should help you get started and acclimatized to Conductor.

    Netflix Conductor: A microservices orchestrator

In the two years since its inception, Conductor has seen wide adoption and is instrumental in running numerous core workflows at Netflix. Many Netflix Content and Studio Engineering services rely on Conductor for efficient processing of their business flows. The Netflix Media Database (NMDB) is one such example.

    In this blog, we would like to present the latest updates to Conductor, address some of the frequently asked questions and thank the community for their contributions.

    How we’re using Conductor at Netflix


Conductor is one of the most heavily used services within Content Engineering at Netflix. Of the multitude of modules that can be plugged into Conductor, as shown in the image below, we use the Jersey server module, Cassandra for persisting execution data, Dynomite for persisting metadata, DynoQueues as the queuing recipe built on top of Dynomite, Elasticsearch as the secondary datastore and indexer, and Netflix Spectator + Atlas for metrics. Our cluster ranges from 12 to 18 AWS EC2 m4.4xlarge instances, typically running at ~30% capacity.

    Components of Netflix Conductor
    * — Cassandra persistence module is a partial implementation.

    We do not maintain an internal fork of Conductor within Netflix. Instead, we use a wrapper that pulls in the latest version of Conductor and adds Netflix infrastructure components and libraries before deployment. This allows us to proactively push changes to the open source version while ensuring that the changes are fully functional and well-tested.


    As of writing this blog, Conductor orchestrates 600+ workflow definitions owned by 50+ teams across Netflix. While we’re not (yet) actively measuring the nth percentiles, our production workloads speak for Conductor’s performance. Below is a snapshot of our Kibana dashboard which shows the workflow execution metrics over a typical 7-day period.

Dashboard with typical Conductor usage at Netflix over a 7-day period.

    Use Cases

    Some of the use cases served by Conductor at Netflix can be categorized under:

    • Content Ingest and Delivery
    • Content Quality Control
    • Content Localization
    • Encodes and Deployments
    • IMF Deliveries
    • Marketing Tech
    • Studio Engineering

    What’s New

    gRPC Framework

    One of the key features in v2.0 was the introduction of the gRPC framework as an alternative/auxiliary to REST. This was contributed by our counterparts at GitHub, thereby strengthening the value of community contributions to Conductor.

    Cassandra Persistence Layer

To enable horizontal scaling of the datastore for large volumes of concurrent workflow executions (millions of workflows/day), Cassandra was chosen for its elastic scaling and ability to meet throughput demands.

    External Payload Storage

    External payload storage was implemented to prevent the usage of Conductor as a data persistence system and to reduce the pressure on its backend datastore.

    Dynamic Workflow Executions

For use cases that need to execute a large or arbitrary number of varying workflow definitions, or to run a one-time ad hoc workflow for testing or analytical purposes, registering definitions with the metadata store just to execute them once adds a lot of overhead. The ability to dynamically create and execute workflows removes this friction. This was another great addition that stemmed from our collaboration with GitHub.
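To make the idea concrete, here is a minimal sketch of an ad hoc start request that embeds an inline workflow definition instead of referencing a registered one. The field names follow the Conductor documentation, but treat the exact fields and the `/api/workflow` path as assumptions to verify against your server version:

```python
import json

def build_adhoc_start_request(name, tasks, workflow_input):
    """Start-workflow request embedding an inline (unregistered)
    workflow definition, so nothing touches the metadata store first."""
    return {
        "name": name,
        "input": workflow_input,
        # Inline definition in place of a registered name/version lookup.
        "workflowDef": {"name": name, "version": 1, "tasks": tasks},
    }

request = build_adhoc_start_request(
    "adhoc_encode_test",
    tasks=[{"name": "encode", "taskReferenceName": "encode_ref", "type": "SIMPLE"}],
    workflow_input={"fileLocation": "s3://bucket/video.mp4"},
)
payload = json.dumps(request)  # would be POSTed to /api/workflow
```

Once the one-off run finishes, nothing needs to be cleaned out of the metadata store, which is the friction this feature removes.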

    Workflow Status Listener

    Conductor can be configured to publish notifications to external systems or queues upon completion/termination of workflows. The workflow status listener provides hooks to connect to any notification system of your choice. The community has contributed an implementation that publishes a message on a dyno queue based on the status of the workflow. An event handler can be configured on these queues to trigger workflows or tasks to perform specific actions upon the terminal state of the workflow.

    Bulk Workflow Management

There has always been a need for bulk operations at the workflow level from an operability standpoint. When running at scale, bad downstream dependencies in the worker processes can cause task failures or bad task executions across many workflows at once, making it essential to perform workflow-level operations in bulk. Bulk APIs give operators macro-level control over the workflows executing within the system.
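Operationally, a bulk call usually means gathering the affected workflow IDs and submitting them in batches; a small sketch of the batching step (the exact bulk endpoint paths and any per-request size limits are in the Conductor API docs and are assumptions here):

```python
def chunk_workflow_ids(workflow_ids, batch_size=100):
    """Split a large set of workflow IDs into batches sized for bulk
    API calls (e.g. a bulk retry or terminate endpoint)."""
    return [workflow_ids[i:i + batch_size]
            for i in range(0, len(workflow_ids), batch_size)]

# 250 affected workflows -> batches of 100, 100 and 50.
batches = chunk_workflow_ids([f"wf-{n}" for n in range(250)], batch_size=100)
```

Each batch would then be sent as the body of one bulk request, so a single bad dependency does not require thousands of individual API calls to remediate.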

    Decoupling Elasticsearch from Persistence

    This inter-dependency was removed by moving the indexing layer into separate persistence modules, exposing a property (workflow.elasticsearch.instanceType) to choose the type of indexing engine. Further, the indexer and persistence layer have been decoupled by moving this orchestration from within the primary persistence layer to a service layer through the ExecutionDAOFacade.

    ES5/6 Support

Support for Elasticsearch versions 5 and 6 has been added as part of the major version upgrade to v2.x. This addition also provides the option to use the Elasticsearch RestClient instead of the Transport Client, which was enforced in the previous version. This opens the route to using a managed Elasticsearch cluster (a la AWS) as part of the Conductor deployment.

    Task Rate Limiting & Concurrent Execution Limits

Task rate limiting helps achieve bounded scheduling of tasks. The task definition parameter rateLimitFrequencyInSeconds sets the duration of the window, while rateLimitPerFrequency defines the number of tasks that can be scheduled within that window. concurrentExecLimit, on the other hand, limits scheduling independently of any time window: the total number of currently scheduled tasks at any given time stays under concurrentExecLimit. These parameters can be used in tandem to achieve the desired throttling and rate limiting.
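As a sketch, the three parameters sit side by side in the task definition that gets registered with the Conductor metadata API; the task name and values below are illustrative, not prescriptive:

```python
# Task definition using the rate-limiting fields described above.
task_def = {
    "name": "encode_video",
    "retryCount": 3,
    "timeoutSeconds": 300,
    "rateLimitFrequencyInSeconds": 60,  # duration of the scheduling window
    "rateLimitPerFrequency": 100,       # at most 100 tasks scheduled per 60s window
    "concurrentExecLimit": 25,          # at most 25 tasks in a scheduled state at any instant
}
```

With these values, bursts are smoothed to 100 schedules per minute, while the concurrent cap keeps no more than 25 of those in flight at once.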

    API Validations

Validation was one of the core features missing in Conductor 1.x. To improve usability and operability, we added validations, which in practice have greatly helped find bugs during creation of workflow and task definitions. Validations require the user to create and register their task definitions before registering workflow definitions that use those tasks. They also ensure that the workflow definition is well-formed, with correct wiring of inputs and outputs across the tasks within the workflow. Any anomalies found are reported to the user with a detailed error message describing the reason for the failure.

    Developer Labs, Logging and Metrics

We have been continually improving logging and metrics, and have revamped the documentation to reflect the latest state of Conductor. To provide a smooth onboarding experience, we have created developer labs, which guide the user through creating task and workflow definitions, managing a workflow lifecycle, configuring advanced workflows with eventing, etc., plus a brief introduction to the Conductor API, UI and other modules.

    New Task Types

    System tasks have proven to be very valuable in defining the Workflow structure and control flow. As such, Conductor 2.x has seen several new additions to System tasks, mostly contributed by the community:


The Lambda task executes ad hoc logic at workflow run time, using the Nashorn JavaScript evaluation engine. Instead of creating workers for simple evaluations, the Lambda task enables the user to do this inline using simple JavaScript expressions.
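A minimal sketch of such a task as it might appear in a workflow definition; `scriptExpression` is the field named in the Conductor docs, and `$.fileSize` refers to the task's input parameters (treat the exact names as assumptions to verify):

```python
# A LAMBDA task evaluating an inline JavaScript expression instead of a
# dedicated worker polling for work.
lambda_task = {
    "name": "validate_size",
    "taskReferenceName": "validate_size_ref",
    "type": "LAMBDA",
    "inputParameters": {
        "fileSize": "${workflow.input.fileSize}",
        "scriptExpression": (
            "if ($.fileSize > 0) { return {valid: true}; }"
            " else { return {valid: false}; }"
        ),
    },
}
```

The expression runs server-side, so trivial checks like this never need a worker deployment of their own.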


The Terminate task is useful when workflow logic should end the workflow with a given output. For example, if a decision task evaluates to false and we do not want to execute the remaining tasks in the workflow, then instead of having a DECISION task with a list of tasks in one case and an empty list in the other, a Terminate task inside the decision branch can end workflow execution right there.


The Exclusive Join task helps capture task output from a DECISION task’s flow. This is useful for wiring task inputs from the outputs of one of the cases within a decision flow. This data is only available at workflow execution time, and the ExclusiveJoin task can be used to collect the output from whichever task ran in any of the decision branches.
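The three new task types compose naturally; here is a sketch of a DECISION branch that short-circuits via TERMINATE, with an EXCLUSIVE_JOIN collecting the output of whichever branch ran. Field names follow the Conductor docs, but this is a fragment for illustration, not a complete, validated workflow definition:

```python
# Fragment of a workflow definition's "tasks" list.
tasks = [
    {
        "name": "check_approval",
        "taskReferenceName": "check_ref",
        "type": "DECISION",
        "caseValueParam": "approved",
        "decisionCases": {
            # If not approved, end the workflow here with a given output.
            "false": [{
                "name": "stop",
                "taskReferenceName": "stop_ref",
                "type": "TERMINATE",
                "inputParameters": {
                    "terminationStatus": "COMPLETED",
                    "workflowOutput": {"approved": False},
                },
            }],
            # Otherwise continue with the real work.
            "true": [{
                "name": "process",
                "taskReferenceName": "process_ref",
                "type": "SIMPLE",
            }],
        },
    },
    {
        # Collects output from whichever decision branch executed.
        "name": "collect",
        "taskReferenceName": "collect_ref",
        "type": "EXCLUSIVE_JOIN",
        "joinOn": ["process_ref", "stop_ref"],
    },
]
```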

For in-depth implementation details of the new additions, please refer to the documentation.

    What’s next

There are a lot of features and enhancements we would like to add to Conductor. The wish list below could be considered a long-term roadmap. It is by no means exhaustive, and we very much welcome ideas and contributions from the community. Some of these, listed in no particular order, are:

    Advanced Eventing with Event Aggregation and Distribution

    At the moment, event generation and processing is a very simple implementation. An event task can create only one message, and a task can wait for only one event.

    We envision an Event Aggregation and Distribution mechanism that would open up Conductor to a multitude of use-cases. A coarse idea is to allow a task to wait for multiple events, and to progress several tasks based on one event.

    UI Improvements

    While the current UI provides a neat way to visualize and track workflow executions, we would like to enhance this with features like:

    • Creating metadata objects from UI
    • Support for starting workflows
    • Visualize execution metrics
    • Admin dashboard to show outliers

    New Task types like Goto, Loop etc.

Conductor uses a Directed Acyclic Graph (DAG) structure to define a workflow. Goto and Loop over tasks are valid use cases, but they would deviate from the DAG structure. We would like to add support for these tasks without violating the existing workflow execution rules. This would help unlock several other use cases, like streaming data to tasks and others that require repeated execution of a set of tasks within a workflow.

    Support for reusable commonly used tasks like Email, DatabaseQuery etc.

    Similarly, we’ve seen the value of shared reusable tasks that does a specific thing. At Netflix internal deployment of Conductor, we’ve added tasks specific to services that users can leverage over recreating the tasks from scratch. For example, we provide a TitusTask which enables our users to launch a new Titus container as part of their workflow execution.

    We would like to extend this idea such that Conductor can offer a repository of commonly used tasks.

    Push based task scheduling interface

The current Conductor architecture is based on workers polling for the tasks they will execute. We would like to enhance the gRPC modules to leverage the bidirectional channel to push tasks to workers as they are scheduled, thus reducing network traffic, load on the server and redundant client calls.

    Validating Task inputKeys and outputKeys

    This is to provide type safety for tasks and define a parameterized interface for task definitions such that tasks are completely re-usable within Conductor once registered. This provides a contract allowing the user to browse through available task definitions to use as part of their workflow where the tasks could have been implemented by another team/user. This feature would also involve enhancing the UI to display this contract.

    Implementing MetadataDAO in Cassandra

As mentioned here, the Cassandra module provides a partial implementation that persists only workflow executions. A metadata persistence implementation is not available yet and is something we are looking to add soon.

    Pluggable Notifications on Task completion

    Similar to the Workflow status listener, we would like to provide extensible interfaces for notifications on task execution.

    Python client in Pypi

We have seen wide adoption of the Python client within the community. However, there is no official Python client on PyPI, and the existing client lacks some of the newer additions found in the Java client. We would like to achieve feature parity, publish a client from the Conductor GitHub repository, and automate the client release to PyPI.

    Removing Elasticsearch from critical path

    While Elasticsearch is greatly useful in Conductor, we would like to make this optional for users who do not have Elasticsearch set-up. This means removing Elasticsearch from the critical execution path of a workflow and using it as an opt-in layer.

    Pluggable authentication and authorization

    Conductor doesn’t support authentication and authorization for API or UI, and is something that we feel would add great value and is a frequent request in the community.

    Validations and Testing

Dry runs, i.e. the ability to evaluate workflow definitions without actually running them through worker processes and all the relevant setup, would make it much easier to test and debug execution paths.

    If you would like to be a part of the Conductor community and contribute to one of the Wishlist items or something that you think would provide a great value add, please read through this guide for instructions or feel free to start a conversation on our Gitter channel, which is Conductor’s user forum.

We also highly encourage you to polish, genericize and share with the community any customizations that you may have built on top of Conductor.

We really appreciate and are extremely proud of the community’s involvement; its members have made several important contributions to Conductor. We would like to take this further and make Conductor widely adopted, with strong community backing.

    Netflix Conductor is maintained by the Media Workflow Infrastructure team. If you like the challenges of building distributed systems and are interested in building the Netflix Content and Studio ecosystem at scale, connect with Charles Zhao to get the conversation started.

    Thanks to Alexandra Pau, Charles Zhao, Falguni Jhaveri, Konstantinos Christidis and Senthil Sayeebaba.

Evolution of Netflix Conductor: v2.0 and beyond was originally published in the Netflix TechBlog on Medium.

from Netflix TechBlog – Medium

    How CIOs Can Prepare an IT Platform for the Agile Enterprise


    Today’s marketplace is volatile. It is uncertain. It is complex and difficult to navigate. And to stay competitive, enterprises must react to change with unprecedented speed. As many of the external pressures on business today stem from changes happening in the digital world, IT has naturally become one of the first areas to adopt change with an aim of helping the business become an Agile Enterprise.

    To help CIOs embrace this as an opportunity to be a guiding force for the Agile Enterprise we have just published a new paper on how technology leaders can prepare the IT Platform to effectively serve as a foundation for the Agile Enterprise.

    Download the Paper Today

    While achieving an Agile Enterprise must be rooted in the business and focused on reaching corporate goals, a technology platform that supports agility with IT automation and DevOps best practices can be a key lever to helping IT engage with and improve the business. As a result, in this new paper, we discuss:

    • The tale of two digital transformations, examining what went well and lessons we can all learn and apply to our business.
    • The role of an Agile Culture, particularly within IT and how CIOs can set the right tone from the outset.
    • Five key areas of automation that CIOs should be sure to incorporate into the IT Platform to ensure agility, grow IT productivity, and deliver specific business outcomes.
    • How an Enterprise DevOps Framework can help give CIOs an IT platform that enables DevOps at Scale, facilitates enterprise agility and helps technology leaders deliver greater business value.

    For organizations who are looking to leapfrog the competition and create an Agile Enterprise capable of competing effectively today — and well into the future — CIOs can be the change agent that drives responsiveness, starting with an agile IT culture and flexible IT platform. As the pace of the market continues to accelerate led by digitalization, technology leaders have a distinct opportunity to embrace and lead the Agile Enterprise, driving greater business value and business results.

    Download the Paper Today

    For additional reading:

    Written by Flux7 Labs

    Flux7 is the only Sherpa on the DevOps journey that assesses, designs, and teaches while implementing a holistic solution for its enterprise customers, thus giving its clients the skills needed to manage and expand on the technology moving forward. Not a reseller or an MSP, Flux7 recommendations are 100% focused on customer requirements and creating the most efficient infrastructure possible that automates operations, streamlines and enhances development, and supports specific business goals.

    from Flux7 DevOps Blog

    IT Modernization and DevOps News Week in Review


Underscoring why DevOps security continues to be the leading concern that keeps CIOs up at night is the newest Data Breach report from IBM and the Ponemon Institute, which finds that the average cost of a data breach has grown 12% since 2014. However, companies with an incident response team and extensive incident response testing were able to blunt some of the impact of a data breach, reporting $1.23 million less in losses. Similarly, companies using encryption were able to reduce the total cost of a breach by $360,000.

    To stay up-to-date on DevOps security, CI/CD and IT Modernization, subscribe to our blog here:

    Subscribe to the Flux7 Blog

    In related news, Palo Alto Networks unveiled its Summer 2019 Unit 42 Cloud Threat Risk Report in which it found that “over the last 18 months, 65% of publicly disclosed cloud security incidents were due to misconfigurations, and 25% were due to account compromises.” In this time, cloud complexity with the growing adoption of Docker and Kubernetes has grown, opening the door to greater exposure.

    Last, our DevOps consulting team enjoyed this blog, Manufacturers’ Digital Transformation Will Fail Without Both IT And OT in which Forrester analyst Paul Miller discusses why manufacturers need to combine the best of both IT and OT to meet the needs of the business — and its customers.

    AWS News

    • Last week AWS announced the new AWS Chatbot, a new service that, according to the company, enables DevOps teams to receive AWS notifications and execute commands in Slack channels and Amazon Chime chat rooms with only minimal effort. While currently in beta, AWS Chatbot already supports Amazon CloudWatch, AWS Health, AWS Budgets, AWS Security Hub, Amazon GuardDuty and AWS CloudFormation.
    • Are PCI compliance and AWS security best practices important to your organization? If so, AWS wants you to know that it has expanded its PCI DSS certification scope by 79%, from 62 services to 111 services including 12 newly added services, Amazon AppStream 2.0, Amazon CloudWatch, Amazon CloudWatch Events, Amazon Managed Streaming for Apache Kafka (Amazon MSK), AWS Amplify Console, AWS Control Tower, AWS CodeDeploy, AWS CodePipeline, AWS Elemental MediaConvert, AWS Elemental MediaLive, AWS Organizations, and AWS SDK Metrics for Enterprise Support.

    Flux7 News

    • We are starting a new blog series about becoming an Agile Enterprise, starting with this week’s article on organizational structures. Many organizations embrace agile ways of working in an attempt to build faster, more customer-focused and resilient organizations. Read how we at Flux7 went about the process of choosing an organizational structure that would best support our Agile Enterprise.
    • We are honored that Forrester has named Flux7 among the companies it has included in its Now Tech: Application Modernization and Migration Services Q1 2019 report. In the report, Flux7 is named a cloud development and AWS cloud migration services specialist, serving markets in North America including software, finance, and life sciences. Download your complimentary copy today.

    Subscribe to the Flux7 Blog


    from Flux7 DevOps Blog

    Introducing AWS Chatbot: ChatOps for AWS


    DevOps teams widely use chat rooms as communications hubs where team members interact—both with one another and with the systems that they operate. Bots help facilitate these interactions, delivering important notifications and relaying commands from users back to systems. Many teams even prefer that operational events and notifications come through chat rooms where the entire team can see the notifications and discuss next steps.

    Today, AWS introduces a public beta of AWS Chatbot, a new service that enables DevOps teams to receive AWS notifications and execute commands in Slack channels and Amazon Chime chat rooms with only minimal effort. AWS fully manages the integration, and the service takes only a few minutes to configure.

    AWS Chatbot is in beta with support for receiving notifications from the following services:

    • Amazon CloudWatch
    • AWS Health
    • AWS Budgets
    • AWS Security Hub
    • Amazon GuardDuty
    • AWS CloudFormation

    For the up-to-date list of supported services, see the AWS Chatbot documentation.

    What our customers say

    Revcontent is a content discovery platform that helps advertisers drive highly engaged audiences through technology and partnerships with some of the world’s largest media brands. By using AWS Chatbot, Revcontent has avoided potential downtime.

    Our engineering teams have leveraged AWS Chatbot to enhance our system monitoring capabilities through integration with Amazon SNS and Amazon CloudWatch alarms. The initial setup was simple, and the return has been substantial! Slack functionality has enabled more efficient real-time notifications. For example, we avoided potential outages when AWS Chatbot alerted us of elevated error rates on a load balancer. We identified and resolved Amazon Redshift load aborts within minutes when AWS Chatbot notified our engineering teams of reduced network throughput on our cluster. — Christopher Ekeren, DevOps engineer, Revcontent


    In this post, I walk you through the configuration steps to set up a CloudWatch alarm with a Slack channel or Amazon Chime webhook using AWS Chatbot.

    AWS Chatbot uses Amazon SNS to integrate with other AWS services, as shown in the diagram. This process sets up a CloudWatch alarm to notify an SNS topic, which in turn activates AWS Chatbot to notify a chat room.

    Setting up AWS Chatbot for this example follows these steps:

    1. Create an SNS topic (optional).
    2. Create a CloudWatch alarm.
    3. Create an AWS Chatbot configuration.
    4. Complete the setup.
    5. Test the alarm.


    To follow along with this example, you need an AWS account, as well as a Slack channel or Amazon Chime webhook to configure with AWS Chatbot.

    Step 1: Create an SNS topic

    First, create an SNS topic to connect CloudWatch with AWS Chatbot. If you already have an existing SNS topic, you can skip this step.

    In the SNS console, choose Topics, Create topic. Give your topic a descriptive name and leave all other parameters at their default.

    Step 2: Create a CloudWatch alarm

    For this post, create an alarm for an existing Lambda function. You want to receive a notification every time the function invocation fails so that you can diagnose and fix problems as they occur.

    In the CloudWatch console, choose Alarms, Create alarm.

    Select the metric to monitor, such as the Errors metric for a Lambda function. Configure the following fields:

    • For Period, enter 1 minute.
    • For Statistic, enter Sum.
• For Threshold, choose Greater/Equal and enter 1.

    These settings make it easier to trigger a test alarm.

    For Send a notification to…, choose the SNS topic that you created in Step 1. To receive notifications when the alarm enters the OK state, choose Add notification, OK, and repeat the process.

    Complete the creation process with the default settings.
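Steps 1 and 2 can equivalently be scripted. As a minimal sketch, the dictionary below mirrors the console settings above (Sum of Errors >= 1 over a 1-minute period) as arguments for boto3's `put_metric_alarm`; the function name and topic ARN are placeholders, and applying it requires AWS credentials:

```python
def build_error_alarm(function_name, topic_arn):
    """Arguments for cloudwatch.put_metric_alarm matching the console
    walkthrough: Sum of Lambda Errors >= 1 over a 1-minute period."""
    return {
        "AlarmName": f"{function_name}-errors",
        "Namespace": "AWS/Lambda",
        "MetricName": "Errors",
        "Dimensions": [{"Name": "FunctionName", "Value": function_name}],
        "Statistic": "Sum",
        "Period": 60,                 # 1 minute
        "EvaluationPeriods": 1,
        "Threshold": 1,
        "ComparisonOperator": "GreaterThanOrEqualToThreshold",
        "AlarmActions": [topic_arn],  # notify the Chatbot SNS topic on ALARM
        "OKActions": [topic_arn],     # and again when the alarm returns to OK
    }

alarm = build_error_alarm(
    "my-function",  # placeholder Lambda function name
    "arn:aws:sns:us-east-1:123456789012:chatbot-alarm-topic",  # placeholder ARN
)
# Apply with (requires AWS credentials):
#   import boto3
#   boto3.client("cloudwatch").put_metric_alarm(**alarm)
```

Routing both AlarmActions and OKActions to the same topic reproduces the "Add notification, OK" step in the console.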

    Step 3: Create an AWS Chatbot configuration

    To start configuring AWS Chatbot with the chat client, in the AWS Chatbot console, choose Configure new client, and choose either Amazon Chime or Slack.

    Using AWS Chatbot with Slack

    In the Configure new client pop-up, choose Slack.

    The setup wizard redirects you to the Slack OAuth 2.0 page. In the top-right corner, select the Slack workspace to configure and choose Agree. Your Slack workspace installs the AWS Slack App, and the AWS account that you logged in with can now send notifications.

    Slack redirects you from here to the Configure Slack Channel page. Select the channel in which to receive notifications. You can either select a public channel from the dropdown list or paste the URL or ID of a private channel.

    Find the URL of your private Slack channel by opening the context (right-click) menu on the channel name in the left sidebar in Slack, and choosing Copy link. AWS Chatbot can only work in a private channel if you invite the AWS bot to the channel by typing /invite @aws in Slack.

    Using AWS Chatbot with Amazon Chime

    On the Configure new client pop-up, choose Amazon Chime.

    For Webhook URL, copy the webhook URL from Amazon Chime and paste it into the text box. Give your webhook a description that reflects its location. I used the chat room name and the webhook name, separated by a slash.

    Step 4: Complete the setup

After you choose the Slack channel or an Amazon Chime webhook, under IAM Permissions, create a new role from the template or select an existing role. CloudWatch alarms can only display metric trends if AWS Chatbot has permission to call the CloudWatch API and retrieve metric details; to grant this, choose Notifications permissions.

    Finally, under SNS topics, select the SNS topic that you created in Step 1. You can select multiple SNS topics from more than one public Region, granting them all the ability to notify the same Slack channel.

    After you choose Configure, the configuration completes.

    Step 5: Test the alarm

    You can test whether you properly configured AWS Chatbot by manually forcing your test Lambda function to fail. This should trigger an alarm and a notification in either Slack or Amazon Chime.

    Test the alarm in Slack

    Test the alarm in Amazon Chime


AWS Chatbot expands the communication tools that your team already uses every day to coordinate and bond. In this post, I walked you through the configuration steps to set up a CloudWatch alarm with AWS Chatbot in a Slack channel or Amazon Chime chat room.

    from AWS DevOps Blog