
Major Wholesaler Grows Uptime by Refactoring eComm Apps for AWS DevOps



A recent IDC survey of the Fortune 1000 found that the average cost of an infrastructure failure is $100,000 per hour, and that the average total cost of unplanned application downtime is between $1.25 billion and $2.5 billion per year. Our most recent customer relies heavily on its eCommerce site for business and, knowing the extreme cost of infrastructure failure, turned to the benefits of cloud-based DevOps. The firm sought to increase uptime, scalability, and security for its eCommerce applications by refactoring them for AWS DevOps.

What is Refactoring?

Refactoring is the process of re-architecting, and often partially re-coding, an existing application to take advantage of cloud-native frameworks and functionality. While this approach can be time-consuming and resource-intensive, it yields low ongoing cloud spend: organizations that refactor can modify their applications and infrastructure to take full advantage of cloud-native features, maximizing operational cost efficiency in the cloud.

AWS DevOps Refactoring

The company engaged the DevOps consulting team at Flux7 to help architect and build a DevOps platform solution. The team's first goal was to ensure that the applications were architected for high availability at all levels in order to meet the company's aggressive SLA goals. The first step was to build a common DevOps platform for the company's eCommerce applications and migrate the underlying technology to a common stack consisting of ECS, CloudFormation, and GoCD, an open source build and release tool from ThoughtWorks. (In the process, the team migrated one of the two applications from Kubernetes and Terraform to the new technology stack.)

As business-critical applications for the future of the retailer, the eCommerce applications needed to provide greater uptime, scalability, and data security than the legacy, on-premises applications from which they were refactored. The AWS experts at Flux7 therefore built a CI/CD platform using AWS DevOps best practices, reducing manual tasks and freeing the team to focus on strategic work.

Further, the Flux7 DevOps team worked alongside the retailer’s team to:

  • Migrate the refactored applications to new AWS Accounts using the new CI/CD platform;
  • Automate remediation, recovering from failures faster;
  • Create AWS Identity and Access Management (IAM) resources as infrastructure as code (IaC);
  • Deliver the new applications in a Docker container-based microservices environment;
  • Deploy CloudWatch and Splunk for security and log management; and
  • Create DR procedures for the new applications to further ensure uptime and availability.

Moving forward, application updates will be rolled out via a blue-green deployment process that Flux7 helped the firm establish in order to achieve its zero downtime goals.

Business Benefits

The customer's developers are an advanced team, yet they were able to further their skills through Flux7 knowledge transfer sessions, learning how to apply DevOps best practices and accelerate adoption of the new AWS DevOps platform. At an estimated downtime cost of 6x the industry average, the firm could not withstand the financial or reputational impact of a downtime event. The team is happy to report that it is now meeting its zero downtime SLA objectives, enabling continuous system availability and, with it, growing customer satisfaction.

Subscribe to the Flux7 Blog
 

from Flux7 DevOps Blog

AWS Case Study: Energy Leader Digitizes Library for Analytics, Compliance



The oil and gas industry has a rich history, one that is deeply intertwined with regulation — with Federal and State rules that regulate everything from exploration to production and transportation to workplace safety. As a result, our latest customer had amassed millions of paper documents to ensure its ability to prove compliance. It also maintained files with vast amounts of geological data that served as the backbone of its intellectual property.

With over seven million physical documents saved and filed in deep storage, this oil and gas industry leader called on the AWS consulting services team at Flux7 for help digitizing its vast document library. In the process, it also wanted to make it easy to archive documents moving forward and to ensure that its operators could easily search for and find data.

Read the full AWS Case Study here.

Working with AWS Consulting Partner Flux7, the company created a plan to digitize and catalog its vast document library. AWS had recently announced a new tool at re:Invent, Amazon Textract, which, although still in preview, was the ideal tool for the task.

What is Textract?

For those of you unfamiliar with Amazon Textract, it is a new service that uses machine learning to automatically extract text and data from scanned documents. It goes beyond Optical Character Recognition (OCR) by also identifying the contents of fields in forms and information stored in tables, which allows users to conduct full data analytics on documents once they are digitized.

The Textract Proof of Concept

The proof of concept included several dozen physical documents, which were scanned and uploaded to S3. From there, Lambda functions were triggered that launched Textract. In addition to the extracted data being indexed for Kibana, users are presented with URLs for the specific source documents.
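
As a rough sketch of that flow, assuming an S3 ObjectCreated trigger and the synchronous Textract API (which handles single-page images; multipage PDFs would instead use the asynchronous start_document_analysis call), a handler might look like this. The Kibana indexing details are omitted since they are not spelled out in the case study:

import json
import boto3

textract = boto3.client("textract")

def handler(event, context):
    # Invoked once per scanned document uploaded to S3.
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]

    # Extract raw text plus form fields and tables, not just OCR text.
    response = textract.analyze_document(
        Document={"S3Object": {"Bucket": bucket, "Name": key}},
        FeatureTypes=["FORMS", "TABLES"],
    )

    # LINE blocks hold detected text; KEY_VALUE_SET and TABLE blocks in the
    # same list carry the form and table structure.
    lines = [b["Text"] for b in response["Blocks"] if b["BlockType"] == "LINE"]

    # The PoC indexed results for search in Kibana; an Elasticsearch
    # bulk-index call would go here (endpoint details are assumptions).
    print(json.dumps({"document": key, "line_count": len(lines)}))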

Because Amazon Textract automatically detects the key elements in a document and the data relationships in forms and tables, it can extract data within the context in which it was originally created. With a core set of key parameters, such as revision date, extracted by Textract, operators are able to search by key business parameters.

Analytics and Compliance

Interfacing with the data via Kibana, end users can now create smart search indexes that allow them to quickly and easily find key business data. Operators can also build automated approval workflows and better meet document archival rules for regulatory compliance. And the company no longer needs to send an employee to the warehouse to retrieve files, eliminating a labor-intensive task.

At Flux7, we relish the ability to help organizations apply automation, freeing their employees from manual tasks and giving them time to focus on strategic, business-impacting work. Read more Energy industry AWS case studies for best practices in cloud-based DevOps automation for enterprise agility.

For five tips on how to apply DevOps in your Oil, Gas or Energy enterprise, check out this article our CEO, Dr. Suleman, recently wrote for Oilman magazine. (Note that a free subscription is required.) Or, download the full case study here today.

Subscribe to the Flux7 Blog
 

from Flux7 DevOps Blog

AWS CodePipeline Approval Gate Tracking


With the pursuit of DevOps automation and CI/CD (Continuous Integration/Continuous Delivery), many companies are migrating their applications onto the AWS cloud to take advantage of the services AWS has to offer. AWS provides native tooling to help achieve CI/CD, and one of the core services it provides for that purpose is AWS CodePipeline, a service that lets users build CI/CD pipelines for the automated build, test, and deployment of applications.

A common practice when using CodePipeline for CI/CD is to automatically deploy applications into multiple lower environments before reaching production. These lower environments could be used for development, testing, business validation, and other use cases. As a pipeline progresses through its stages, businesses often require manual approval gates between the deployments to successive environments.

Each time a pipeline reaches one of these manual approval gates, a human must log into the console and either approve (allow the pipeline to continue) or reject (stop the pipeline from continuing) the gate. Often, different teams or divisions of a business are responsible for their own application environments and, as a result, are also responsible for allowing or rejecting a pipeline's deployment into their environment via the relevant manual approval gate.
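
The console's approve/reject buttons are simply a front end for the PutApprovalResult API call, so a gate can also be answered programmatically. Here is a minimal boto3 sketch using the placeholder pipeline and stage names that appear in the sample event later in this post:

import boto3

codepipeline = boto3.client("codepipeline")

# The token must match the currently pending approval; it can be read
# from get_pipeline_state() on the pipeline in question.
codepipeline.put_approval_result(
    pipelineName="testing-pipeline",
    stageName="qa-approval",
    actionName="qa-approval",
    result={"summary": "I approve", "status": "Approved"},  # or "Rejected"
    token="123123123-abcabcabc-123123123-abcabc",
)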

A problem a business may run into is figuring out how to easily keep track of who is approving or rejecting which approval gates in which pipelines. With potentially hundreds of pipelines deployed in an account, it can be very difficult to track and record approval gate actions through manual processes. For audits, this creates a cumbersome problem, as there may eventually be a need to provide evidence of why a specific pipeline was approved or rejected on a certain date and the reasoning behind the result.

So how can we keep a long-term record of CodePipeline manual approval gate actions in an automated, scalable, and organized fashion? Through the use of AWS CloudTrail, AWS Lambda, AWS CloudWatch Events, Amazon S3, and Amazon SNS, we can create a solution that provides this type of record keeping.

Each time someone approves or rejects an approval gate within a pipeline, that API call is logged in CloudTrail under the event name “PutApprovalResult”. We can configure a CloudWatch event rule to listen for that specific CloudTrail API action and trigger a Lambda function to perform a multitude of tasks. This is what that CloudTrail event looks like inside the AWS console:


{
    "eventVersion": "1.05",
    "userIdentity": {
        "type": "AssumedRole",
        "principalId": "AAAABBBCCC111222333:newuser",
        "arn": "arn:aws:sts::12345678912:assumed-role/IamOrg/newuser",
        "accountId": "12345678912",
        "accessKeyId": "1111122222333334444455555",
        "sessionContext": {
            "attributes": {
                "mfaAuthenticated": "true",
                "creationDate": "2019-05-23T15:02:42Z"
            },
            "sessionIssuer": {
                "type": "Role",
                "principalId": "1234567093756383847",
                "arn": "arn:aws:iam::12345678912:role/OrganizationAccountAccessRole",
                "accountId": "12345678912",
                "userName": "newuser"
            }
        }
    },
    "eventTime": "2019-05-23T16:01:25Z",
    "eventSource": "codepipeline.amazonaws.com",
    "eventName": "PutApprovalResult",
    "awsRegion": "us-east-1",
    "sourceIPAddress": "1.1.1.1",
    "userAgent": "aws-internal/3 aws-sdk-java/1.11.550 Linux/4.9.137-0.1.ac.218.74.329.metal1.x86_64 OpenJDK_64-Bit_Server_VM/25.212-b03 java/1.8.0_212 vendor/Oracle_Corporation",
    "requestParameters": {
        "pipelineName": "testing-pipeline",
        "stageName": "qa-approval",
        "actionName": "qa-approval",
        "result": {
            "summary": "I approve",
            "status": "Approved"
        },
        "token": "123123123-abcabcabc-123123123-abcabc"
    },
    "responseElements": {
        "approvedAt": "May 23, 2019 4:01:25 PM"
    },
    "requestID": "12345678-123a-123b-123c-123456789abc",
    "eventID": "12345678-123a-123b-123c-123456789abc",
    "eventType": "AwsApiCall",
    "recipientAccountId": "12345678912"
}

When that CloudWatch event rule is triggered, the Lambda function it executes can be configured to perform multiple tasks (sketched after the list), including:

  • Capture the CloudTrail event log data from that “PutApprovalResult” API call and log it into the Lambda function's CloudWatch log group.
  • Create a dated text file entry in an S3 bucket containing useful and unique information about the pipeline manual approval gate action.
  • Send out an email notification containing unique information about the pipeline manual approval gate action.
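
A minimal sketch of such a function follows; the bucket and topic names are placeholder environment variables, and the full implementation lives in the repository linked at the end of this post:

import json
import os
import boto3

s3 = boto3.client("s3")
sns = boto3.client("sns")

# Placeholder names -- the real stack would inject these via CloudFormation.
BUCKET = os.environ.get("TRACKING_BUCKET", "approval-gate-tracking")
TOPIC_ARN = os.environ.get("TOPIC_ARN", "arn:aws:sns:us-east-1:123456789012:approvals")

def handler(event, context):
    # CloudWatch Events delivers the CloudTrail record under "detail".
    detail = event["detail"]
    params = detail["requestParameters"]
    result = params["result"]

    # 1. Log the raw PutApprovalResult event into this function's log group.
    print(json.dumps(detail))

    # 2. Write a dated entry: pipeline_name/year/month/day/gate-RESULT-time.txt
    event_date, event_time = detail["eventTime"].split("T")
    key = "PipelineApprovalGateActions/{}/{}/{}-{}-{}.txt".format(
        params["pipelineName"],
        event_date.replace("-", "/"),
        params["actionName"],
        result["status"].upper(),
        event_time.rstrip("Z"),
    )
    body = "{} by {}: {}".format(
        result["status"], detail["userIdentity"]["arn"], result["summary"])
    s3.put_object(Bucket=BUCKET, Key=key, Body=body.encode())

    # 3. Email the same summary to subscribers through SNS.
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject="CodePipeline approval gate action",
        Message=body,
    )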

The CloudWatch Event Rule provides a way to narrow down and capture the specific CloudTrail event named “PutApprovalResult”. Below is a snippet of this event rule defined in AWS CloudFormation.

  ApprovalGateEventRule:
    Type: AWS::Events::Rule
    Properties: 
      Description: Event Rule that tracks whenever someone approves/rejects an approval gate in a pipeline
      EventPattern: 
        {
          "source": [
            "aws.codepipeline"
          ],
          "detail-type": [
            "AWS API Call via CloudTrail"
          ],
          "detail": {
            "eventSource": [
              "codepipeline.amazonaws.com"
            ],
            "eventName": [
              "PutApprovalResult"
            ]
          }
        }

The Lambda Function provides the automation and scalability needed to perform this type of approval gate tracking at any scale. The SNS topic provides the ability to send out email alerts whenever someone approves or rejects a manual approval gate in any pipeline.

The recorded text file entries in the S3 bucket provide long-term, durable storage for CodePipeline manual approval gate results. To make those results easy to rediscover later, it is best to organize the entries in a scheme such as “pipeline_name/year/month/day/gate_name_timed_entry.txt“. An example of a recording could look like this:

PipelineApprovalGateActions/testing-pipeline/2019/05/23/dev-approval-APPROVED-11:50:45-AM.txt

Below is a diagram of a solution that can provide the features described above.

The source code and CloudFormation template for a fully built-out implementation of this solution can be found here: codepipeline-approval-gate-tracking.

To deploy this solution right now, click the Launch Stack button below.

The post AWS CodePipeline Approval Gate Tracking appeared first on Stelligent.

from Blog – Stelligent

IT Modernization and DevOps News Week in Review



The Uptime Institute announced findings of its ninth annual Data Center Survey, unveiling several interesting — and important — data points. Underscoring what many in the industry are feeling about the skill gap, the survey found that 61% of respondents said they had difficulty retaining or recruiting staff — up from 55% a year earlier. And, according to the synopsis, “while the lack of women working in data centers is well-known, the extent of the imbalance is notable” with one-quarter of respondents saying they had no women at all on their design, build or operations teams.

To stay up-to-date on DevOps automation, Cloud and Container Security, and IT Modernization, subscribe to our blog:

Subscribe to the Flux7 Blog

When it comes to downtime, outages continue to cause significant problems, with little improvement over the past year: 34% of respondents said they had an outage or severe IT service degradation in the past year, and 10% said their most significant outage cost more than $1 million. When it comes to public cloud, 20% of operators reported that they would be more likely to put workloads in a public cloud if there were more visibility, while 50% of respondents already using public cloud for mission-critical applications said they do not have adequate visibility.

DevOps News

  • Atlassian has announced Status Embed, a service designed to boost customer experience and communication by displaying the current state of services where customers are most likely to see it, such as your homepage, app or help center.
  • GitHub has brought to market repository templates to make boilerplate code management and distribution a “first-class citizen” on GitHub, according to the company.
  • HashiCorp announced the availability of HashiCorp Nomad 0.9.2, a workload orchestrator for deploying containerized and legacy apps across multiple regions or cloud providers. Nomad 0.9.2 includes preemption capabilities for service and batch jobs.
  • SDXCentral reports that, “VMware is developing a multi-cloud management tool that Joe Kinsella, chief technology officer of CloudHealth at VMware, describes as ‘Google docs for IT management, which is the ability to collaborate and share across an organization.’”

AWS News

  • Amazon announced that AWS Organizations now supports tagging and untagging of AWS accounts, allowing operators to assign custom attributes, or tags, to the AWS accounts they manage with AWS Organizations. According to AWS, the ability to attach tags such as owner name, project, business group, cost center, and environment directly to an AWS account makes it easier for people in the organization to get information on particular accounts without referring to a separate spreadsheet or other out-of-band tracking method. (A quick sketch of the tagging API appears after this list.)
  • Also introduced this week is AWS Systems Manager OpsCenter which is designed to help operators view, investigate, and resolve operational issues related to their environment from a central location.
  • Amazon has launched a new service to enhance recovery. Host Recovery for Amazon EC2 will now automatically restart instances on a new host in the event of an unexpected hardware failure on a Dedicated Host. Host Recovery will reduce the need for manual intervention, minimize recovery time and lower the operational burden for instances running on Dedicated Hosts. As a bonus, it has built-in integration with AWS License Manager to automatically track and manage licenses. There are no additional EC2 charges for using Host Recovery.
  • Last, our AWS Consulting team thought this foundational blog on Getting started with serverless was a good read for those of you looking to build serverless applications to take advantage of its agility and reduced TCO.
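
As a quick boto3 sketch of the account tagging item above (the account ID and tag values are placeholders):

import boto3

# Tagging must be done from the organization's management account.
org = boto3.client("organizations")

# Attach owner and cost-center tags to a member account (placeholder ID).
org.tag_resource(
    ResourceId="123456789012",
    Tags=[
        {"Key": "owner", "Value": "data-platform"},
        {"Key": "cost-center", "Value": "cc-1234"},
    ],
)

# Read them back, e.g. from a cost-reporting script.
print(org.list_tags_for_resource(ResourceId="123456789012")["Tags"])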

Flux7 News

  • Join AWS and Flux7 as they present a one day workshop on how Serverless Technology is impacting business now (and what you need to get started). Serverless technology on AWS is enabling companies by building modern applications with increased agility and lower total cost of ownership. Find additional information and register here.
  • Read CEO Dr. Suleman’s InformationWeek article, Five-Step Action Plan for DevOps at Scale in which he discusses how DevOps is achievable at enterprise scale if you start small, create a dedicated team and effectively use technology patterns and platforms.
  • Also published this week is Dr. Suleman’s take on Servant Leadership, as published in Forbes. In Why CIOs Should Have A Servant-Leadership Approach he shares why CIOs shouldn’t be in a position where they end up needing to justify their efforts. Read the article for the reason why. (No, it isn’t the brash conclusion you might think it is.)

Subscribe to the Flux7 Blog

Written by Flux7 Labs

Flux7 is the only Sherpa on the DevOps journey that assesses, designs, and teaches while implementing a holistic solution for its enterprise customers, thus giving its clients the skills needed to manage and expand on the technology moving forward. Not a reseller or an MSP, Flux7 recommendations are 100% focused on customer requirements and creating the most efficient infrastructure possible that automates operations, streamlines and enhances development, and supports specific business goals.

from Flux7 DevOps Blog

Digital Transformation & The Agile Enterprise in Oil and Gas



According to the World Economic Forum, digital transformation could unlock approximately $1.6 trillion of value for the Oil and Gas industry, its customers, and society. This value is derived from greater productivity, better system efficiency, savings from reduced resource usage, and fewer spills and emissions. Yet the journey to these digital transformation benefits begins with a proverbial first step, one that can be elusive for large oil and gas enterprises that have vast legacy technologies and complicated organizational structures to navigate.

At Flux7, we are proponents of the Agile Enterprise. While much work has been put into defining what makes an enterprise agile, we are fans of the research by McKinsey, which identified a common set of five disciplines that agile enterprises share. Defined by their practices more than anything else, these agile organizations deploy an agile culture and agile technology to effectively support their digital transformation initiatives.

Becoming an Agile Enterprise is critically important in the oil and gas industry, where unparalleled transformation is happening at a rapid pace. From new extraction methods to IoT and changing customer expectations, the industry is evolving quickly. For long-term, scalable success, digital efforts must be a cornerstone of the transition to becoming an Agile Enterprise.

DevOps for Oil and Gas

Equal parts people, process and technology, DevOps is a key component of marrying digital and agile. With a solid cloud-based DevOps platform, automation to streamline processes and ensure they are followed, and a Center of Excellence in place to help train teams, oil and gas enterprises have a roadmap to digital transformation success with DevOps.

For a more detailed road map to DevOps success across the enterprise, please download our white paper:

5 Steps to Enterprise DevOps at Scale

Let’s explore a few examples of organizations in the energy industry that have applied DevOps best practices to facilitate digital transformation and reach greater enterprise agility:

TechnipFMC, a world leader in project management, engineering, and construction for the energy industry, was looking to ensure compliance and security for cloud computing across its global sites and the perimeter networks that support its client-facing applications. To accomplish this goal, TechnipFMC wanted a consistent, self-service solution that would let its global IT employees easily provision cloud infrastructure and migrate externally facing Microsoft SharePoint sites to the cloud. With templates and automation, TechnipFMC can now enforce security and compliance standards in every deployment, which enhances overall perimeter network security. In addition, TechnipFMC expects to reduce operational costs while growing operational effectiveness. Listen as TechnipFMC's John Hutchinson shares the experience at re:Invent, or read the full Technip story.

A renewable energy leader had two parallel goals: it wanted to use an AWS cloud migration as an opportunity to overhaul its business systems and, in the process, build standardization. It also aimed to increase developer agility, grow global access for its workers, and decrease capital expenses. Based on its application portfolio TCO analysis, a lift-and-shift migration approach was pursued. With 80% of its applications now defined by a small number of templates, the company has standardized its software builds, ensuring security best practices are followed by default. The enterprise has accelerated innovation, sped time to market, and improved operational efficiency. Preview their story here.

Fugro, which collects and provides highly specialized interpretation of oceanic geological data, is able to keep skilled staff onshore using an Internet of Things (IoT) platform model. Called OARS, its cloud-based project provides faster interpretation of data and decisions. With continuous delivery of code, its vessels are sure to always have the newest software features at their fingertips. And, new environments which previously took weeks to build, now launch in a matter of hours, providing better access to information across global regions. Read the full Fugro case study here.

A global oil field services company was looking to embrace digitalization with a SaaS-model solution that integrated data and business process management while addressing the operational workflows needed for greater scalability and more efficient delivery. The firm implemented a pipeline for delivering AMIs provisioned with Ansible and Docker containers, streamlining complex workflows and allowing the firm to reap efficiencies of scale from automation, meet tight deadlines, and ensure SOC 2 compliance. The firm now has pipelines for delivering the resources and processes needed to build and deploy current and future solutions, ensuring digital transformation in the short and long term.

We are living in an uncertain, complex, and constantly changing world. To stay competitive, oil and gas enterprises must react to change at unprecedented speed, which has ushered in a strong focus on becoming an agile enterprise. With DevOps best practices as the foundation of scalable digital transformation, they can effectively balance stability with ever-evolving customer needs, technologies, and overall market conditions.

For five tips on how to apply DevOps in your Oil, Gas or Energy enterprise, check out this article our CEO, Dr. Suleman, recently wrote for Oilman magazine. (Note that a free subscription is required.)  Or, you can find additional resources on our Energy resource page.

Subscribe to the Flux7 Blog

from Flux7 DevOps Blog

Upskill Your Team to Address the Cloud, Kubernetes Skills Gap


This article originally appeared on Forbes.

According to CareerBuilder's Mid-Year Job Forecast, 63% of U.S. employers planned to hire full-time, permanent workers in the second half of 2018. This growing demand, coupled with low unemployment, is driving a real talent shortage. The technology field in particular is experiencing acute pain when it comes to finding skilled talent. Indeed, more than five million IT jobs are expected to be added globally by 2027, reports Business Insider.

Of these five million jobs, the two most requested tech skills, according to research by DICE, are Kubernetes and Terraform, with the company also finding that DevOps Engineer has quickly moved up the ranks of the top-paid IT careers. As companies invest in IT modernization with approaches like Agile and DevOps and technologies like cloud computing and containers, skills to support these initiatives are in increasing demand.

The problem is not set to improve in the near or mid-term, with many companies reporting that it's taking longer to find candidates with the right technology and business skills for driving digital innovation. A survey by OpsRamp found that 94% of HR departments take at least 30 days to fill an open position, and 25% report taking 90 days or more. With internal pressure for innovation that won't wait out a protracted hiring process, I encourage leaders to look internally, using two key levers to help grow innovation.

Upskill Your Team

One way to work around a skills gap within the organization is to upskill the team. Rather than hiring new headcount that is already difficult to find, train your existing team (or a few members of the team who can in turn train others). While there are a variety of training options, from classroom training to virtual classes and more, our experience at Flux7 has shown that hands-on training works best for technical skills like Terraform or Kubernetes.

Specifically, a successful model consists of the following:

  • Find a coach that can work hand-in-hand with your team
  • Identify a small but impactful project for the coach and team to work on together with the goal of having the coach train the team along the way
  • Start the project with the coach taking the initial lead sharing what they are doing, why and how with your team shadowing
  • Slowly transition over the course of the project to the coach assigning tasks to your team, with your employees ultimately leading tasks and checking in with the coach as needed.


In this way, teams are able to learn in a practical, hands-on manner, taking ownership of the environment as they learn and grow — all while having access to an expert who can guide, correct and reinforce learning.

In addition to gaining much-needed skills in-house, upskilling your existing team has retention benefits. In a DICE survey of tech professionals, 71% said that training and education are important to them, yet only 40% currently receive company-paid training and education. Underscoring the importance of training to technologists, 45% of those who are satisfied with their job receive training; conversely, only 28% of those who are dissatisfied with their job receive training.

Grow Productivity with Automation

In addition to upskilling your team, automation is important for continuing to expand your capacity. Approaches like DevOps embrace the use of automation to create continuous integration and delivery, reducing handoffs and speeding time to market. Automation can also keep employees from working on tactical, repeatable tasks and instead keep them focused on strategic, business-impacting work.

Let me give you an example. I recently had the opportunity to work with a large semiconductor company that sought to bolster its team's cloud, container, and Kubernetes talents in order to support a new AWS initiative. Working hands-on in the cloud to automate its pipelines and other processes, the company was able to shrink tasks that formerly took days down to mere minutes.

In addition to working elbow-to-elbow with a cloud coach on the project, the company also initiated weekly knowledge transfer sessions with the team to ensure everyone had received the same level of training and was ready for the next week's work. At the end of the project, the team was ready to train others in the organization and felt confident that they were building better products faster, as their time was focused less on tactical work and more on making a strategic impact. Another benefit to the team, and the company as a whole, is that by taking a cross-functional DevOps approach, employees felt that communication improved, making their work more enjoyable.

In a recent poll of over 70,000 developers, HackerRank found that salary wasn't the lead driver of what they look for in a job. Rather, the most important factors for developers, across all job levels and functions, were the opportunity for professional growth and the opportunity to work on interesting problems. The application of automation not only increases developer productivity and code throughput but provides the space to work on interesting projects, which leads to greater job satisfaction and retention.

With competition growing for employees skilled in Kubernetes, Terraform, DevOps, and more, growing your own talent is an increasingly attractive approach. UC Berkeley found that the average cost to hire a new professional employee may be as high as $7,000 (while replacement costs can be as great as 2.5x salary), not to mention the lost opportunity cost of placing projects on hold while searching for skilled talent. Upskilling employees, combined with greater automation, can increase code throughput and get more projects to market faster, maximizing near-term opportunity. Just as importantly, presenting employees with new skills and the opportunity to do interesting work has been shown to increase job satisfaction and retention.

Learn more about addressing the skills gap, building cloud-native infrastructure and more on the Flux7 DevOps blog. Subscribe today:

Subscribe to the Flux7 Blog

from Flux7 DevOps Blog

IT Modernization and DevOps News Week in Review


Palo Alto Networks made the most of a short week by announcing its plan to acquire container security company Twistlock for $410 million. It also announced plans to acquire serverless security company PureSec and launched Prisma, its new cloud security service. With cloud and container security top of mind for many, the acquisitions will prove to be valuable assets as enterprises seek to build security in.

To stay up-to-date on DevOps automation, Cloud and Container Security, and IT Modernization, subscribe to our blog:

Subscribe to the Flux7 Blog

DevOps News

  • Red Hat Ansible Tower 3.5 is now generally available. The release now includes support for RHEL 8, external credential vaults via credential plugins, and Become plugins. In addition, Red Hat noted that the Ansible Tower 3.5 release saw over 160 issues closed.
  • Red Hat Ansible Engine 2.8 is now available. In addition to several enhancements, the release includes new features such as Ansible content (Collections), BECOME as the default privilege escalation path, the removal of the paramiko dependency, and BECOME plugins, along with other notable improvements and changes.
  • TeamCity 2019.1, the first major release of this year, is here. The release features a redesigned UI, native GitLab integration, and support for GitLab and Bitbucket Server pull requests, as well as token-based authentication, detection and reporting of Go tests, faster build agent upgrades, and AWS Spot Fleet requests.

AWS News

Flux7 News

  • Join AWS and Flux7 as they present a one day workshop on how Serverless Technology is impacting business now (and what you need to get started). Serverless technology on AWS is enabling companies by building modern applications with increased agility and lower total cost of ownership. Find additional information and register here.
  • Flux7 has been ranked by Growjo as one of the fastest growing companies in the Austin area. Read more about Flux7’s customer and business momentum.

Subscribe to the Flux7 Blog


from Flux7 DevOps Blog

Growjo Ranks Flux7 Among Fastest Growing Austin Companies



Growjo is on a mission to identify the top growing companies across regions of the US, and we're excited to announce that Flux7 has been ranked among the fastest growing companies in the Austin area. Flux7's rank of #88 is based on growth indicators and a predictive analysis algorithm unique to Growjo that not only creates the most complete list of fast-growing companies but also serves as a strong predictor of future growth.

In addition to the Austin ranking, the Flux7 DevOps consulting services firm has been named to Growjo’s Tech Services, State of Texas, and overall 10k list of fastest growing companies. Calculated from high growth indicators that include employee size, brand awareness, funding, acquisitions, hiring plans, new locations and additional trigger events, the Growjo formula predicts that Flux7 is both growing at an increased rate and is poised to grow significantly through 2019 and beyond.

In response to the ranking, Aater Suleman, Flux7 co-founder and CEO, said “Flux7 succeeds when our customers succeed. We seek to make it possible for organizations to experiment more, fail cheap, and measure results accurately through an innovation lab strategy. Today’s ranking illustrates the power of this approach combined with Flux7 values of humbleness, transparency, and innovation to solve business challenges.”

At Flux7, we view customer growth as a significant vote of confidence; this year we are humbled to have so many new and repeat customers loudly affirming their confidence in our employees and our approach to solving business challenges. We are truly honored to be an integral part of our customers' digital transformations, as we saw customer contracts grow 247% year-over-year in the first quarter of 2019. That growth closely follows our 2018 year-ending cumulative three-year revenue growth of 547%.

Since its inception, Flux7 has established itself as a thought leader and valuable partner for enterprise and midmarket businesses aiming to modernize their IT practices and retain management of their own systems. Flux7 has been able to establish a unique position in the market by filling a need for enterprises to make rapid modernization progress while learning new technical skills for greater business agility.

With its Enterprise DevOps Framework, Flux7 helps organizations apply DevOps methodologies to reap benefits like greater innovation, enhanced security, increased scalability and more.

According to Growjo, inclusion in the Growjo 10000 is a better indicator of success than any other “fast company list”. Want to grow with us? Check out our career opportunities here: https://www.flux7.com/careers/. Interested in having our DevOps consulting team help with your IT modernization project? Reach out to us today.

Subscribe to the Flux7 Blog

from Flux7 DevOps Blog

IT Modernization and DevOps News Week in Review



At its ChefConf 2019 held last week in Seattle, Chef announced several enhancements to its Chef Enterprise Automation Stack (EAS). New features include comprehensive Application Operations Dashboards which Chef describes as providing end-to-end visibility of the application lifecycle; new Migration Accelerators; and new versions of Chef Infra and Chef InSpec that use Chef Habitat to make it even easier to deploy, update and manage the EAS regardless of environment.

To stay up-to-date on DevOps automation, CI/CD and IT Modernization, subscribe to our blog here:

Subscribe to the Flux7 Blog

DevOps News

GitHub announced several noteworthy news items at its Satellite developer conference last week, and there was other notable DevOps news as well:

  • It has acquired Dependabot which will give GitHub the ability to monitor and automatically open pull requests for dependencies with known security vulnerabilities.
  • GitHub has partnered with WhiteSource, an open-source security company, to help developers more easily detect open-source vulnerabilities in their GitHub repos.
  • GitHub Enterprise has been updated with a slew of new features such as enterprise accounts, a new account type that connects organizations; two new user roles, Triage and Maintain, that teams can use to grow and scale securely, addressing their access control needs; the ability for cloud administrators to now access audit log events using a new GraphQL API; security vulnerability alerts, token scanning, and more.
  • GitLab 11.11 has been released. It features multi-assignment for merge requests; the ability to automatically push an alert to Slack and/or Mattermost when an event, such as a deployment, occurs; support for the Windows Container Executor for GitLab Runners, enabling Docker containers on Windows; and more.
  • HashiCorp released Terraform 0.12 which is focused on improvements to the Terraform language. The aim: to make configurations for more complex situations more readable, and improve the usability of re-usable modules.
  • Our team enjoyed this blog, Using Infoblox As A Dynamic Inventory In Red Hat Ansible Tower, in which Victor da Costa shares how dynamic inventory can replace headaches associated with tracking Configuration Items (CIs) — whether you’re using a CMDB or spreadsheet.
  • A fun read our DevOps team also enjoyed was the NY Times self-published case study on how it built a Slack bot to keep track of Reddit conversations around New York Times articles.

AWS News

  • A new feature that makes the use of encrypted Amazon EBS (Elastic Block Store) volumes even easier was welcomed by our DevOps consulting team, who can now specify whether new EBS volumes should be created in encrypted form and, if so, whether to use their own key or a default AWS key. (See the sketch after this list.)
  • Our AWS consulting services team was excited to see that starting August 1st AWS Config rules will switch to a pay-per-use pricing model, which means a lower bill for almost all existing AWS Config rules customers. AWS Config helps operators maintain AWS configuration compliance.
  • Amazon announced preview availability of Amazon CloudWatch Container Insights, which allows operators to monitor, isolate, and diagnose their containerized applications and microservices environments.
  • In separate CloudWatch news, Amazon has announced that CloudWatch Logs now support percentiles in metric filters, allowing operators to turn log data into numerical CloudWatch metrics that can be graphed.
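
As a sketch of the EBS encryption item above, assuming placeholder region, availability zone, and key alias, opting in to default encryption and creating a volume with a customer-managed key might look like this:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Opt this account/region in to encrypting all new EBS volumes by default,
# and point the default at a customer-managed CMK rather than the AWS key.
ec2.enable_ebs_encryption_by_default()
ec2.modify_ebs_default_kms_key_id(KmsKeyId="alias/my-ebs-key")

# Volumes can still be created with an explicit key on a per-volume basis.
ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,  # GiB
    VolumeType="gp2",
    Encrypted=True,
    KmsKeyId="alias/my-ebs-key",
)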

Flux7 News

  • Join AWS and Flux7 as they present a one day workshop on how Serverless Technology is impacting business now (and what you need to get started). Serverless technology on AWS is enabling companies by building modern applications with increased agility and lower total cost of ownership. Find additional information and register here.
  • For additional reading on building modern applications with a strong cloud foundation, check out our blog on the benefits of pairing a Landing Zone with CI/CD. Spoiler alert: together they multiply the business’s ability to grow efficiency, productivity, security and time to market.

Subscribe to the Flux7 Blog


from Flux7 DevOps Blog

Could DevOps Best Practices Have Saved Jurassic Park?



This article originally appeared on Medium.

The other day I decided to re-watch Jurassic Park, the pop classic from 1993 in which a theme park suffers a major power breakdown that allows its cloned dinosaurs to rampage freely throughout the park. If you are one of the few who haven't seen it yet, I definitely recommend you do — if nothing else for the realization I had as I watched it again. While the film steers us toward Chaos Theory and the Butterfly Effect, the big a-ha moment I had as I re-watched the film is that Jurassic Park is actually an IT failure story that would have benefited from DevOps best practices.

InGen, the company behind Jurassic Park, is a bioengineering start-up founded by John Hammond. Dedicated to the cloning of extinct life, according to the film, InGen “spared no expense” in building the most technologically advanced theme park in the world: Jurassic Park.

Ultimately, we see in the follow-up film, The Lost World, that the financial fall-out for this start-up and its investors is quite vast:

  • Damaged or destroyed equipment, $17.3M
  • Demolition, de-construction, and disposal of Isla Nublar facilities, $126M
  • Wrongful death settlements, $72.1M
  • “Stock drop from seventy-eight and a quarter to nineteen flat with no good end in sight…”

The film's focus on recreating dinosaurs misguides us, as several problems directly stemming from IT issues could have prevented the downfall of Jurassic Park:

  1. Only one person, Dennis Nedry, the Park's computer programmer, knew how to operate the system. According to the film, there are two million lines of code, yet there is no transparency about what is in the code, which allows Nedry to create Whte rbt.obj, a backdoor that ultimately disables nearly all of Jurassic Park's security. With no code or security reviews, Nedry is able to access and steal from the Jurassic Park embryo chamber. Jurassic Park highlights a lack of business continuity, as one person's departure brings the entire system down. Moreover, it illustrates the most extreme possible ramifications of the lone-ranger IT mentality.

    DevOps, on the other hand, is about creating a set of shared values and processes that encourage development and operations to work jointly within the organization. Applying DevOps security approaches would have established guardrails, accountability, and transparency, better ensuring that Nedry followed a set of agreed-upon practices.

  2. On top of a lack of transparency and accountability, Jurassic Park lacks IT security. Nedry (an anagram of nerdy) is the only one who knows the system password. As we see in the film, the system denies access time and again as Ray Arnold, the site's chief engineer, tries to log in. This delays rescue for many hours as the team decides that a system reboot is their only hope.

    Single sign-on could have solved this issue, allowing Arnold to use his one set of login credentials to access the system. In addition, proper secret management could have helped address the issue, authenticating users like Arnold, providing them with access to sensitive systems, like the Jurassic Park security system. Imagine what a different outcome the film would have had if Arnold was able to quickly access and restore the security system!

  3. Last, but not least, the Jurassic Park system had no preventive controls to protect against a rogue employee scenario. Nedry had complete, unchecked root access that allowed him to turn off all security systems across the Park, all without any alerts or notifications to other staff.

    Applying DevOps security best practices could have prevented this. Role-based access controls and the principle of least privilege would have assigned Nedry access to resources based on his role within the organization, giving him access to only those resources necessary to conduct his job. Moreover, a robust rules engine would have provided centralized visibility and control, giving management the ability to actively monitor the system. Security rules could have alerted management to changes Nedry made to the core system that were not in-line with organizational policy, allowing them to investigate Nedry’s changes well before an incident occurred.

I found it quite amusing that, 25 years later, while a lot more is understood, there is still a lot that IT can learn from the mistakes of InGen, the start-up at the heart of the film. While Jurassic Park makes many points about what a bad idea it is to reintroduce dinosaurs in the modern age, no one ever conducted a root cause analysis. It was poor management of IT that caused the project to fail miserably, and DevSecOps best practices could have saved it.

Do you agree? What ways do you think Jurassic Park could have benefited from DevOps best practices? I look forward to your feedback below. 

Subscribe to the Flux7 Blog

from Flux7 DevOps Blog