
Major Wholesaler Grows Uptime by Refactoring eComm Apps for AWS DevOps


A recent IDC survey of the Fortune 1000 found that the average cost of an infrastructure failure is $100,000 per hour and the average total cost of unplanned application downtime per year is between $1.25 billion and $2.5 billion. Our most recent customer relies heavily on its eCommerce site for business and, knowing the extreme cost of infrastructure failure, turned to cloud-based DevOps. The firm sought to increase uptime, scalability, and security for its eCommerce applications by refactoring them for AWS DevOps.

What is Refactoring?

Refactoring involves re-architecting and often re-coding some portion of an existing application to take advantage of cloud-native frameworks and functionality. While this approach can be time-consuming and resource-intensive, it typically yields the lowest monthly cloud spend: organizations that refactor can modify their applications and infrastructure to take full advantage of cloud-native features and thereby maximize operational cost efficiency in the cloud.

AWS DevOps Refactoring

The company engaged the DevOps consulting team at Flux7 to help architect and build a DevOps platform solution. The team's first goal was to ensure that the applications were architected for high availability at all levels in order to meet the company's aggressive SLA goals. The first step was to build a common DevOps platform for the company's eCommerce applications and migrate the underlying technology to a common stack consisting of Amazon ECS, AWS CloudFormation, and GoCD, an open source build and release tool from ThoughtWorks. (In the process, the team migrated one of the two applications from Kubernetes and Terraform to the new technology stack.)
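The case study doesn't publish the pipeline's code, but a minimal sketch of what a GoCD deployment stage typically does against ECS looks roughly like the following; the cluster, service, task family, and image names are hypothetical.

import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

# Register a new revision of the task definition pointing at the freshly built image.
task_def = ecs.register_task_definition(
    family="ecomm-web",  # hypothetical task family
    containerDefinitions=[{
        "name": "web",
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/ecomm-web:build-42",
        "memory": 512,
        "portMappings": [{"containerPort": 80}],
        "essential": True,
    }],
)

# Point the ECS service at the new revision; ECS rolls the tasks over for us.
ecs.update_service(
    cluster="ecomm-cluster",    # hypothetical cluster name
    service="ecomm-web-svc",    # hypothetical service name
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
)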

As business-critical applications for the future of the retailer, the eCommerce applications needed to provide greater uptime, scalability, and data security than the legacy, on-premises applications from which they were refactored. As a result, the AWS experts at Flux7 built a CI/CD platform using AWS DevOps best practices, effectively reducing manual tasks and thereby increasing the team's ability to focus on strategic work.

Further, the Flux7 DevOps team worked alongside the retailer’s team to:

  • Migrate the refactored applications to new AWS Accounts using the new CI/CD platform;
  • Automate remediation, recovering from failures faster;
  • Create AWS Identity and Access Management (IAM) resources as infrastructure as code (IaC);
  • Deliver the new applications in a Docker container-based microservices environment;
  • Deploy CloudWatch and Splunk for security and log management; and
  • Create DR procedures for the new applications to further ensure uptime and availability.

Moving forward, application updates will be rolled out via a blue-green deployment process that Flux7 helped the firm establish in order to achieve its zero downtime goals.
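As a rough illustration of the blue-green idea (not the firm's actual pipeline): with an Application Load Balancer in front of the containers, the cutover can be a single listener update once the "green" targets pass their health checks. The ARNs below are placeholders.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

LISTENER_ARN = "arn:aws:elasticloadbalancing:...:listener/app/ecomm/..."   # placeholder
GREEN_TG_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/green/..."    # placeholder

# Wait until every target in the green group reports healthy.
waiter = elbv2.get_waiter("target_in_service")
waiter.wait(TargetGroupArn=GREEN_TG_ARN)

# Switch production traffic from the blue target group to the green one in a single call.
elbv2.modify_listener(
    ListenerArn=LISTENER_ARN,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": GREEN_TG_ARN}],
)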

Business Benefits

While the customer's developers are a very advanced team, they were able to further their skills through Flux7 knowledge transfer sessions, learning how to apply DevOps best practices and continue to accelerate adoption of the new AWS DevOps platform. At an estimated downtime cost of 6x the industry average, this firm couldn't withstand the financial or reputational impact of a downtime event. As a result, the team is happy to report that it is meeting its zero downtime SLA objectives, enabling continuous system availability and, with it, growing customer satisfaction.

Subscribe to the Flux7 Blog
 

from Flux7 DevOps Blog

AWS Case Study: Energy Leader Digitizes Library for Analytics, Compliance


The oil and gas industry has a rich history and one that is deeply intertwined with regulation — with Federal and State rules that regulate everything from exploration to production and transportation to workplace safety. As a result, our latest customer had amassed millions of paper documents to ensure its ability to prove compliance. It also maintained files with vast amounts of geological data that served as the backbone of its intellectual property.

With over seven million physical documents saved and filed in deep storage, this oil and gas industry leader called on the AWS consulting services team at Flux7 to help digitize its vast document library. In the process, it also wanted to make it easy to archive documents moving forward and to ensure that its operators could easily search for and find data.

Read the full AWS Case Study here.

Working with AWS Consulting Partner Flux7, the company created a working plan to digitize and catalog its vast document library. AWS had recently announced a new tool at re:Invent, Amazon Textract, which, although still in preview, was ideal for the task.

What is Textract?

For those of you unfamiliar with Amazon Textract, it is a new service that uses machine learning to automatically extract text and data from scanned documents. Unlike Optical Character Recognition (OCR) solutions, it also identifies the contents of fields in forms and information stored in tables, which allows users to conduct full data analytics on documents once they are digitized.
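To make the forms-and-tables point concrete, here is a minimal sketch of a Textract call against a scanned page already sitting in S3; the bucket and object names are made up.

import boto3

textract = boto3.client("textract", region_name="us-east-1")

# Synchronous analysis of a single scanned page stored in S3.
# FORMS returns key/value pairs; TABLES returns cell-level table data.
response = textract.analyze_document(
    Document={"S3Object": {"Bucket": "scanned-docs", "Name": "well-report-001.png"}},  # hypothetical
    FeatureTypes=["FORMS", "TABLES"],
)

# Print the raw text lines; KEY_VALUE_SET and CELL blocks carry the form/table structure.
for block in response["Blocks"]:
    if block["BlockType"] == "LINE":
        print(block["Text"])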

The Textract Proof of Concept

The proof of concept included several dozen physical documents that were scanned and uploaded to S3. From there, Lambda functions were triggered to launch Textract. In addition to presenting the extracted data in Kibana, the solution presents users with URLs to the specific source documents.
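A hedged sketch of that trigger path follows, with hypothetical bucket, SNS topic, and role names standing in for the customer's real ones: the S3 upload event invokes a Lambda function, which starts an asynchronous Textract analysis job; a second function (not shown) would consume the completion notification and index the extracted text for Kibana.

import boto3

textract = boto3.client("textract")

def handler(event, context):
    """Triggered by an S3 ObjectCreated event for each uploaded scan."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        # Kick off an asynchronous analysis job; Textract publishes completion
        # to the SNS topic below (both ARN values are placeholders).
        textract.start_document_analysis(
            DocumentLocation={"S3Object": {"Bucket": bucket, "Name": key}},
            FeatureTypes=["FORMS", "TABLES"],
            NotificationChannel={
                "SNSTopicArn": "arn:aws:sns:us-east-1:123456789012:textract-done",
                "RoleArn": "arn:aws:iam::123456789012:role/textract-publish",
            },
        )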

Because Amazon Textract automatically detects the key elements in a document and the data relationships in forms and tables, it is able to extract data within the context in which it was originally created. With a core set of key parameters, such as revision date, extracted by Textract, operators will be able to search by key business parameters.

Analytics and Compliance

Interfacing with the data via Kibana, end users can now create smart search indexes that allow them to quickly and easily find key business data. Operators can also build automated approval workflows and better meet document archival rules for regulatory compliance. Moreover, the company no longer needs to send an employee to the warehouse to retrieve files, saving time on a labor-intensive task.

At Flux7, we relish the ability to help organizations apply automation and free their employees from manual tasks, replacing them with time to focus on strategic, business-impacting work. Read more Energy industry AWS case studies for best practices in cloud-based DevOps automation for enterprise agility.

For five tips on how to apply DevOps in your Oil, Gas or Energy enterprise, check out this article our CEO, Dr. Suleman, recently wrote for Oilman magazine. (Note that a free subscription is required.) Or, download the full case study here today.

Subscribe to the Flux7 Blog
 

from Flux7 DevOps Blog

IT Modernization and DevOps News Week in Review


The Uptime Institute announced findings of its ninth annual Data Center Survey, unveiling several interesting — and important — data points. Underscoring what many in the industry are feeling about the skill gap, the survey found that 61% of respondents said they had difficulty retaining or recruiting staff — up from 55% a year earlier. And, according to the synopsis, “while the lack of women working in data centers is well-known, the extent of the imbalance is notable” with one-quarter of respondents saying they had no women at all on their design, build or operations teams.

To stay up-to-date on DevOps automation, Cloud and Container Security, and IT Modernization subscribe to our blog:

Subscribe to the Flux7 Blog

When it comes to downtime, outages continue to cause significant problems. Showing little improvement over last year's results, 34% of respondents said they had an outage or severe IT service degradation in the past year, and 10% said their most significant outage cost more than $1 million. When it comes to public cloud, 20% of operators reported that they would be more likely to put workloads in a public cloud if there were more visibility, while 50% of respondents already using public cloud for mission-critical applications said that they do not have adequate visibility.

DevOps News

  • Atlassian has announced Status Embed, a service designed to boost customer experience and communication by displaying the current state of services where customers are most likely to see it, such as your homepage, app or help center.
  • GitHub has brought to market repository templates to make boilerplate code management and distribution a “first-class citizen” on GitHub, according to the company.
  • HashiCorp announced the availability of HashiCorp Nomad 0.9.2, a workload orchestrator for deploying containerized and legacy apps across multiple regions or cloud providers. Nomad 0.9.2 includes preemption capabilities for service and batch jobs.
  • SDXCentral reports that, “VMware is developing a multi-cloud management tool that Joe Kinsella, chief technology officer of CloudHealth at VMware, describes as ‘Google docs for IT management, which is the ability to collaborate and share across an organization.’”

AWS News

  • Amazon announced that AWS Organizations now supports tagging and untagging of AWS accounts, allowing operators to assign custom attributes, or tags, to the AWS accounts they manage with AWS Organizations. According to AWS, the ability to attach tags such as owner name, project, business group, cost center, environment, and other values directly to an AWS account makes it easier for people in the organization to get information on a particular account without having to refer to a separate spreadsheet or other out-of-band tracking method. (A minimal sketch of the API call follows this list.)
  • Also introduced this week is AWS Systems Manager OpsCenter which is designed to help operators view, investigate, and resolve operational issues related to their environment from a central location.
  • Amazon has launched a new service to enhance recovery. Host Recovery for Amazon EC2 will now automatically restart instances on a new host in the event of an unexpected hardware failure on a Dedicated Host. Host Recovery will reduce the need for manual intervention, minimize recovery time and lower the operational burden for instances running on Dedicated Hosts. As a bonus, it has built-in integration with AWS License Manager to automatically track and manage licenses. There are no additional EC2 charges for using Host Recovery.
  • Last, our AWS Consulting team thought this foundational blog on Getting started with serverless was a good read for those of you looking to build serverless applications to take advantage of its agility and reduced TCO.
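For the account-tagging item above, a minimal boto3 sketch looks like the following; the account ID and tag values are made up.

import boto3

org = boto3.client("organizations")

# Attach owner/cost-center tags directly to a member account.
org.tag_resource(
    ResourceId="123456789012",  # the AWS account ID to tag (placeholder)
    Tags=[
        {"Key": "owner", "Value": "ecommerce-team"},
        {"Key": "cost-center", "Value": "CC-1234"},
        {"Key": "environment", "Value": "production"},
    ],
)

# Read the tags back for reporting.
tags = org.list_tags_for_resource(ResourceId="123456789012")["Tags"]
print(tags)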

Flux7 News

  • Join AWS and Flux7 as they present a one-day workshop on how serverless technology is impacting business now (and what you need to get started). Serverless technology on AWS enables companies to build modern applications with increased agility and lower total cost of ownership. Find additional information and register here.
  • Read CEO Dr. Suleman’s InformationWeek article, Five-Step Action Plan for DevOps at Scale in which he discusses how DevOps is achievable at enterprise scale if you start small, create a dedicated team and effectively use technology patterns and platforms.
  • Also published this week is Dr. Suleman's take on servant leadership, which appears in Forbes. In Why CIOs Should Have A Servant-Leadership Approach, he shares why CIOs shouldn't be in a position where they end up needing to justify their efforts. Read the article for the reason why. (No, it isn't the brash conclusion you might think it is.)

Subscribe to the Flux7 Blog

Written by Flux7 Labs

Flux7 is the only Sherpa on the DevOps journey that assesses, designs, and teaches while implementing a holistic solution for its enterprise customers, thus giving its clients the skills needed to manage and expand on the technology moving forward. Not a reseller or an MSP, Flux7 recommendations are 100% focused on customer requirements and creating the most efficient infrastructure possible that automates operations, streamlines and enhances development, and supports specific business goals.

from Flux7 DevOps Blog

Backup and Restore in Same SQL Server RDS


Written by SelvaKumar K, Sr. Database Administrator at Powerupcloud Technologies.

Problem Scenario:

One of our customers reported that a production database had been corrupted and needed to be backed up and restored under a different name on the same RDS instance. This is not possible in AWS RDS; if we try to restore, we get the error below.

Limitations:

Database <database_name> cannot be restored because there is already an existing database with the same family_guid on the instance

You can’t restore a backup file to the same DB instance that was used to create the backup file. Instead, restore the backup file to a new DB instance.

Approaches to Backup and Restore:

Option 1:

1. Import and export into the same RDS instance

Since the database is corrupted, we can't proceed with this option.

Option 2:

2. Backup and restore into a different RDS instance using S3

2.1. Back up from the production RDS instance:

exec msdb.dbo.rds_backup_database
    @source_db_name='selva',
    @s3_arn_to_backup_to='arn:aws:s3:::mmano/selva.bak',
    @overwrite_S3_backup_file=1,
    @type='FULL';

Check the status with the command below:

exec msdb.dbo.rds_task_status @db_name='selva';

2.2. Restore into a different RDS instance, or download the backup from S3 and restore into a local SQL Server instance:

exec msdb.dbo.rds_restore_database
    @restore_db_name='selva',
    @s3_arn_to_restore_from='arn:aws:s3:::mmano/selva.bak';

2.3. In another RDS instance or a local instance

Restore the database into a local dev or staging instance.

a. Create a new database named selva_selva.

b. Using the Generate Scripts wizard, generate the schema scripts and execute them in the newly created database:

Click Database → Tasks → Generate Scripts

Click Next → Select Specific Database Objects → select the required objects

Click Next → Save to a new query window

Click Advanced → set Script Indexes to True (the only required change)

Click Next → Next → once the script is generated, close this window

The scripts will be generated in a query window; select the required database and execute them against the new database.

Direct export and import will not work due to the foreign key relationships, so we need to run the scripts below and save their output in Notepad.

2.4. Prepare the CREATE and DROP scripts for all foreign key constraints using the scripts below

-- SCRIPT TO GENERATE THE CREATION SCRIPT OF ALL FOREIGN KEY CONSTRAINTS
-- Written by Percy Reyes (www.percyreyes.com)

declare @ForeignKeyID int
declare @ForeignKeyName varchar(4000)
declare @ParentTableName varchar(4000)
declare @ParentColumn varchar(4000)
declare @ReferencedTable varchar(4000)
declare @ReferencedColumn varchar(4000)
declare @StrParentColumn varchar(max)
declare @StrReferencedColumn varchar(max)
declare @ParentTableSchema varchar(4000)
declare @ReferencedTableSchema varchar(4000)
declare @TSQLCreationFK varchar(max)

declare CursorFK cursor for select object_id from sys.foreign_keys
open CursorFK
fetch next from CursorFK into @ForeignKeyID
while (@@FETCH_STATUS=0)
begin
    set @StrParentColumn=''
    set @StrReferencedColumn=''
    declare CursorFKDetails cursor for
        select fk.name ForeignKeyName, schema_name(t1.schema_id) ParentTableSchema,
               object_name(fkc.parent_object_id) ParentTable, c1.name ParentColumn,
               schema_name(t2.schema_id) ReferencedTableSchema,
               object_name(fkc.referenced_object_id) ReferencedTable, c2.name ReferencedColumn
        from sys.foreign_keys fk
        inner join sys.foreign_key_columns fkc on fk.object_id=fkc.constraint_object_id
        inner join sys.columns c1 on c1.object_id=fkc.parent_object_id and c1.column_id=fkc.parent_column_id
        inner join sys.columns c2 on c2.object_id=fkc.referenced_object_id and c2.column_id=fkc.referenced_column_id
        inner join sys.tables t1 on t1.object_id=fkc.parent_object_id
        inner join sys.tables t2 on t2.object_id=fkc.referenced_object_id
        where fk.object_id=@ForeignKeyID
    open CursorFKDetails
    fetch next from CursorFKDetails into @ForeignKeyName, @ParentTableSchema, @ParentTableName, @ParentColumn, @ReferencedTableSchema, @ReferencedTable, @ReferencedColumn
    while (@@FETCH_STATUS=0)
    begin
        set @StrParentColumn=@StrParentColumn + ', ' + quotename(@ParentColumn)
        set @StrReferencedColumn=@StrReferencedColumn + ', ' + quotename(@ReferencedColumn)
        fetch next from CursorFKDetails into @ForeignKeyName, @ParentTableSchema, @ParentTableName, @ParentColumn, @ReferencedTableSchema, @ReferencedTable, @ReferencedColumn
    end
    close CursorFKDetails
    deallocate CursorFKDetails

    set @StrParentColumn=substring(@StrParentColumn,2,len(@StrParentColumn)-1)
    set @StrReferencedColumn=substring(@StrReferencedColumn,2,len(@StrReferencedColumn)-1)
    set @TSQLCreationFK='ALTER TABLE '+quotename(@ParentTableSchema)+'.'+quotename(@ParentTableName)
        +' WITH CHECK ADD CONSTRAINT '+quotename(@ForeignKeyName)
        +' FOREIGN KEY('+ltrim(@StrParentColumn)+') '+char(13)
        +'REFERENCES '+quotename(@ReferencedTableSchema)+'.'+quotename(@ReferencedTable)+' ('+ltrim(@StrReferencedColumn)+')'+';'
    print @TSQLCreationFK

    fetch next from CursorFK into @ForeignKeyID
end
close CursorFK
deallocate CursorFK

-- SCRIPT TO GENERATE THE DROP SCRIPT OF ALL FOREIGN KEY CONSTRAINTS

declare @ForeignKeyName varchar(4000)
declare @ParentTableName varchar(4000)
declare @ParentTableSchema varchar(4000)
declare @TSQLDropFK varchar(max)

declare CursorFK cursor for
    select fk.name ForeignKeyName, schema_name(t.schema_id) ParentTableSchema, t.name ParentTableName
    from sys.foreign_keys fk inner join sys.tables t on fk.parent_object_id=t.object_id
open CursorFK
fetch next from CursorFK into @ForeignKeyName, @ParentTableSchema, @ParentTableName
while (@@FETCH_STATUS=0)
begin
    set @TSQLDropFK='ALTER TABLE '+quotename(@ParentTableSchema)+'.'+quotename(@ParentTableName)
        +' DROP CONSTRAINT '+quotename(@ForeignKeyName)+';'
    print @TSQLDropFK
    fetch next from CursorFK into @ForeignKeyName, @ParentTableSchema, @ParentTableName
end
close CursorFK
deallocate CursorFK

Save the output of both scripts, then proceed with the steps below.

2.5. Execute the DROP foreign key scripts in the newly created database.

2.6. Using the Import and Export wizard, transfer the data from the old database to the new database:

Select the data source for the data pull

Select the destination server for the data push

Click Next → Copy data from one or more tables or views

Click Next → Select the required tables to copy

Click Next and verify the source and destination

2.7. Once the data load is complete, execute the CREATE foreign key constraint scripts.

Final Step:

3. Back up the new database and restore it into the production RDS instance with a different name.

from Powerupcloud Tech Blog – Medium

Digital Transformation & The Agile Enterprise in Oil and Gas


According to the World Economic Forum, digital transformation could unlock approximately $1.6 trillion of value for the Oil and Gas industry, its customers and society. This value is derived from greater productivity, better system efficiency, savings from reduced resource usage, and fewer spills and emissions. Yet, the journey to these digital transformation benefits begins with a proverbial first step which can be elusive for large oil and gas enterprises who have vast legacy technologies and complicated organizational structures to navigate.

At Flux7, we are proponents of the Agile Enterprise. While much work has been put into defining what makes an enterprise agile, we are fans of the research by McKinsey, which found a set of five disciplines that agile enterprises share. Defined by their practices more than anything else, these agile organizations deploy an agile culture and agile technology to effectively support their digital transformation initiatives.

Becoming an Agile Enterprise is critically important in the oil and gas industry, where unparalleled transformation is happening at a rapid pace. From new extraction methods to IoT and changing customer expectations, the industry is evolving quickly. For long-term, scalable success, digital efforts must be a cornerstone as organizations transition to becoming an Agile Enterprise.

DevOps for Oil and Gas

Equal parts people, process and technology, DevOps is a key component of marrying digital and agile. With a solid cloud-based DevOps platform, automation to streamline processes and ensure they are followed, and a Center of Excellence in place to help train teams, oil and gas enterprises have a roadmap to digital transformation success with DevOps.

For a more detailed road map to DevOps success across the enterprise, please download our white paper:

5 Steps to Enterprise DevOps at Scale

Let’s explore a few examples of organizations in the energy industry that have applied DevOps best practices to facilitate digital transformation and reach greater enterprise agility:

TechnipFMC, a world leader in project management, engineering, and construction for the energy industry, was looking to ensure compliance and security for cloud computing for its global sites and the perimeter networks that support its client-facing applications. To help accomplish this goal, TechnipFMC wanted to create a consistent, self-service solution to enable its global IT employees to easily provision cloud infrastructure and migrate externally facing Microsoft SharePoint sites to the cloud. With templates and automation, TechnipFMC can now enforce security and compliance standards in every deployment, which enhances overall perimeter network security. In addition, TechnipFMC is expecting to reduce operational costs while growing operational effectiveness. Listen as TechnipFMC’s John Hutchinson shares the experience at re:Invent or read the full Technip story.

A renewable energy leader had two parallel goals: it wanted to use an AWS cloud migration strategy as an opportunity to overhaul its business systems and, in the process, build standardization. Moreover, it aimed to increase developer agility, grow global access for its workers, and decrease capital expenses. Based on its application portfolio TCO analysis, a lift and shift migration approach was pursued. With 80% of its applications now defined by a small number of templates, the company has standardized its software builds, ensuring security best practices are followed by default. The enterprise has accelerated innovation and speed to market while improving operational efficiency. Preview their story here.

Fugro, which collects and provides highly specialized interpretation of oceanic geological data, is able to keep skilled staff onshore using an Internet of Things (IoT) platform model. Called OARS, its cloud-based project provides faster interpretation of data and faster decisions. With continuous delivery of code, its vessels always have the newest software features at their fingertips. And new environments, which previously took weeks to build, now launch in a matter of hours, providing better access to information across global regions. Read the full Fugro case study here.

A global oil field services company was looking to embrace digitalization with a SaaS-model solution that integrated data and business process management while addressing the operational workflows needed for greater scalability and more efficient delivery. The firm implemented a pipeline for delivering AMIs provisioned with Ansible and Docker containers, streamlining complex workflows and allowing the firm to reap efficiencies of scale from automation, meet tight deadlines, and ensure SOC 2 compliance. The firm now has pipelines for delivering the resources and processes needed to build and deploy current and future solutions, ensuring digital transformation in the short and long term.

We are living in an uncertain, complex, and constantly changing world. To stay competitive, oil and gas enterprises are expected to react to change at unprecedented speed, which has ushered in a strong focus on becoming an agile enterprise. With DevOps best practices as the foundation for scalable digital transformation, enterprises can effectively balance stability with ever-evolving customer needs, technologies, and overall market conditions.

For five tips on how to apply DevOps in your Oil, Gas or Energy enterprise, check out this article our CEO, Dr. Suleman, recently wrote for Oilman magazine. (Note that a free subscription is required.)  Or, you can find additional resources on our Energy resource page.

Subscribe to the Flux7 Blog

from Flux7 DevOps Blog

WSFC and AlwaysOn Availability Groups on AWS Cloud


Written by SelvaKumar K, Sr. Database Administrator at Powerupcloud Technologies.

What is Failover Clustering?

A failover cluster is a group of independent computers that work together to increase the availability and scalability of clustered roles (formerly called clustered applications and services). The clustered servers (called nodes) are connected by physical cables and by software. If one or more of the cluster nodes fail, other nodes begin to provide service (a process known as failover). In addition, the clustered roles are proactively monitored to verify that they are working properly. If they are not working, they are restarted or moved to another node.

What is AlwaysOn Availability Group?

An availability group supports a replicated environment for a discrete set of user databases, known as availability databases. You can create an availability group for high availability (HA) or for read-scale. An HA availability group is a group of databases that fail over together. A read-scale availability group is a group of databases that are copied to other instances of SQL Server for read-only workloads.

What we cover in this post:

1. Implementing Windows Server Failover Clustering (WSFC) in the AWS Cloud and configuring an AlwaysOn Availability Group between two Windows servers.

2. Just as on premises, we can install and configure the WSFC cluster and a SQL Server 2017 AlwaysOn Availability Group in the AWS Cloud, exposing the SQL database server to clients through an AG Listener with 99.99% uptime.

3. We implemented SQL Server AlwaysOn with minimally sized instances and SQL Server 2017 Developer Edition, configured without shared storage. If you need shared storage, use the AWS Storage Gateway service.

Architecture:

Implement Prerequisites from AWS:

1. AWS VPC (ag-sql-vpc)

2. AWS subnets (two private and two public subnets)

Launch and Configure the Server Infrastructure:

The setup requires three EC2 instances for the AlwaysOn configuration, placed in different Availability Zones; the minimum instance size for the SQL Server nodes is t2.small.

Because our setup is configured without shared storage, add an additional 50 GB disk to each EC2 instance. In addition, secondary private IPs are needed for the Windows cluster resource and the AG Listener.
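The post assigns these secondary addresses through the console, but as a hedged sketch the same step can be scripted with boto3; the instance ID below is a placeholder.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find the primary network interface of one of the SQL Server nodes.
instance = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])  # placeholder ID
eni_id = instance["Reservations"][0]["Instances"][0]["NetworkInterfaces"][0]["NetworkInterfaceId"]

# Add two secondary private IPs: one for the Windows cluster resource,
# one for the Availability Group Listener.
ec2.assign_private_ip_addresses(
    NetworkInterfaceId=eni_id,
    SecondaryPrivateIpAddressCount=2,
)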

Disk and Secondary IP for the EC2 Instances:

Security Groups:

Each EC2 instance's security group allows all ports between the Active Directory and SQL Server instances.

Implement and Configure Active Directory Domain Services:

The Active Directory domain (agsql.com) is configured on the ag-sql-AD server; add the SQL Server instances (ag-sql-node1 and ag-sql-node2) to the agsql.com domain.

Implement and Configure WSFC:

Multiple reboots are needed once the SQL Server instances are joined to the agsql.com Active Directory domain. Let's start by configuring the Failover Clustering role on each server.

The Failover Clustering role needs to be added on both servers; then start creating the cluster.

Add the SQL Server nodes in the Create Cluster wizard and perform all the validation tests required for Windows cluster creation.

Assign the secondary IPs to the Windows cluster and bring the cluster resources online. Once the cluster resource is ready, start installing SQL Server 2017 Developer Edition on both SQL Server instances in parallel.

Once the SQL Server installation is complete, enable AlwaysOn Availability Groups in the SQL Server service and restart the SQL service on both SQL Server instances.

We are now ready with Windows failover clustering and SQL Server set up on both instances. Start creating the AlwaysOn Availability Group and configure the AG Listener.

Step 1: Specify a name for the AlwaysOn group

Step 2: Connect the replica node for the AlwaysOn group

Step 3: Specify the secondary IP addresses for the AG Listener

The AG Listener (aglistener) will be added to the Active Directory DNS, and it can be reached from outside the cluster to access the SQL Servers through its IP addresses. We will be able to ping or telnet to aglistener from any machine joined to the agsql.com domain.

Step 4: Use the AlwaysOn dashboard to check database synchronization status

DNS and Active Directory Computers configuration isn't covered in this setup; those objects are created automatically on the Active Directory server.

Finally, the AlwaysOn Availability Group is ready in the AWS Cloud!

from Powerupcloud Tech Blog – Medium

Upskill Your Team to Address the Cloud, Kubernetes Skills Gap


This article originally appeared on Forbes.

According to CareerBuilder’s Mid Year Job Forecast, 63% of U.S. employers planned to hire full-time, permanent workers in the second half of 2018. This growing demand coupled with low unemployment is driving a real talent shortage. The technology field, in particular, is experiencing acute pain when it comes to finding skilled talent. Indeed, more than five million IT jobs are expected to be added globally by 2027, reports BusinessInsider.

Of these five million jobs, the two most requested tech skills, according to research by DICE, are Kubernetes and Terraform; the company also found that DevOps Engineer has quickly moved up the ranks of the top-paid IT careers. As companies invest in IT modernization with approaches like Agile and DevOps and technologies like cloud computing and containers, skills to support these initiatives are in increasing demand.

The problem is not set to get better in the near or mid-term with many companies reporting that it’s taking longer to find candidates with the right technology and business skills for driving digital innovation. A survey by OpsRamp found that 94% of HR departments take at least 30 days to fill an empty position and 25% report taking 90 days or more. With internal pressures for innovation that won’t wait out a protracted hiring process, I encourage leaders to look internally, using two key levers to help grow innovation.

Upskill Your Team

One way to work around a skills gap within the organization is to upskill the team. Rather than hiring a new headcount that is already difficult to find, a solution is to train your existing team. (Or a few members of the team who can in turn train others.) While there are a variety of training options — from classroom training to virtual classes and more — at Flux7, our experience has shown that hands-on training works best for technical skills like Terraform or Kubernetes. 

Specifically, a successful model consists of the following:

  • Find a coach that can work hand-in-hand with your team
  • Identify a small but impactful project for the coach and team to work on together with the goal of having the coach train the team along the way
  • Start the project with the coach taking the initial lead, sharing what they are doing, why, and how, while your team shadows
  • Slowly transition over the course of the project to the coach assigning tasks to your team, with your employees ultimately leading tasks and checking in with the coach as needed.


In this way, teams are able to learn in a practical, hands-on manner, taking ownership of the environment as they learn and grow — all while having access to an expert who can guide, correct and reinforce learning.

In addition to gaining much-needed skills in-house, upskilling your existing team has retention benefits. In a survey of tech professionals by DICE, 71% said that training and education are important to them, yet only 40% currently have company-paid training and education. Underscoring the importance of training to technologists, 45% who are satisfied with their job receive training; conversely, only 28% of those who are dissatisfied with their job receive training.

Grow Productivity with Automation

In addition to upskilling your team, automation is important for continuing to expand your capacity. Approaches like DevOps embrace the use of automation to create continuous integration and delivery, in the process reducing handoffs and speeding time to market. In addition, automation can keep employees from working on tactical, repeatable tasks and instead keep them focused on strategic, business-impacting work.

Let me give you an example. I recently had the opportunity to work with a large semiconductor company that sought to bolster its team's cloud, container and Kubernetes talents in order to support a new AWS initiative. Working hands-on in the cloud to automate its pipelines and other processes, the company was able to reduce tasks that formerly took days to mere minutes.

In addition to working elbow-to-elbow with a cloud coach on the project, the company also initiated weekly knowledge transfer sessions with the team to ensure everyone had received the same level of training and was ready for the next week's work. At the end of the project, the team was ready to train others in the organization and felt confident that they were building better products faster, as their time was focused less on tactical work and more on making a strategic impact. Another benefit to the team, and the company as a whole, is that by taking a cross-functional DevOps approach, employees felt that communication improved, making their work more enjoyable.

In a recent poll of over 70,000 developers, HackerRank found that salary wasn't the lead driver of what they look for in a job. Rather, the most important factors for developers, across all job levels and functions, were the opportunity for professional growth and the opportunity to work on interesting problems. The application of automation not only increases developer productivity and code throughput but also provides the space to work on interesting projects, which leads to greater job satisfaction and retention.

With competition growing for employees skilled in Kubernetes, Terraform, DevOps and more, growing your own is an increasingly attractive approach. UC Berkeley found that the average cost to hire a new professional employee may be as high as $7,000 (while replacement costs can be as great as 2.5x salary), not to mention the lost opportunity costs as organizations place projects on hold while they vie for skilled talent. Upskilling employees, combined with greater automation, can increase code throughput and get more projects to market faster, maximizing near-term opportunity. Just as importantly, presenting employees with new skills and the opportunity to work on interesting problems has been shown to increase job satisfaction and retention.

Learn more about addressing the skills gap, building cloud-native infrastructure and more on the Flux7 DevOps blog. Subscribe today:

Subscribe to the Flux7 Blog

from Flux7 DevOps Blog

Sponsored Post: Etleap, PerfOps, InMemory.Net, Triplebyte, Stream, Scalyr


Who’s Hiring? 

  • Triplebyte lets exceptional software engineers skip screening steps at hundreds of top tech companies like Apple, Dropbox, Mixpanel, and Instacart. Make your job search O(1), not O(n). Apply here.
  • Need excellent people? Advertise your job here! 

Fun and Informative Events

  • Advertise your event here!

Cool Products and Services

  • For heads of IT/Engineering responsible for building an analytics infrastructure, Etleap is an ETL solution for creating perfect data pipelines from day one. Unlike older enterprise solutions, Etleap doesn’t require extensive engineering work to set up, maintain, and scale. It automates most ETL setup and maintenance work, and simplifies the rest into 10-minute tasks that analysts can own. Read stories from customers like Okta and PagerDuty, or try Etleap yourself.
  • PerfOps is a data platform that digests real-time performance data for CDN and DNS providers as measured by real users worldwide. Leverage this data across your monitoring efforts and integrate with PerfOps’ other tools such as Alerts, Health Monitors and FlexBalancer – a smart approach to load balancing. FlexBalancer makes it easy to manage traffic between multiple CDN providers, API’s, Databases or any custom endpoint helping you achieve better performance, ensure the availability of services and reduce vendor costs. Creating an account is Free and provides access to the full PerfOps platform.
  • InMemory.Net provides a Dot Net native in memory database for analysing large amounts of data. It runs natively on .Net, and provides a native .Net, COM & ODBC apis for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net
  • Build, scale and personalize your news feeds and activity streams with getstream.io. Try the API now in this 5-minute interactive tutorial. Stream is free up to 3 million feed updates, so it's easy to get started. Client libraries are available for Node, Ruby, Python, PHP, Go, Java and .NET. Stream is currently also hiring DevOps and Python/Go developers in Amsterdam. More than 400 companies rely on Stream for their production feed infrastructure; this includes apps with 30 million users. With your help we'd like to add a few zeros to that number. Check out the job opening on AngelList.
  • Scalyr is a lightning-fast log management and operational data platform. It's a tool (actually, multiple tools) that your entire team will love. Get visibility into your production issues without juggling multiple tabs and different services: all of your logs, server metrics and alerts are in your browser and at your fingertips. Loved and used by teams at Codecademy, ReturnPath, Grab, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.
  • Advertise your product or service here!

If you are interested in a sponsored post for an event, job, or product, please contact us for more information.


Make Your Job Search O(1) — not O(n)

Triplebyte is unique because they’re a team of engineers running their own centralized technical assessment. Companies like Apple, Dropbox, Mixpanel, and Instacart now let Triplebyte-recommended engineers skip their own screening steps.

We found that High Scalability readers are about 80% more likely to be in the top bracket of engineering skill.

Take Triplebyte’s multiple-choice quiz (system design and coding questions) to see if they can help you scale your career faster.


The Solution to Your Operational Diagnostics Woes

Scalyr gives you instant visibility of your production systems, helping you turn chaotic logs and system metrics into actionable data at interactive speeds. Don’t be limited by the slow and narrow capabilities of traditional log monitoring tools. View and analyze all your logs and system metrics from multiple sources in one place. Get enterprise-grade functionality with sane pricing and insane performance. Learn more today


If you are interested in a sponsored post for an event, job, or product, please contact us for more information.

from High Scalability

IT Modernization and DevOps News Week in Review


Palo Alto Networks made the most of a short week by announcing its plan to acquire container security company Twistlock for $410 million. It also announced plans to acquire serverless security company PureSec and launched Prisma, its new cloud security service. With cloud and container security top of mind for many, the acquisitions will prove to be valuable assets as enterprises seek to build security in.

 To stay up-to-date on DevOps automation, Cloud and Container Security, and IT Modernization subscribe to our blog:

Subscribe to the Flux7 Blog

DevOps News

  • Red Hat Ansible Tower 3.5 is now generally available. The release now includes support for RHEL 8, external credential vaults via credential plugins, and Become plugins. In addition, Red Hat noted that the Ansible Tower 3.5 release saw over 160 issues closed.
  • Red Hat Ansible Engine 2.8 is now available. In addition to several enhancements, the release includes new features such as Ansible content (Collections), BECOME as the default privilege escalation path, removal of the dependency on paramiko, BECOME plugins, and other notable improvements and changes.
  • TeamCity 2019.1, the first major release of this year, is here. The release features a redesigned UI, native GitLab integration, and support for GitLab and Bitbucket server pull requests as well as token-based authentication, detection and reporting of Go tests, faster build agent upgrades, and AWS Spot Fleet requests.

AWS News

Flux7 News

  • Join AWS and Flux7 as they present a one-day workshop on how Serverless Technology is impacting business now (and what you need to get started). Serverless technology on AWS enables companies to build modern applications with increased agility and lower total cost of ownership. Find additional information and register here.
  • Flux7 has been ranked by Growjo as one of the fastest growing companies in the Austin area. Read more about Flux7’s customer and business momentum.

Subscribe to the Flux7 Blog

Written by Flux7 Labs

Flux7 is the only Sherpa on the DevOps journey that assesses, designs, and teaches while implementing a holistic solution for its enterprise customers, thus giving its clients the skills needed to manage and expand on the technology moving forward. Not a reseller or an MSP, Flux7 recommendations are 100% focused on customer requirements and creating the most efficient infrastructure possible that automates operations, streamlines and enhances development, and supports specific business goals.

from Flux7 DevOps Blog

Growjo Ranks Flux7 Among Fastest Growing Austin Companies


Growjo is on a mission to identify the top growing companies across regions of the US, and we're excited to announce that Flux7 has been ranked among the fastest growing companies in the Austin area. Flux7's rank of #88 is based on growth indicators and a predictive analysis algorithm unique to Growjo that not only creates the most complete list of the fastest growing companies but is also a great predictor of future growth.

In addition to the Austin ranking, the Flux7 DevOps consulting services firm has been named to Growjo’s Tech Services, State of Texas, and overall 10k list of fastest growing companies. Calculated from high growth indicators that include employee size, brand awareness, funding, acquisitions, hiring plans, new locations and additional trigger events, the Growjo formula predicts that Flux7 is both growing at an increased rate and is poised to grow significantly through 2019 and beyond.

In response to the ranking, Aater Suleman, Flux7 co-founder and CEO, said “Flux7 succeeds when our customers succeed. We seek to make it possible for organizations to experiment more, fail cheap, and measure results accurately through an innovation lab strategy. Today’s ranking illustrates the power of this approach combined with Flux7 values of humbleness, transparency, and innovation to solve business challenges.”

At Flux7, we view customer growth as a significant vote of confidence; this year we are humbled to have so many new and repeat customers loudly affirming their confidence in our employees and our approach to solving business challenges. We are truly honored to be an integral part of our customers' digital transformations as we saw customer contracts grow 247% year-over-year in the first quarter of 2019. This 2019 growth closely follows our 2018 year-ending cumulative three-year revenue growth of 547%.

Since its inception, Flux7 has established itself as a thought leader and valuable partner for enterprise and midmarket businesses aiming to modernize their IT practices and retain management of their own systems. Flux7 has been able to establish a unique position in the market by filling a need for enterprises to make rapid modernization progress while learning new technical skills for greater business agility.

With its Enterprise DevOps Framework, Flux7 helps organizations apply DevOps methodologies to reap benefits like greater innovation, enhanced security, increased scalability and more.

According to Growjo, inclusion in the Growjo 10000 is a better indicator of success than any other “fast company list”. Want to grow with us? Check out our career opportunities here: https://www.flux7.com/careers/. Interested in having our DevOps consulting team help with your IT modernization project? Reach out to us today.

Subscribe to the Flux7 Blog

from Flux7 DevOps Blog