
Running Red Hat Enterprise Linux as Kubernetes Worker Nodes -XI


Priyanka Sharma

In our previous blogs, we covered deployment strategies, networking, and logging for Kubernetes clusters. By default, AWS provides EKS-optimized AMIs for EKS workers, which use Amazon Linux as the operating system. In this article, we will discuss how to configure RHEL workers for an AWS EKS cluster.

  • Red Hat Enterprise Linux 7.6
  • Kubernetes 1.13 on AWS EKS. We have opted for private subnets for the EKS control plane. To provision a new EKS cluster, use the below command:
aws eks create-cluster --name <CLUSTER_NAME> --role-arn arn:aws:iam::<ACCOUNT>:role/<EKS_SERVICE_ROLE> --resources-vpc-config subnetIds=<PRIV_SUBNETA>,<PRIV_SUBNETB>,<PRIV_SUBNETC>,securityGroupIds=<EKS_SECURITYGROUP_ID>,endpointPublicAccess=false,endpointPrivateAccess=true --region ap-south-1

If running an old version, upgrade to the latest one by using the below command:

aws eks update-cluster-version --name <CLUSTER_NAME> --client-request-token updating-version --kubernetes-version 1.13 --region ap-south-1

Check the status using below command:

aws eks describe-cluster --name <CLUSTER_NAME> --query cluster.status --region ap-south-1

Update the Kube Config. Ensure you are using the latest version of AWS CLI. In our case, it is 1.16.195.

aws eks --region ap-south-1 update-kubeconfig --name <CLUSTER_NAME>
  • Provision RHEL 7.6 as standalone EC2 Server.
  • Execute a shell script to make it EKS-optimized. The script is available in the Git repo.
  • Take an AMI of the RHEL server.
  • Pass the AMI to the CF template parameters to provision the worker nodes.
  • Create AWS Auth ConfigMap and pass the ARN of the Instance Role.
  • See the RHEL server registering as workers.
  • Switch to EC2 Console and Provision an EC2 Server with RHEL 7.6 AMI.
  • Install the dependencies using the below commands:
yum install -y git
yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
yum install -y python-pip
pip install --upgrade awscli
pip install --upgrade aws-cfn-bootstrap
mkdir -p /opt/aws/bin
ln -s /usr/bin/cfn-signal /opt/aws/bin/cfn-signal
yum install -y http://mirror.centos.org/centos/7/extras/x86_64/Packages/container-selinux-2.74-1.el7.noarch.rpm   # can be replaced with the version required by Docker
sed -i 's/enforcing/permissive/g' /etc/selinux/config   # if SELinux is not set to permissive, Docker containers will fail to provision with a Permission Denied error
  • Clone the git repo and Execute install-worker.sh.
git clone https://github.com/powerupcloud/aws-eks-rhel-workers.git
cd aws-eks-rhel-workers
sh install-worker.sh
  • Go to EC2 Console and create an AMI of this server.
  • Provision a Cloudformation Stack with the below template provided by AWS:
  • In the parameter “NodeImageId”, input the Image ID of the AMI created in the previous step.

Wait for the bootstrap script to execute inside the worker node. Get the instance role ARN from the CloudFormation stack outputs and provide it as the value of rolearn in the below YAML template.

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes

Execute “kubectl apply -f aws-auth.yaml”.

Run “kubectl get nodes”. The RHEL worker node is registered with the EKS Cluster.

And that’s all. At this point, we have RHEL 7.6 worker nodes running in K8s Cluster.


from Powerupcloud Tech Blog – Medium

DynamoDB attributes Batch and Bulk Update


Yogitha O

Written by Selvakumar K, Associate Tech Lead — DBA & Kavitha L, Software Developer at Powerupcloud Technologies.

For a couple of weeks, my colleague and I struggled to pull our work together and write up our learnings and solutions to help others.

In the beginning, we wrote scripts that fortunately worked for the Dev and QA DynamoDB environments, but for real-time scenarios where there could be numerous records (say 3 crores, i.e. 30 million) in the DynamoDB table, that solution would not work. After some days of research, we arrived at a solution using Python.

Problem Statement

Retrieve the primary key from the Dynamodb table for the particular policy number and update the dependent items in the excel sheet.

Problems and Limitations in DynamoDB

  1. DynamoDB read and write capacity was limited to 20, so we raised the provisioned capacity
  2. Performing an update in one shot is difficult with a huge data size (e.g. comparing the policy numbers from an excel sheet with the DynamoDB table). The BatchGetItem operation can retrieve a maximum of 100 items at a time, and the total size of all the items retrieved cannot exceed 16 MB
  3. Batch-wise updates consume more memory, so we increased the instance type before updating the items
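Because of the 100-item cap, the policy keys have to be fetched in chunks. A minimal sketch of the chunking logic (table and key names are hypothetical; the actual boto3 call is shown only as a comment):

```python
def chunk_keys(keys, size=100):
    """Split primary keys into BatchGetItem-sized chunks (at most 100 each)."""
    return [keys[i:i + size] for i in range(0, len(keys), size)]

policy_keys = [{"policy_number": str(n)} for n in range(250)]
batches = chunk_keys(policy_keys)
print(len(batches))  # 250 keys -> 3 chunks of 100, 100 and 50
# Each chunk would then be fetched with (boto3 assumed):
# dynamodb.batch_get_item(RequestItems={"policy-table": {"Keys": batch}})
```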

How does it work?

First, we read the excel data and convert it into a Python dictionary.


In the above scenario, each ID has multiple policy records; we fetch a single policy ID from the excel sheet and store it in memory.

If an ID has more than one policy record, we need to separate them and retrieve the policy ID for the update.
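The read-and-group step can be sketched as below; for brevity the sheet is a CSV string here (a real .xlsx would be read with openpyxl or pandas), and the column names are hypothetical:

```python
import csv
import io

# Stand-in for the excel sheet: one row per (policy_number, detail) pair.
sheet = io.StringIO("policy_number,detail\nP100,a\nP100,b\nP200,c\n")

# Group rows into a dictionary keyed by policy number.
policies = {}
for row in csv.DictReader(sheet):
    policies.setdefault(row["policy_number"], []).append(row["detail"])

print(policies)  # {'P100': ['a', 'b'], 'P200': ['c']}
```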

Before comparing the policy numbers with the DynamoDB table, establish connectivity with DynamoDB.

Comparing the policy number from excel and DynamoDB table to fetch the ID of the DynamoDB table.

Finally, update the records in two batches: first the IDs that have more than one policy record, and then the IDs that have a single policy record.
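The two-batch split can be sketched as follows; the record shapes and names are hypothetical, and the per-record write is only indicated in a comment:

```python
from collections import defaultdict

def split_batches(records):
    """Partition (id, policy) pairs into IDs holding several policies and IDs holding one."""
    by_id = defaultdict(list)
    for item_id, policy in records:
        by_id[item_id].append(policy)
    multi = {i: p for i, p in by_id.items() if len(p) > 1}
    single = {i: p[0] for i, p in by_id.items() if len(p) == 1}
    return multi, single

records = [("id1", "P100"), ("id1", "P101"), ("id2", "P200")]
multi, single = split_batches(records)
print(multi, single)  # {'id1': ['P100', 'P101']} {'id2': 'P200'}
# Each entry would then be written with table.update_item(...) (boto3 assumed).
```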

Updated DynamoDB Table


Frequent modification of data is very important and often required for customer business. Python is a convenient language for automating such update tasks. In the above experiment, we compared two different datasets and updated the records in the DynamoDB table.

from Powerupcloud Tech Blog – Medium

Backup and Restore in Same SQL Server RDS


Written by SelvaKumar K, Sr. Database Administrator at Powerupcloud Technologies.

Problem Scenario :

One of our customers reported that a production database had been corrupted and needed to be backed up and restored with a different name in the same RDS instance. This is not possible in AWS RDS; if we try to restore, we get the below error.

Limitations :

Database <database_name> cannot be restored because there is already an existing database with the same family_guid on the instance

You can’t restore a backup file to the same DB instance that was used to create the backup file. Instead, restore the backup file to a new DB instance.

Approaches to Backup and Restore :

Option 1: Import and export into the same RDS instance

The database is corrupted, so we can't proceed with this approach.

Option 2: Backup and restore into a different RDS instance using S3

2.1. Backup from the production RDS instance

exec msdb.dbo.rds_backup_database
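The procedure's parameters were shown in a screenshot in the original post; a typical invocation looks like the below sketch, where the bucket name is a placeholder and the SQLSERVER_BACKUP_RESTORE option group is assumed to be attached to the instance:

```sql
exec msdb.dbo.rds_backup_database
    @source_db_name='selva_selva',
    @s3_arn_to_backup_to='arn:aws:s3:::<bucket-name>/selva_selva.bak',
    @overwrite_S3_backup_file=1;
```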





Check the status with the below command:

exec msdb.dbo.rds_task_status @db_name='selva_selva';

2.2. Restore into a different RDS instance, or download from S3 and restore into a local SQL Server instance

exec msdb.dbo.rds_restore_database
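As with the backup, the restore parameters were shown in a screenshot; a typical invocation (bucket name is a placeholder) looks like:

```sql
exec msdb.dbo.rds_restore_database
    @restore_db_name='selva_selva_restore',
    @s3_arn_to_restore_from='arn:aws:s3:::<bucket-name>/selva_selva.bak';
```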



2.3. In another RDS instance or a local instance

Restore the database into local dev or staging instance

a. Create a new database as selva_selva

b. Using the Generate Scripts wizard, generate the scripts and execute them in the newly created database

Click Database → Tasks → Generate Scripts

Click Next → Select Specific Database Objects → Select required objects

Click Next → Save to a new query window

Click Advanced → Change "Script Indexes" to True (this is the only required change)

Click Next → Next → Once the script is generated close this window

The scripts will be generated in a query window; select the required database and execute the scripts.

Direct export and import will not work due to the foreign key relationships, so we need to run the below scripts and save the outputs in Notepad.

2.4. Prepare the create and drop foreign key constraint scripts for the data load using the below scripts


-- Script 1: generate the CREATE foreign key statements
-- Written by Percy Reyes
declare @ForeignKeyID int
declare @ForeignKeyName varchar(4000)
declare @ParentTableName varchar(4000)
declare @ParentColumn varchar(4000)
declare @ReferencedTable varchar(4000)
declare @ReferencedColumn varchar(4000)
declare @StrParentColumn varchar(max)
declare @StrReferencedColumn varchar(max)
declare @ParentTableSchema varchar(4000)
declare @ReferencedTableSchema varchar(4000)
declare @TSQLCreationFK varchar(max)

declare CursorFK cursor for select object_id -- , name, object_name(parent_object_id)
from sys.foreign_keys
open CursorFK
fetch next from CursorFK into @ForeignKeyID
while (@@FETCH_STATUS=0)
begin
    set @StrParentColumn=''
    set @StrReferencedColumn=''
    declare CursorFKDetails cursor for
    select fk.name ForeignKeyName, schema_name(t1.schema_id) ParentTableSchema,
           object_name(fkc.parent_object_id) ParentTable, c1.name ParentColumn,
           schema_name(t2.schema_id) ReferencedTableSchema,
           object_name(fkc.referenced_object_id) ReferencedTable, c2.name ReferencedColumn
    from -- sys.tables t inner join
    sys.foreign_keys fk
    inner join sys.foreign_key_columns fkc on fk.object_id=fkc.constraint_object_id
    inner join sys.columns c1 on c1.object_id=fkc.parent_object_id and c1.column_id=fkc.parent_column_id
    inner join sys.columns c2 on c2.object_id=fkc.referenced_object_id and c2.column_id=fkc.referenced_column_id
    inner join sys.tables t1 on t1.object_id=fkc.parent_object_id
    inner join sys.tables t2 on t2.object_id=fkc.referenced_object_id
    where fk.object_id=@ForeignKeyID
    open CursorFKDetails
    fetch next from CursorFKDetails into @ForeignKeyName, @ParentTableSchema, @ParentTableName, @ParentColumn, @ReferencedTableSchema, @ReferencedTable, @ReferencedColumn
    while (@@FETCH_STATUS=0)
    begin
        set @StrParentColumn=@StrParentColumn + ', ' + quotename(@ParentColumn)
        set @StrReferencedColumn=@StrReferencedColumn + ', ' + quotename(@ReferencedColumn)
        fetch next from CursorFKDetails into @ForeignKeyName, @ParentTableSchema, @ParentTableName, @ParentColumn, @ReferencedTableSchema, @ReferencedTable, @ReferencedColumn
    end
    close CursorFKDetails
    deallocate CursorFKDetails
    set @StrParentColumn=substring(@StrParentColumn,2,len(@StrParentColumn)-1)
    set @StrReferencedColumn=substring(@StrReferencedColumn,2,len(@StrReferencedColumn)-1)
    set @TSQLCreationFK='ALTER TABLE '+quotename(@ParentTableSchema)+'.'+quotename(@ParentTableName)+' WITH CHECK ADD CONSTRAINT '+quotename(@ForeignKeyName)
        + ' FOREIGN KEY('+ltrim(@StrParentColumn)+') '+ char(13) +'REFERENCES '+quotename(@ReferencedTableSchema)+'.'+quotename(@ReferencedTable)+' ('+ltrim(@StrReferencedColumn)+')' +';'
    print @TSQLCreationFK
    fetch next from CursorFK into @ForeignKeyID
end
close CursorFK
deallocate CursorFK


-- Script 2: generate the DROP foreign key statements
declare @ForeignKeyName varchar(4000)
declare @ParentTableName varchar(4000)
declare @ParentTableSchema varchar(4000)
declare @TSQLDropFK varchar(max)

declare CursorFK cursor for select fk.name ForeignKeyName, schema_name(t.schema_id) ParentTableSchema, t.name ParentTableName
from sys.foreign_keys fk inner join sys.tables t on fk.parent_object_id=t.object_id
open CursorFK
fetch next from CursorFK into @ForeignKeyName, @ParentTableSchema, @ParentTableName
while (@@FETCH_STATUS=0)
begin
    set @TSQLDropFK ='ALTER TABLE '+quotename(@ParentTableSchema)+'.'+quotename(@ParentTableName)+' DROP CONSTRAINT '+quotename(@ForeignKeyName) + ';'
    print @TSQLDropFK
    fetch next from CursorFK into @ForeignKeyName, @ParentTableSchema, @ParentTableName
end
close CursorFK
deallocate CursorFK

Save the output of the above scripts and proceed with the below steps

2.5. Execute the drop foreign key scripts in the newly created database

2.6. Using the import and export wizard, transfer the data from the old database to the new database

Select the data source for the data pull

Select the destination server for the data push

Click Next → Copy data from one or more tables or views

Click Next → Select the required tables to copy the data

Click Next and Verify the Source and Destination

2.7. Once the data load is completed, execute the create foreign key constraint scripts

Final Step:

3. Backup the database and restore it into the production RDS instance with a different name

from Powerupcloud Tech Blog – Medium

WSFC and AlwaysOn Availability Groups on AWS Cloud


Written by SelvaKumar K, Sr. Database Administrator at Powerupcloud Technologies.

What is Failover Clustering?

A failover cluster is a group of independent computers that work together to increase the availability and scalability of clustered roles (formerly called clustered applications and services). The clustered servers (called nodes) are connected by physical cables and by software. If one or more of the cluster nodes fail, other nodes begin to provide service (a process known as failover). In addition, the clustered roles are proactively monitored to verify that they are working properly. If they are not working, they are restarted or moved to another node

What is AlwaysOn Availability Group?

An availability group supports a replicated environment for a discrete set of user databases, known as availability databases. You can create an availability group for high availability (HA) or for read-scale. An HA availability group is a group of databases that fail over together. A read-scale availability group is a group of databases that are copied to other instances of SQL Server for read-only workload

What we cover in this article:

  1. Implementing Windows Server Failover Clustering (WSFC) in AWS Cloud and configuring an AlwaysOn Availability Group between two Windows servers

  2. As with an on-premises server, we can install and configure the WSFC cluster and the SQL Server 2017 AlwaysOn Availability Group in AWS Cloud, exposing the SQL database server to the outside world through an AG listener with 99.99% uptime

  3. We implemented SQL Server AlwaysOn with minimal-configuration instances and SQL Server 2017 Developer Edition. We configured AlwaysOn without shared storage; if you want shared storage, use the AWS Storage Gateway service


Implement Prerequisites from AWS :

  1. AWS VPC ( ag-sql-vpc )

  2. AWS Subnets ( two private and two public subnets )

Launch and Configure the server Infrastructure :

The AlwaysOn setup requires three EC2 instances in different Availability Zones; the minimum requirement for the SQL Server instances is t2.small.

Our setup is configured without shared storage, so add an additional 50 GB disk on each EC2 instance. In addition, secondary IPs are needed for the Windows cluster resource and the AG listener.

Disk and Secondary IP for the EC2 Instances :

Security Groups :

Each EC2 instance's security group allows all ports between the Active Directory and SQL Server instances.

Implement and configure Active Directory Domain Service :

The Active Directory domain ( agsql.com ) is configured on the ag-sql-AD server; add the SQL Server instances ( ag-sql-node1 and ag-sql-node2 ) to the agsql.com domain.

Implement and Configure WSFC :

Multiple reboots are needed once the SQL Server instances are joined to the agsql.com Active Directory domain. Let's start configuring the failover clustering role on each server.

The failover clustering role needs to be added on both servers; then start creating the cluster.

Add the SQL Server nodes in Create Cluster and perform all the necessary validation tests for Windows cluster creation.

Assign the secondary IPs to the Windows cluster and bring the cluster resources online. Once the cluster resource is ready, start installing SQL Server 2017 Developer Edition on both SQL Server instances in parallel.

Once the SQL Server installation is completed, enable AlwaysOn Availability Groups in the SQL Server service and restart the SQL service on both SQL Server instances.

Now we are ready with Windows failover clustering and SQL Server set up on both instances. Start creating the AlwaysOn Availability Group and configure the AG listener.

Step 1: Specify Name for the Always on Group

Step 2: Connect the replica Node for AlwaysOn Group

Step 3: Specify the secondary IP addresses for AG Listener

The aglistener will be added to the Active Directory DNS and will be reachable from outside to access the SQL Servers via the respective IP addresses. We should be able to ping or telnet aglistener from the agsql.com domain account.

Step 4: AlwaysOn Dashboard to check Database Sync Status
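Besides the dashboard, the synchronization state can also be queried with T-SQL; a minimal sketch using the AlwaysOn DMVs (run on the primary replica):

```sql
select ag.name as ag_name,
       ar.replica_server_name,
       db_name(drs.database_id) as database_name,
       drs.synchronization_state_desc,
       drs.synchronization_health_desc
from sys.dm_hadr_database_replica_states drs
join sys.availability_replicas ar on drs.replica_id = ar.replica_id
join sys.availability_groups ag on drs.group_id = ag.group_id;
```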

DNS and Active Directory Computers configuration is not covered in this setup; those objects are created automatically on the Active Directory server.

Finally, AlwaysOn Availability Group Ready in AWS Cloud !!!

from Powerupcloud Tech Blog – Medium

TFS Integration with Jenkins


Written by AZHAGIRI PANNEERSELVAM, Associate Architect at Powerupcloud Technologies

What is TFS (Team Foundation Server)?

Team Foundation Server is a Microsoft product which provides source code management, reporting, requirements management, project management, automated builds, lab management, testing, and release management capabilities. It covers the entire Application Lifecycle Management. TFS can be used as a back end to numerous integrated development environments but is designed to provide the most benefit by serving as the back end to Microsoft Visual Studio or Eclipse.

What is Jenkins?

Jenkins is an open-source automation tool written in Java with plugins built for continuous integration. Jenkins is used to build and test software projects continuously, making it easier for developers to integrate changes and for users to obtain a fresh build. It also allows you to continuously deliver your software by integrating with a large number of testing and deployment technologies.


Assume that we have a VPN tunnel between AWS and on-premise. The requirement is to download the code from on-premise TFS server via Jenkins and build the code using MSBuild and deploy to the AWS EC2 instances via AWS code deploy.

Proposed Design for the Integration

AWS services we have used in our design:

1. EC2 instances ( 2 Windows servers, one with IIS and code deploy agent installed, the other Windows server with Jenkins and Visual studio build tools installed).

2. AWS Code Deploy.

3. Simple Storage Service (S3).

4. IAM (Instance Role to upload the revision and call code deploy).

Note: The MSBuild plugin was not working as expected. Hence, we installed the Visual Studio Build Tools on the server and mapped the installation path to make it work.

Other necessary details

1. TFS server login details (a login needs to be created on the TFS server to fetch the latest code from TFS).

2. Microsoft account Login details to download the visual studio tool for Jenkins

3. TFS Plugin on the Jenkins server

Step — 1 Installing Visual Studio Build tools on the Jenkins server.

Download the Build Tools for Visual Studio 2017 (version 15.9) from the official Microsoft website: https://my.visualstudio.com/Downloads?q=visual%20studio%202017&wt.mc_id=o~msft~vscom~older-downloads

Note:- Use the Microsoft Login details to download the exe file.

While installing the Visual Studio Build Tools, make sure you have installed the following components so that the Jenkins MSBuild plugin works as expected.

· .NET desktop Build tools

· Data storage and processing build tools

· .NET Core Build tools

Visual Studio Installation location

Base Location: C:\Program Files(x86)\Microsoft Visual Studio\2017\BuildTools

Note:- Make note of the Location. We will be using it in the steps below.

Step — 2 Install necessary plugins on the Jenkins server

Plugin Name

MSBuild Plug-in

Team Foundation Server Plug-in

AWS CodeDeploy Plugin for Jenkins

Mass Passwords Plugin

PostBuildScript Plugin

Use Plugin Manager to install the above Plugins on the Jenkins server.

Step — 3 Plugin Configuration in Jenkins

Once you have installed the plugins from the Jenkins Plugin Manager, we need to configure the necessary ones.

MSBuild Plugin Path configuration

In the Global Tool Configuration page (located in Manage Jenkins), find the MSBuild installation configuration. It looks as shown below. Make note of the Name; we will be using it in an upcoming step.

The fields are explained below.

Name: Name of MSBuild configuration

Path to MSBuild: Find the MSBuild exe file under the base location and paste its path here.

Default parameters: Leave as blank

Once you have updated the details, save the configuration.

Step — 4 Integration of Jenkins with Team Foundation Server and build with MSBuild Plugin

Team Foundation Server login details configuration in Job

While creating a new freestyle project (job) in Jenkins, in the Source Code Management section, select Team Foundation Version Control (TFVC) and give the following input to connect to the TFS project and get the latest code.

The fields are explained below.

Collection URL: The URL to the team project collection

Project path: The path to the TFVC project must start with ‘$/’

Credentials: Username and password to connect TFS project.

Update the above details in the configuration and leave the other option as default.

Note: Make sure you have unchecked the "Use update" option.

MSBuild configuration in job

In the Build section, we build the code with the MSBuild plugin. Choose "Build a Visual Studio project or solution using MSBuild".

Once you have clicked it, the Build box is added as below. Select the version of MSBuild; make sure it is the one you configured in the previous step.

Example Input for MSBuild configuration

MSBuild Version: MS Build VS 2015

MSBuild Build File: ${WORKSPACE}\example.sln (make sure you select the .sln file to build the code).

Command Line Arguments

/p:Configuration=PROD /p:OutDir=”C:\Program Files (x86)\Jenkins\workspace\exampleJob\Output”

That’s it. Save the Job and execute it.

Now you have your build in the following location:

C:\Program Files (x86)\Jenkins\workspace\exampleJob\Output

Hope you found it useful. Happy Integrating 🙂

from Powerupcloud Tech Blog – Medium

Granting AWS Console Access to OnPrem Active Directory Users through AWS Single Sign-On


AWS Single Sign-On (SSO) is a cloud SSO service that makes it easy to centrally manage SSO access to multiple AWS accounts and business applications. This blog demonstrates how you can avoid the creation of an additional IAM user to grant AWS console access to a corporate user. This can be achieved through the use of AWS Single Sign-On service.

Here are a few benefits you can achieve by following this article:

  • A centralized place to grant access to multiple accounts.
  • Reduced cost of maintenance of operating your own SSO infrastructure.
  • Ease of access for users who can access their assigned accounts through their corporate credentials.


  • Active Directory Configured on OnPrem.
  • One AWS Master Account with multiple organizations.
  • VPN Tunnel established between the OnPrem network and AWS. Configure the route tables accordingly. Ensure you are provisioning the RODC server in the same subnet which has the connectivity to the AD sitting OnPrem.
  • Ensure the following ports are allowed on AD: TCP 53/UDP 53/TCP 389/UDP 389/TCP 88/UDP 88

Problem Statement:

The OnPrem Active Directory contains huge data of Corporate Users. We had to provide AWS Console access to certain existing users/groups of AD.


One common and traditional way to provide console access is to create an IAM user for each corporate user and share the access details with them. This requires manual effort to create multiple IAM users, and each user has to remember separate AWS credentials every time they log into the console. Another solution is AWS Single Sign-On, where users can use their AD credentials to log into the AWS console. If we route all requests to the OnPrem AD, it might increase the load on the AD server. As a solution, we created an RODC domain controller of the OnPrem AD on AWS Cloud.

Here’s the workflow:

  • AWS Organisations are created for multiple AWS accounts, for example, Prod/UAT/DR through a master account.
  • The Active Directory exists OnPrem and already holds a large set of corporate users. We are assuming two AD groups here: an Admins group which requires Administrator privileges and a ReadOnly group which requires only read-only privileges.
  • Create a ReadOnly Domain Controller (RODC) of the OnPrem Active Directory on AWS.
  • Create an AD Connector in the Master account using AWS Directory Service which connects to RODC on AWS but it also requires connectivity to the OnPrem AD since the Domain resolves to the primary DNS IP.
  • Configure SSO using AD Connector directory which fetches the AD Users/Groups from RODC. Assign the users/groups to the respective AWS Organisation and grant the required permissions to the users.
  • SSO creates permission set in the master account and respective IAM roles with given privileges will be created in the target organization console.

Creating Read-Only Domain Controller of the OnPrem Active Directory on AWS

Get the following values of the existing Active Directory:

  • DNS Server IP
  • Directory Domain name
  • Domain Admin Credentials i.e. Username/Password

Launch a Windows server, i.e. Microsoft Windows Server 2019 Base, on AWS. Log in to the server once it's available. Go to Server Manager and add the Active Directory Domain Services (AD DS) role via Add Roles and Features.

Now go to the Network sharing.

Ethernet→ Properties→ IPv4→ Update DNS Server IP → Provide DNS IP of the OnPrem AD.

Go to Server Manager → Workgroup → Under the “Computer Name” tab → Click on Change.

Provide the AD Domain Name. Input the AD user credentials.

Now, to set up the RODC, go to Server Manager → you will see an option to "promote this server to a domain controller" in the top right corner. Change the current user to an AD domain admin user.

Select RODC and provide a DSRM (Directory Services Restore Mode) password on the next screen.

Click Next, leaving the default settings unchanged. Review the settings on the last screen.

Click Next and Install. At this point, the RODC is configured on the AWS server. Now you can log in to the RODC server by using Remote Desktop Protocol (RDP) connection through any one of the AD users.

Creating AD Connector in the Master account

Create an AD Connector through AWS Directory Service in the Master account where AWS Organizations are created.

Select the Directory size on the next screen.

Select VPC and subnets on the next page. Ensure these subnets are configured properly to have connectivity to the RODC DNS IP.

Provide the AD details such as DNS IP of the RODC (private IP of the RODC Server), AD Domain Name and any Service Account Credentials on the next page.

Wait till the directory is available.

Configuring AWS Single Sign ON for the AD Connector

Configure SSO for the AD Connector in the same region as of AD Connector. Switch to AWS SSO Console.

Click on “Manage your directory”. Select Microsoft AD and select the AD connector which we have created in the previous step.

Select the account for which you want to give access to the AD users.

Click Assign users and select the Groups/Users to whom you want to give access to the selected account.

Create a new permission set. For admins group, we have created permission set with AdministratorAccess and For ReadOnly Group, we have created a permission set with ViewOnlyAccess. We can also create a Custom permission set according to the requirement. Select the Administrator access for the Admins Groups.

Similarly, give ViewOnlyAccess to the ReadOnly Group in AD.

On the SSO Dashboard, note down the User Portal URL which is used for log in to the console.

Hit the URL in the browser. The URL will redirect you to provide the AD Credentials:

Once you login, it gives the list of accounts for which the logged in user has access. The below screenshot shows the logged in user is User2. User2 is a member of Read-Only group so it has ViewOnlyAccess to the assigned account.

Hit Management Console to log into the AWS console of the selected account.

And that’s all. Hope you found this article useful.

from Powerupcloud Tech Blog – Medium

Automate Blue Green Deployment on Kubernetes in a single Pipeline-Part X


Blue-green deployment is a technique that reduces downtime and risk by running two identical production environments called Blue and Green. At any time, only one of the environments is live, with the live environment serving all production traffic. In this article, we are covering how we can achieve blue-green deployment in an automated way on Kubernetes Clusters running Dev and Prod environments respectively. The clusters are provisioned using AWS EKS. Refer to our previous blog for the steps to setup EKS Cluster.


  • Kubernetes Clusters (i.e. Dev and Prod) running on AWS EKS
  • Cluster Version: 1.11
  • Docker Registry: AWS ECR
  • Application Language: Java
  • CI/CD Tool: Jenkins


  • Create an AWS ECR Repo for the Application. For example, java-app-ecr.
  • Provision an EC2 Server with Jenkins Installed on it.
  • Ensure yq and curl are installed on the server.
  • Install Docker and kubectl on the server.
  • Setup Apache Maven in Jenkins.
  • Jenkins → Global Tool Configuration → Add Maven Installation. The name is hardcoded in Jenkinsfile.

  • Kube config files for both the clusters i.e. Dev and Prod are kept inside .kube directory in Jenkins Home i.e. /var/lib/jenkins/.kube.
  • Execute the below command to get the Kube config file. Copy the contents of the ~/.kube/config and paste in a new file in Jenkins Home i.e. /var/lib/jenkins/.kube/dev-config. Repeat this step for both the clusters. For Prod config, the config file is available at /var/lib/jenkins/.kube/prod-config.

aws eks update-kubeconfig --name <CLUSTER_NAME> --region us-east-1


Create a Pipeline Job with the Jenkinsfile provided in our Github Repo. The Jenkins server has SSH access to the Github repo, so we have provided the SSH URL for the repo.

The pipeline takes user inputs for the following parameters:

  • GIT_BRANCH: Git Branch to use for the application source code.
  • ACCOUNT: AWS Account Number.
  • PROD_BLUE_SERVICE: If we already have a blue environment, specify the live blue service name in Prod cluster. Otherwise, leave blank.
  • ECR_REPO_NAME: Name of the existing AWS ECR Repo name where the built docker images will be pushed.

Once the above parameters are provided as user inputs, it will trigger the following jobs in a pipeline manner:

Clone: Clones the source code from Git Repo.

Build: Builds a packaged file using MVN commands.

Image: Prepares a docker image out of Dockerfile provided in Git repo and pushes the image to AWS ECR.

Deploy on Dev: The built image is deployed on the Dev K8s cluster using kubectl. It’s an in-place deployment where the existing deployment is updated with the docker image.

  • The yaml files for deployment and service are available in the repo. Once cloned to the Jenkins workspace, the variables in the yaml files are replaced with the actual values.
  • “kubectl apply” command is used to create the k8s resources i.e. deployment and service.
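The manifest-templating step above can be sketched roughly as follows; the {{IMAGE}} placeholder, file names, and account number are hypothetical stand-ins, since the real manifests live in the Github repo:

```shell
# Render a templated deployment manifest, then (normally) apply it with kubectl.
cat > deployment.yaml <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: java-app
spec:
  template:
    spec:
      containers:
      - name: java-app
        image: {{IMAGE}}
EOF

# Substitute the freshly built ECR image tag into the manifest.
ECR_IMAGE="123456789012.dkr.ecr.us-east-1.amazonaws.com/java-app-ecr:42"
sed "s|{{IMAGE}}|${ECR_IMAGE}|g" deployment.yaml > deployment-rendered.yaml

# kubectl --kubeconfig /var/lib/jenkins/.kube/dev-config apply -f deployment-rendered.yaml
grep "image:" deployment-rendered.yaml
```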

Prod: This step needs a manual intervention for proceeding to Prod environment. Two user inputs are required here:

  • DEPLOY_TO_PROD: Tick mark to deploy the built docker image to Prod Cluster.
  • PROD_BULE_DEPLOYMENT: Tick mark if it’s a fresh deployment on the prod cluster.

Deploy to Prod: If selected to proceed to prod, this step deploys the image on Prod cluster using “kubectl apply” command. It creates a green deployment and a temporary green LoadBalancer Service.

Validate: This step can contain multiple selenium test cases to validate application functionality. In our case, we have a sample java application for which we have provided a curl command on a specific path to test the application.
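A minimal version of such a validation step might look like the following. It is a hedged sketch: the GREEN_LB_DNS variable, the path being probed, and the retry budget are assumptions, not values from our pipeline.

```shell
# Hedged sketch of the Validate stage. GREEN_LB_DNS, the probed path,
# and the retry count are hypothetical.
smoke_test() {
  local url="$1" retries="$2" i code
  for i in $(seq 1 "$retries"); do
    # Fetch only the HTTP status code; treat curl failures as status 000
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url" 2>/dev/null || echo 000)
    if [ "$code" = "200" ]; then
      echo "validation passed on attempt $i"
      return 0
    fi
    sleep 5
  done
  echo "validation failed after $retries attempts"
  return 1
}

# In the pipeline this would gate the cutover, e.g.:
# smoke_test "http://${GREEN_LB_DNS}/" 10 || exit 1
```

Failing the stage here leaves the live blue service untouched, which is the main safety property of the blue/green flow.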

Patch Live Service and Delete Blue: Once validation succeeds, this step patches the existing live blue service using the “kubectl patch” command so that the live service points to the latest (green) deployment, and then deletes the blue deployment as well as the temporary green service.
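The cutover can be sketched as below. The service and deployment names and the "version" selector label are assumptions about how the manifests are labeled; check the yaml files in the repo for the actual labels.

```shell
# Hedged sketch of the blue/green cutover. The service/deployment names and
# the "app"/"version" selector labels are hypothetical.
PATCH='{"spec":{"selector":{"app":"sample-app","version":"green"}}}'
echo "$PATCH"

# In the pipeline these run against the Prod kubeconfig:
# kubectl patch svc "$PROD_BLUE_SERVICE" -p "$PATCH"   # point live service at green pods
# kubectl delete deployment sample-app-blue            # retire the old blue deployment
# kubectl delete svc sample-app-green-temp             # drop the temporary green LB
```

Because only the service selector changes, the cutover is near-instant and does not recreate the LoadBalancer, so the live endpoint's DNS name stays the same.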


The application will load on hitting the LoadBalancer endpoint in the browser. Execute “kubectl get svc” to get the LoadBalancer endpoint. Basic Authentication is enabled on the frontend with the default credentials admin/password.

Kubernetes manifests and other scripts are available in our Github Repo.

Hope you found it useful.


from Powerupcloud Tech Blog – Medium

Connectivity between Azure Virtual WAN and Fortinet Appliance


Contributors: Karthik T, Principal Cloud Architect at Powerupcloud Technologies.

“Networking is the cornerstone of communication and Infrastructure”

Azure VWAN

Microsoft Azure Virtual WAN enables simplified connectivity to Azure cloud workloads and routes traffic across the Azure backbone network and beyond. Azure provides 54+ regions and multiple points of presence across the globe. Azure regions serve as hubs that you can choose to connect your branches to. After the branches are connected, use the Azure cloud services through hub-to-hub connectivity. You can simplify connectivity by applying multiple Azure services, including hub peering with Azure VNets. Hubs serve as traffic gateways for the branches.

Fortinet with Azure VWAN

Connecting Fortinet Firewalls to a Microsoft Azure Virtual WAN hub can be done automatically. The automatic configuration provides a robust and redundant connection by introducing two active-active IPSec IKEv2 VPN tunnels with the respective BGP setup and fully automated Azure Virtual WAN site creation on Microsoft Azure. The finished deployment allows full connectivity between branch-office sites and resources in Azure Virtual Networks via the Azure VPN Hub.

VWAN Offerings:

Microsoft Azure Virtual WAN offers the following advantages:

Integrated connectivity solutions in hub and spoke

Automated setup and configuration

Intuitive troubleshooting

Organizations can use Azure Virtual WAN to connect branch offices around the globe. An Azure Virtual WAN consists of multiple virtual hubs, and an organization can create virtual hubs in different Azure regions.

For on-premises devices to connect into Azure, a controller is required. The controller consumes Azure APIs to establish site-to-site connectivity between the Azure Virtual WAN and a hub.

Microsoft Azure Virtual WAN includes the following components and resources:

WAN: Represents the entire network in Microsoft Azure. It contains links to all Hubs that you would like to have within this WAN. WANs are isolated from each other: they cannot share a common hub, and hubs in different WANs cannot be connected to each other.

Site: Represents your on-premises VPN device and its settings. A Site can connect to multiple hubs.

Hub: Represents the core of your network in a specific region. The Hub contains various service endpoints to enable connectivity and other solutions to your on-premises network. Site-to-site connections are established between Sites and a Hub’s VPN endpoint.

Hub virtual network connection: Connects the Azure Virtual WAN Hub seamlessly to your virtual network. Currently, connectivity is available only to virtual networks within the same Virtual Hub region.

Branch: The branches are the on-premises Fortinet appliances, which exist in customer office locations. The connection originates from behind these branches and terminates into Azure.

Prerequisites and requirements

The following prerequisites are required to configure Azure and Fortinet so that branch sites can connect to Azure hubs.

  1. An Azure subscription white-listed for Virtual WAN.
  2. An on-premises appliance, such as a Fortinet appliance, to establish an IPsec connection to Azure resources.
  3. Internet links with public IP addresses. A single Internet link is enough to establish connectivity into Azure, but you need two IPsec tunnels over the same WAN link.
  4. An SD-WAN controller: the interface responsible for configuring the appliances that connect into Azure.
  5. A VNet in Azure that hosts at least one workload, for instance a VM hosting a service. Consider the following points:
     • The virtual network should not have an Azure VPN or ExpressRoute gateway, or a network virtual appliance.
     • The virtual network should not have a user-defined route that sends traffic to a non-Virtual WAN virtual network for the workload accessed from the on-premises branch.
     • Appropriate permissions to access the workload must be configured, for example port 22 SSH access for an Ubuntu VM.

Step 1. Configure Microsoft Azure Virtual WAN Service

Fig 1.1 Virtual Network Configuration

Fig 1.2 Virtual WAN Creation

Fig 1.3 Virtual WAN

Fig 1.4 Virtual Hub

Fig 1.5 Hub status with no sites configured
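For readers who prefer the CLI over the portal screenshots above, Step 1 can be approximated with the Azure CLI (it requires the virtual-wan extension). This is only a sketch: the resource group, resource names, region, and address prefix below are hypothetical examples, not values from our deployment.

```shell
# Approximate CLI equivalent of Step 1. All names, the region, and the
# hub address prefix are hypothetical -- substitute your own values.
az extension add --name virtual-wan

az group create --name vwan-rg --location southeastasia

# Create the Virtual WAN
az network vwan create \
  --resource-group vwan-rg \
  --name corp-vwan \
  --location southeastasia

# Create a virtual hub inside the WAN
az network vhub create \
  --resource-group vwan-rg \
  --vwan corp-vwan \
  --name corp-hub \
  --address-prefix 10.100.0.0/24 \
  --location southeastasia
```

At this point the hub shows no VPN sites, matching Fig 1.5; the Fortinet side is configured next.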

Step 2. Configure and Connect the Fortinet Firewall

Fig 1.6 Fortinet Firewall Configuration

Fig 1.7 Fortinet Phase 1 & Phase 2 Proposal

Fig 1.8 Azure to Fortinet Rule

Fig 1.9 Fortinet to Azure Rule

Step 3. Associate Sites to the Hub

Fig 1.10 Add a connection between hub and site

Fig 1.11 Associate site with one or more hubs

Step 4. Verify Connectivity and Routing

Fig 1.12 Hub status with VPN site

Fig 1.13 VWAN Heath and Gateway status

Fig 1.14 Fortinet Gateway status

There you go: the connection is established and traffic flows. :)

Virtual WAN enables centralized, simple, and fast connectivity between branches, and between those branches and Microsoft Azure.

If you need any help with Virtual WAN implementation, please do reach out to us.
