Category: Architecture

How to Design Your Serverless Apps for Massive Scale

Serverless is one of the hottest design patterns in the cloud today, allowing you to focus on building and innovating, rather than worrying about the heavy lifting of server and OS operations. In this series of posts, we’ll discuss topics that you should consider when designing your serverless architectures. First, we’ll look at architectural patterns designed to achieve massive scale with serverless.

Scaling Considerations

In general, developers in a “serverful” world need to worry about how many total requests can be served throughout the day, week, or month, and how quickly their system can scale. As you move into the serverless world, the most important question becomes: “What concurrency is your system designed to handle?”

The AWS Serverless platform allows you to scale very quickly in response to demand. Below is an example of a serverless design that is fully synchronous throughout the application. During periods of extremely high demand, Amazon API Gateway and AWS Lambda will scale in response to your incoming load. This design places extremely high load on your backend relational database because Lambda can easily scale from thousands to tens of thousands of concurrent requests. In most cases, your relational databases are not designed to accept the same number of concurrent connections.

[Diagram: Serverless at scale - 1 (fully synchronous design)]

This design risks bottlenecking at your relational database, which can cause service outages, and it also risks data loss due to throttling or database connection exhaustion.
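
Lambda itself gives you one immediate guardrail while you re-architect: a per-function concurrency limit that caps how many simultaneous executions can reach the database. Below is a minimal, hypothetical sketch using boto3; the function name and the limit of 100 are placeholders you would size against your database's connection budget.

import boto3

lambda_client = boto3.client("lambda")

# Reserve (and thereby cap) concurrency for the function that talks to the database,
# so a traffic spike cannot open more connections than the database can handle.
lambda_client.put_function_concurrency(
    FunctionName="orders-api-handler",     # placeholder function name
    ReservedConcurrentExecutions=100,      # roughly match the database's connection budget
)

# For context, inspect the account-wide concurrent execution limit.
print(lambda_client.get_account_settings()["AccountLimit"]["ConcurrentExecutions"])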

Cloud Native Design

Instead, consider decoupling your architecture and moving to an asynchronous model. In this architecture, you use an intermediary service, such as Amazon Kinesis or Amazon Simple Queue Service (SQS), to buffer incoming requests. You can configure Kinesis or SQS as out-of-the-box event sources for Lambda. In the design below, AWS automatically polls your Kinesis stream or SQS queue for new records and delivers them to your Lambda functions. You can control the batch size per delivery and place throttles on a per-function basis.

[Diagram: Serverless at scale - 2 (asynchronous, decoupled design)]

This design allows you to accept an extremely high volume of requests, store them in a durable datastore, and process them at the speed your system can handle.
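
As a rough illustration of the buffering pattern described above, the sketch below uses boto3 to connect an SQS queue to a Lambda function as an event source and to control the batch size per delivery. The queue ARN and function names are placeholders, and the handler is only a skeleton of what real processing would look like.

import boto3

lambda_client = boto3.client("lambda")

# Poll the queue on Lambda's behalf and deliver records in batches of 10.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:ingest-queue",  # placeholder queue ARN
    FunctionName="ingest-processor",                                   # placeholder function name
    BatchSize=10,
)

# A skeletal handler for the same pattern: SQS delivers a batch of records per invocation.
def handler(event, context):
    for record in event["Records"]:
        print("processing", record["body"])  # replace with real processing logic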

Conclusion

Serverless computing allows you to scale much more quickly than server-based applications, which means application architects should always consider the effects of that scale on downstream services. Always keep cost, speed, and reliability in mind when you’re building your serverless applications.

Our next post in this series will discuss the different ways to invoke your Lambda functions and how to design your applications appropriately.

About the Author

George Mao is a Specialist Solutions Architect at Amazon Web Services, focused on the Serverless platform. George is responsible for helping customers design and operate Serverless applications using services like Lambda, API Gateway, Cognito, and DynamoDB. He is a regular speaker at AWS Summits, re:Invent, and various tech events. George is a software engineer and enjoys contributing to open source projects, delivering technical presentations at technology events, and working with customers to design their applications in the Cloud. George holds a Bachelor of Computer Science and a Master of IT from Virginia Tech.

from AWS Architecture Blog

FICO: Fraud Detection and Anti-Money Laundering with AWS Lambda and AWS Step Functions

In this episode of This is My Architecture, filmed in 2018 on the last day of re:Invent (a learning conference hosted by Amazon Web Services for the global cloud computing community), FICO lead Software Engineer Sven Ahlfeld talks to AWS Solutions Architect Tom Jones about how the company uses a combination of AWS Lambda and AWS Step Functions to architect an on-demand solution for fraud detection and anti-money laundering.

When you think of FICO, you probably think credit score. And that’s true: founded in 1956, FICO introduced analytic solutions, such as credit scoring, that have made credit more widely available in the US and around the world. The FICO score is also the standard measure of consumer risk in the US.

In the video, Sven explains that FICO builds software to meet regulatory compliance goals and requirements, in this case to tackle money laundering. FICO ingests a massive amount of customer data in the form of financial documents into S3, and then uses S3 events to trigger analysis of each document for a number of different fraud and money laundering characteristics.
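
The episode doesn't show FICO's code, but the trigger pattern it describes can be sketched as an S3-invoked Lambda function that kicks off a Step Functions execution for each uploaded document. The state machine ARN below is a placeholder, not FICO's actual workflow.

import json
import boto3

sfn = boto3.client("stepfunctions")

def handler(event, context):
    # S3 invokes this function once per object-created event.
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        # Start the document-analysis workflow for this object.
        sfn.start_execution(
            stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:document-analysis",
            input=json.dumps({"bucket": bucket, "key": key}),
        )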

Key architecture components are designed to be immutable, assuring that the EC2 instances doing the analysis work can’t be compromised or tampered with. An immutable instance can also scale out very quickly to ingest a large volume of documents, and scale back in when there is less demand. The immutable images also help meet the varying regulatory requirements of localities around the world.

 

*Check out more This Is My Architecture videos on YouTube.

About the author

Annik Stahl is a Senior Program Manager in AWS, specializing in blog and magazine content as well as customer ratings and satisfaction. Having been the face of Microsoft Office for 10 years as the Crabby Office Lady columnist, she loves getting to know her customers and wants to hear from you.

from AWS Architecture Blog

Backup and Restore in Same SQL Server RDS

Written by SelvaKumar K, Sr. Database Administrator at Powerupcloud Technologies.

Problem Scenario :

One of our customers reported that a production database had been corrupted and needed to be backed up and restored under a different name on the same RDS instance. This isn’t possible in AWS RDS; if we try to restore, we get the error below.

Limitations :

Database <database_name> cannot be restored because there is already an existing database with the same family_guid on the instance

You can’t restore a backup file to the same DB instance that was used to create the backup file. Instead, restore the backup file to a new DB instance.

Approaches to Backup and Restore :

Option 1:

1. Import and export into the same RDS instance

The database is corrupted, so we can’t proceed with this option.

Option 2:

2. Back up and restore into a different RDS instance using S3

2.1. Back up from the production RDS instance

exec msdb.dbo.rds_backup_database
    @source_db_name='selva',
    @s3_arn_to_backup_to='arn:aws:s3:::mmano/selva.bak',
    @overwrite_S3_backup_file=1,
    @type='FULL';

Check the status with the command below:

exec msdb.dbo.rds_task_status @db_name='selva_selva';
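
If you prefer to watch the task from a script rather than re-running the procedure by hand, here is a hypothetical sketch that polls rds_task_status with pyodbc; the endpoint and credentials are placeholders, and the lifecycle column reports values such as IN_PROGRESS, SUCCESS, or ERROR.

import time
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=myinstance.xxxxxxxx.us-east-1.rds.amazonaws.com,1433;"  # placeholder RDS endpoint
    "DATABASE=msdb;UID=admin;PWD=secret",                           # placeholder credentials
    autocommit=True,
)
cursor = conn.cursor()

while True:
    # Poll the backup task for the source database used above.
    cursor.execute("exec msdb.dbo.rds_task_status @db_name = ?", "selva")
    row = cursor.fetchone()
    status = row.lifecycle if row else None
    print(status)
    if status in ("SUCCESS", "ERROR"):
        break
    time.sleep(30)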

2.2. Restore into a different RDS instance, or download the backup from S3 and restore into a local SQL Server instance

exec msdb.dbo.rds_restore_database
    @restore_db_name='selva',
    @s3_arn_to_restore_from='arn:aws:s3:::mmano/selva.bak';

2.3. In another RDS instance or a local instance

Restore the database into local dev or staging instance

a. Create a new database named selva_selva

b. Using the Generate Scripts wizard, generate the scripts and execute them in the newly created database

Click Database → Tasks → Generate Scripts

Click Next → Select Specific Database Objects → Select the required objects

Click Next → Save to a new query window

Click Advanced → the only change required is to set Script Indexes to True

Click Next → Next → Once the script is generated, close this window

The scripts are generated in a query window; select the required database and execute the scripts

A direct export and import will not work because of the foreign key relationships, so we need to run the scripts below and save the output in Notepad.

2.4. Prepare the create and drop scripts for the foreign key constraints using the scripts below

-- SCRIPT TO GENERATE THE CREATION SCRIPT OF ALL FOREIGN KEY CONSTRAINTS
declare @ForeignKeyID int
declare @ForeignKeyName varchar(4000)
declare @ParentTableName varchar(4000)
declare @ParentColumn varchar(4000)
declare @ReferencedTable varchar(4000)
declare @ReferencedColumn varchar(4000)
declare @StrParentColumn varchar(max)
declare @StrReferencedColumn varchar(max)
declare @ParentTableSchema varchar(4000)
declare @ReferencedTableSchema varchar(4000)
declare @TSQLCreationFK varchar(max)
-- Written by Percy Reyes (www.percyreyes.com)
declare CursorFK cursor for select object_id -- , name, object_name(parent_object_id)
from sys.foreign_keys
open CursorFK
fetch next from CursorFK into @ForeignKeyID
while (@@FETCH_STATUS=0)
begin
    set @StrParentColumn=''
    set @StrReferencedColumn=''
    declare CursorFKDetails cursor for
        select fk.name ForeignKeyName, schema_name(t1.schema_id) ParentTableSchema,
               object_name(fkc.parent_object_id) ParentTable, c1.name ParentColumn,
               schema_name(t2.schema_id) ReferencedTableSchema,
               object_name(fkc.referenced_object_id) ReferencedTable, c2.name ReferencedColumn
        from -- sys.tables t inner join
             sys.foreign_keys fk
             inner join sys.foreign_key_columns fkc on fk.object_id=fkc.constraint_object_id
             inner join sys.columns c1 on c1.object_id=fkc.parent_object_id and c1.column_id=fkc.parent_column_id
             inner join sys.columns c2 on c2.object_id=fkc.referenced_object_id and c2.column_id=fkc.referenced_column_id
             inner join sys.tables t1 on t1.object_id=fkc.parent_object_id
             inner join sys.tables t2 on t2.object_id=fkc.referenced_object_id
        where fk.object_id=@ForeignKeyID
    open CursorFKDetails
    fetch next from CursorFKDetails into @ForeignKeyName, @ParentTableSchema, @ParentTableName, @ParentColumn, @ReferencedTableSchema, @ReferencedTable, @ReferencedColumn
    while (@@FETCH_STATUS=0)
    begin
        set @StrParentColumn=@StrParentColumn + ', ' + quotename(@ParentColumn)
        set @StrReferencedColumn=@StrReferencedColumn + ', ' + quotename(@ReferencedColumn)
        fetch next from CursorFKDetails into @ForeignKeyName, @ParentTableSchema, @ParentTableName, @ParentColumn, @ReferencedTableSchema, @ReferencedTable, @ReferencedColumn
    end
    close CursorFKDetails
    deallocate CursorFKDetails
    set @StrParentColumn=substring(@StrParentColumn,2,len(@StrParentColumn)-1)
    set @StrReferencedColumn=substring(@StrReferencedColumn,2,len(@StrReferencedColumn)-1)
    set @TSQLCreationFK='ALTER TABLE '+quotename(@ParentTableSchema)+'.'+quotename(@ParentTableName)+' WITH CHECK ADD CONSTRAINT '+quotename(@ForeignKeyName)
        + ' FOREIGN KEY('+ltrim(@StrParentColumn)+') '+ char(13) +'REFERENCES '+quotename(@ReferencedTableSchema)+'.'+quotename(@ReferencedTable)+' ('+ltrim(@StrReferencedColumn)+')' +';'
    print @TSQLCreationFK
    fetch next from CursorFK into @ForeignKeyID
end
close CursorFK
deallocate CursorFK

-- SCRIPT TO GENERATE THE DROP SCRIPT OF ALL FOREIGN KEY CONSTRAINTS
declare @ForeignKeyName varchar(4000)
declare @ParentTableName varchar(4000)
declare @ParentTableSchema varchar(4000)
declare @TSQLDropFK varchar(max)
declare CursorFK cursor for select fk.name ForeignKeyName, schema_name(t.schema_id) ParentTableSchema, t.name ParentTableName
from sys.foreign_keys fk inner join sys.tables t on fk.parent_object_id=t.object_id
open CursorFK
fetch next from CursorFK into @ForeignKeyName, @ParentTableSchema, @ParentTableName
while (@@FETCH_STATUS=0)
begin
    set @TSQLDropFK ='ALTER TABLE '+quotename(@ParentTableSchema)+'.'+quotename(@ParentTableName)+' DROP CONSTRAINT '+quotename(@ForeignKeyName) + ';'
    print @TSQLDropFK
    fetch next from CursorFK into @ForeignKeyName, @ParentTableSchema, @ParentTableName
end
close CursorFK
deallocate CursorFK

Save the output of the scripts above, then proceed with the steps below.

2.5. Execute the drop foreign key scripts in the newly created database

2.6. Using the Import and Export Wizard, transfer the data from the old database to the new database

Select Data Source for Data Pull

Select the destination server for the data push

Click Next → Copy data from one or more tables or views

Click Next → Select the required tables to copy the data

Click Next and Verify the Source and Destination

2.7. Once the data load is complete, execute the create foreign key constraint scripts

Final Step:

3. Back up the database and restore it into the production RDS instance with a different name

from Powerupcloud Tech Blog – Medium

WSFC and AlwaysOn Availability Groups on AWS Cloud

Written by SelvaKumar K, Sr. Database Administrator at Powerupcloud Technologies.

What is Failover Clustering?

A failover cluster is a group of independent computers that work together to increase the availability and scalability of clustered roles (formerly called clustered applications and services). The clustered servers (called nodes) are connected by physical cables and by software. If one or more of the cluster nodes fail, other nodes begin to provide service (a process known as failover). In addition, the clustered roles are proactively monitored to verify that they are working properly. If they are not working, they are restarted or moved to another node

What is AlwaysOn Availability Group?

An availability group supports a replicated environment for a discrete set of user databases, known as availability databases. You can create an availability group for high availability (HA) or for read-scale. An HA availability group is a group of databases that fail over together. A read-scale availability group is a group of databases that are copied to other instances of SQL Server for read-only workload

What we cover in this post:

  1. Implementing a Windows Server Failover Cluster (WSFC) in the AWS Cloud and configuring an AlwaysOn Availability Group between two Windows servers

2. Just as with an on-prem server, we can install and configure the WSFC cluster and SQL Server 2017 “AlwaysOn Availability Group” in the AWS Cloud, exposing the SQL database server to the outside world through an AG listener with 99.99% uptime.

3. We implemented SQL Server AlwaysOn with minimal-configuration instances and SQL Server 2017 Developer Edition. We configured AlwaysOn without shared storage; if you want shared storage, use the AWS Storage Gateway service.

Architecture:

Prerequisites from AWS:

  1. AWS VPC ( ag-sql-vpc )

2. AWS Subnets ( two private and two public subnets )

Launch and Configure the server Infrastructure :

The AlwaysOn setup requires three EC2 instances, placed in different Availability Zones. The minimum instance size for the SQL Server instances is t2.small.

Our setup is configured without shared storage, so add an additional 50 GB disk on each EC2 instance. In addition, secondary IPs are needed for the Windows cluster resource and the AG listener.

Disk and Secondary IP for the EC2 Instances :

Security Groups :

Each EC2 instance’s security group allows all required ports between the Active Directory and SQL Server instances.

Implement and configure Active Directory Domain Service :

The Active Directory domain (agsql.com) is configured on the ag-sql-AD server; add the SQL Server instances (ag-sql-node1 and ag-sql-node2) to the agsql.com domain.

Implement and Configure WSFC:

Multiple reboots are needed once the SQL Server instances are configured with the agsql.com Active Directory domain account. Let’s start configuring the failover clustering role on each server.

The Failover Clustering role needs to be added on both servers; then start creating the cluster.

Add the SQL Server nodes in Create Cluster and perform all the necessary tests for Windows cluster creation.

Assign the secondary IPs to the Windows cluster and bring the cluster resources online. Once the cluster resource is ready, start installing SQL Server 2017 Developer Edition on the SQL Server instances in parallel.

Once the SQL Server installation is complete, enable AlwaysOn Availability Groups in the SQL Server service and restart the SQL service on both SQL Server instances.

So, we are ready with Windows failover clustering and the SQL Server setup on both instances. Start creating the AlwaysOn Availability Group and configure the AG listener.

Step 1: Specify Name for the Always on Group

Step 2: Connect the replica Node for AlwaysOn Group

Step 3: Specify the secondary IP addresses for AG Listener

The aglistener entry is added to the Active Directory DNS, and it can be reached from outside to access the SQL Servers through its IP addresses. We will be able to ping or telnet aglistener from any machine joined to the agsql.com domain.
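
For example, an application can connect through the listener name instead of an individual node, so a failover is transparent to it. The sketch below is a hypothetical Python connection using pyodbc; the database name and credentials are placeholders, and MultiSubnetFailover tells the driver to try all of the listener's IP addresses across subnets.

import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=aglistener.agsql.com,1433;"       # the AG listener name from this setup
    "DATABASE=mydb;UID=sqluser;PWD=secret;"   # placeholder database and credentials
    "MultiSubnetFailover=Yes",
)
# Shows which replica actually served the connection.
print(conn.execute("select @@SERVERNAME").fetchone()[0])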

Step 4: AlwaysOn Dashboard to check Database Sync Status

DNS and Active Directory Computers configuration isn’t covered in this setup; those objects are created automatically on the Active Directory server.

Finally, AlwaysOn Availability Group Ready in AWS Cloud !!!

from Powerupcloud Tech Blog – Medium

Updates to Serverless Architectural Patterns and Best Practices

As we sail past the halfway point between re:Invent 2018 and re:Invent 2019, I’d like to revisit some of the recent serverless announcements we’ve made. These are all complementary to the patterns discussed in the re:Invent architecture track’s Serverless Architectural Patterns and Best Practices session.

AWS Event Fork Pipelines

AWS Event Fork Pipelines was announced in March 2019. Many customers use asynchronous event-driven processing in their serverless applications to decouple application components and address high concurrency needs. And in doing so, they often find themselves needing to backup, search, analyze, or replay these asynchronous events. That is exactly what AWS Event Fork Pipelines aims to achieve. You can plug them into a new or existing SNS topic used by your application and immediately address retention and compliance needs, gain new business insights, or even improve your application’s disaster recovery abilities.

AWS Event Fork Pipelines is a suite of three applications. The first application addresses event storage and backup needs by writing all events to an S3 bucket where they can be queried with services like Amazon Athena. The second is a search and analytics pipeline that delivers events to a new or existing Amazon ES domain, enabling search and analysis of your events. Finally, the third application is an event replay pipeline that can be used to reprocess messages should a downstream failure occur in your application. AWS Event Fork Pipelines is packaged as AWS Serverless Application Model (SAM) templates and is available in the AWS Serverless Application Repository (SAR). Check out our example e-commerce application on GitHub.
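
Because the pipelines simply subscribe to your application's SNS topic, nothing changes on the publishing side. As a hypothetical sketch, every event published like the one below fans out to the backup, search/analytics, and replay pipelines as well as your normal subscribers; the topic ARN and payload are placeholders.

import json
import boto3

sns = boto3.client("sns")

sns.publish(
    TopicArn="arn:aws:sns:us-east-1:123456789012:checkout-events",  # placeholder topic ARN
    Message=json.dumps({"orderId": "1234", "status": "CREATED"}),
    MessageAttributes={
        "eventType": {"DataType": "String", "StringValue": "ORDER_CREATED"},
    },
)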

Amazon API Gateway Serverless Developer Portal

If you publish APIs for developers allowing them to build new applications and capabilities with your data, you understand the need for a developer portal. Also, in March 2019, we announced some significant upgrades to the API Gateway Serverless Developer Portal. The portal’s front end is written in React and is designed to be fully customizable.

The API Gateway Serverless Developer Portal is also available in GitHub and the AWS SAR. As you can see from the architecture diagram below, it is integrated with Amazon Cognito User Pools to allow developers to sign-up, receive an API Key, and register for one or more of your APIs. You can now also enable administrative scenarios from your developer portal by logging in as users belonging to the portal’s Admin group which is created when the portal is initially deployed to your account. For example, you can control which APIs appear in a customer’s developer portal, enable SDK downloads, solicit developer feedback, and even publish updates for APIs that have been recently revised.

AWS Lambda with Amazon Application Load Balancer (ALB)

Serverless microservices have been built by our customers for quite a while with AWS Lambda and Amazon API Gateway. At re:Invent 2018, during Dr. Werner Vogels’ keynote, a new approach to serverless microservices was announced: Lambda functions as ALB targets.

ALB’s support for Lambda targets gives customers the ability to deploy serverless code behind an ALB, alongside servers, containers, and IP addresses. With this feature, ALB path and host-based routing can be used to direct incoming requests to Lambda functions. Also, ALB can now provide an entry point for legacy applications to take on new serverless functionality, and enable migration scenarios from monolithic legacy server or container-based applications.

Use cases for Lambda targets for ALB include adding new functionality to an existing application that already sits behind an ALB. This could be request monitoring by sending HTTP headers to Elasticsearch clusters or implementing controls that manage cookies. Check out our demo of this new feature. For additional details, take a look at the feature’s documentation.
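
A Lambda function behind an ALB receives the HTTP request as its event and must return a response in the shape shown below. This hypothetical sketch echoes a couple of request fields, standing in for the header-monitoring and cookie-control use cases mentioned above.

import json

def handler(event, context):
    # ALB passes the request path and headers in the event.
    headers = event.get("headers", {})
    path = event.get("path", "/")

    # ALB expects statusCode, statusDescription, isBase64Encoded, headers, and body.
    return {
        "statusCode": 200,
        "statusDescription": "200 OK",
        "isBase64Encoded": False,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"path": path, "userAgent": headers.get("user-agent", "")}),
    }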

Security Overview of AWS Lambda Whitepaper

Finally, I’d be remiss if I didn’t point out the great work many of my colleagues have done in releasing the Security Overview of AWS Lambda Whitepaper. It is a succinct and enlightening read for anyone wishing to better understand the Lambda runtime environment, function isolation, or data paths taken for payloads sent to the Lambda service during synchronous and asynchronous invocations. It also has some great insight into compliance, auditing, monitoring, and configuration management of your Lambda functions. A must read for anyone wishing to better understand the overall security of AWS serverless applications.

I look forward to seeing everyone at re:Invent 2019 for more exciting serverless announcements!

About the author

Drew Dennis is a Global Solutions Architect with AWS based in Dallas, TX. He enjoys all things Serverless and has delivered the Architecture Track’s Serverless Patterns and Best Practices session at re:Invent the past three years. Today, he helps automotive companies with autonomous driving research on AWS, connected car use cases, and electrification.

from AWS Architecture Blog

Sponsored Post: Etleap, PerfOps, InMemory.Net, Triplebyte, Stream, Scalyr

Who’s Hiring? 

  • Triplebyte lets exceptional software engineers skip screening steps at hundreds of top tech companies like Apple, Dropbox, Mixpanel, and Instacart. Make your job search O(1), not O(n). Apply here.
  • Need excellent people? Advertise your job here! 

Fun and Informative Events

  • Advertise your event here!

Cool Products and Services

  • For heads of IT/Engineering responsible for building an analytics infrastructure, Etleap is an ETL solution for creating perfect data pipelines from day one. Unlike older enterprise solutions, Etleap doesn’t require extensive engineering work to set up, maintain, and scale. It automates most ETL setup and maintenance work, and simplifies the rest into 10-minute tasks that analysts can own. Read stories from customers like Okta and PagerDuty, or try Etleap yourself.
  • PerfOps is a data platform that digests real-time performance data for CDN and DNS providers as measured by real users worldwide. Leverage this data across your monitoring efforts and integrate with PerfOps’ other tools such as Alerts, Health Monitors and FlexBalancer – a smart approach to load balancing. FlexBalancer makes it easy to manage traffic between multiple CDN providers, API’s, Databases or any custom endpoint helping you achieve better performance, ensure the availability of services and reduce vendor costs. Creating an account is Free and provides access to the full PerfOps platform.
  • InMemory.Net provides a Dot Net native in memory database for analysing large amounts of data. It runs natively on .Net, and provides a native .Net, COM & ODBC apis for integration. It also has an easy to use language for importing data, and supports standard SQL for querying data. http://InMemory.Net
  • Build, scale and personalize your news feeds and activity streams with getstream.io. Try the API now in this 5-minute interactive tutorial. Stream is free up to 3 million feed updates so it’s easy to get started. Client libraries are available for Node, Ruby, Python, PHP, Go, Java and .NET. Stream is currently also hiring Devops and Python/Go developers in Amsterdam. More than 400 companies rely on Stream for their production feed infrastructure, including apps with 30 million users. With your help we’d like to add a few zeros to that number. Check out the job opening on AngelList.
  • Scalyr is a lightning-fast log management and operational data platform. It’s a tool (actually, multiple tools) that your entire team will love. Get visibility into your production issues without juggling multiple tabs and different services: all of your logs, server metrics and alerts are in your browser and at your fingertips. Loved and used by teams at Codecademy, ReturnPath, Grab, and InsideSales. Learn more today or see why Scalyr is a great alternative to Splunk.
  • Advertise your product or service here!

If you are interested in a sponsored post for an event, job, or product, please contact us for more information.


Make Your Job Search O(1) — not O(n)

Triplebyte is unique because they’re a team of engineers running their own centralized technical assessment. Companies like Apple, Dropbox, Mixpanel, and Instacart now let Triplebyte-recommended engineers skip their own screening steps.

We found that High Scalability readers are about 80% more likely to be in the top bracket of engineering skill.

Take Triplebyte’s multiple-choice quiz (system design and coding questions) to see if they can help you scale your career faster.


The Solution to Your Operational Diagnostics Woes

Scalyr gives you instant visibility of your production systems, helping you turn chaotic logs and system metrics into actionable data at interactive speeds. Don’t be limited by the slow and narrow capabilities of traditional log monitoring tools. View and analyze all your logs and system metrics from multiple sources in one place. Get enterprise-grade functionality with sane pricing and insane performance. Learn more today


If you are interested in a sponsored post for an event, job, or product, please contact us for more information.

from High Scalability

TFS Integration with Jenkins

Written by AZHAGIRI PANNEERSELVAM, Associate Architect at Powerupcloud Technologies

What is TFS (Team Foundation Server)?

Team Foundation Server is a Microsoft product which provides source code management, reporting, requirements management, project management, automated builds, lab management, testing, and release management capabilities. It covers the entire Application Lifecycle Management. TFS can be used as a back end to numerous integrated development environments but is designed to provide the most benefit by serving as the back end to Microsoft Visual Studio or Eclipse.

What is Jenkins?

Jenkins is an open source automation tool written in Java with plugins built for continuous integration. Jenkins is used to build and test your software projects continuously, making it easier for developers to integrate changes to the project and for users to obtain a fresh build. It also allows you to continuously deliver your software by integrating with a large number of testing and deployment technologies.

UseCase:

Assume that we have a VPN tunnel between AWS and on-premises. The requirement is to download the code from the on-premises TFS server via Jenkins, build the code using MSBuild, and deploy it to AWS EC2 instances via AWS CodeDeploy.

Proposed Design for the Integration

AWS services we have used in our design:

1. EC2 instances ( 2 Windows servers, one with IIS and code deploy agent installed, the other Windows server with Jenkins and Visual studio build tools installed).

2. AWS Code Deploy.

3. Simple Storage Service (S3).

4. IAM (Instance Role to upload the revision and call code deploy).

Note: The MSBuild plugin was not working as expected, so we planned to install the Visual Studio Build Tools on the server and mapped the installation path to make it work.

Other necessary details

1. TFS server Login details (This need to be created in the TFS server to connect the latest code from TFS).

2. Microsoft account Login details to download the visual studio tool for Jenkins

3. TFS Plugin on the Jenkins server

Step — 1 Installing Visual Studio Build tools on the Jenkins server.

Download the Build Tools for Visual Studio 2017 version 15.9 from the official Microsoft website: https://my.visualstudio.com/Downloads?q=visual%20studio%202017&wt.mc_id=o~msft~vscom~older-downloads

Note:- Use the Microsoft Login details to download the exe file.

While installing the Visual Studio tools, make sure you install the following components so that the Jenkins MSBuild plugin works as expected.

· .NET desktop build tools

· Data storage and processing build tools

· .NET Core build tools

Visual Studio Installation location

Base Location: C:\Program Files(x86)\Microsoft Visual Studio\2017\BuildTools

Note:- Make note of the Location. We will be using it in the steps below.

Step — 2 Install necessary plugins on the Jenkins server

Plugin Name

MSBuild Plug-in

Team Foundation Server Plug-in

AWS CodeDeploy Plugin for Jenkins

Mass Passwords Plugin

PostBuildScript Plugin

Use Plugin Manager to install the above Plugins on the Jenkins server.

Step — 3 Plugin Configuration in Jenkins

Once you have installed the plugins from the Jenkins Plugin Manager, we need to configure the necessary ones.

MSBuild Plugin Path configuration

On the Global Tool Configuration page (located under Manage Jenkins), find the MSBuild installation configuration. It looks as shown below. Make a note of the Name; we will use it in an upcoming step.

The explanation of the field below.

Name: Name of MSBuild configuration

Path to MSBuild: Kindly find the MSBuild exe file in the base location and paste it here.

Default parameters: Leave as blank

Once you have updated the details. Kindly save the configuration.

Step — 4 Integration of Jenkins with Team Foundation Server and build with MSBuild Plugin

Team Foundation Server login details configuration in Job

While creating a new freestyle project (job) in Jenkins, in the Source Code Management section you have to select Team Foundation Version Control (TFVC) and provide the following input to connect to the TFS project and get the latest code.

The explanation of the field below.

Collection URL: The URL to the team project collection

Project path: The path to the TFVC project must start with ‘$/’

Credentials: Username and password to connect TFS project.

Update the above details in the configuration and leave the other option as default.

Note: Make sure you have unchecked the “Use update” option.

MSBuild configuration in job

In the Build section, we have to build the code with the MSBuild plugin. Choose “Build a Visual Studio project or solution using MSBuild”.

Once you have clicked it, the Build box is added as shown below. You have to select the MSBuild version; make sure you select the one you configured in the previous step.

Example Input for MSBuild configuration

MSBuild Version: MS Build VS 2015

MSBuild Build File: ${WORKSPACE}\exmple.sln (Make sure you select the .sln file to build the code).

Command Line Arguments

/p:Configuration=PROD /p:OutDir=”C:\Program Files (x86)\Jenkins\workspace\exampleJob\Output”

That’s it. Save the Job and execute it.

Now you have your build in the following location

C:\Program Files (x86)\Jenkins\workspace\exampleJob\Output
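
From here, a post-build step can ship that output to AWS. The hypothetical sketch below assumes an earlier packaging step produced output.zip; it uploads the archive to S3 and triggers AWS CodeDeploy with boto3. The bucket, application, and deployment group names are placeholders, and the AWS CodeDeploy plugin for Jenkins can perform the same push from the job configuration.

import boto3

s3 = boto3.client("s3")
codedeploy = boto3.client("codedeploy")

bucket = "example-build-artifacts"      # placeholder S3 bucket
key = "exampleJob/output.zip"           # zipped contents of the MSBuild Output folder

# Upload the packaged revision produced by an earlier packaging step.
s3.upload_file("output.zip", bucket, key)

# Ask CodeDeploy to roll the revision out to the IIS deployment group.
codedeploy.create_deployment(
    applicationName="example-iis-app",           # placeholder CodeDeploy application
    deploymentGroupName="example-iis-app-prod",  # placeholder deployment group
    revision={
        "revisionType": "S3",
        "s3Location": {"bucket": bucket, "key": key, "bundleType": "zip"},
    },
)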

Hope you found it useful. Happy Integrating 🙂

from Powerupcloud Tech Blog – Medium

Granting AWS Console Access to OnPrem Active Directory Users through AWS Single Sign-On

AWS Single Sign-On (SSO) is a cloud SSO service that makes it easy to centrally manage SSO access to multiple AWS accounts and business applications. This blog demonstrates how you can avoid the creation of an additional IAM user to grant AWS console access to a corporate user. This can be achieved through the use of AWS Single Sign-On service.

Following this article gives you the following benefits:

  • A centralized place to grant access to multiple accounts.
  • Reduced cost of maintenance of operating your own SSO infrastructure.
  • Ease of access for users who can access their assigned accounts through their corporate credentials.

Prerequisites:

  • Active Directory Configured on OnPrem.
  • One AWS Master Account with multiple organizations.
  • VPN Tunnel established between the OnPrem network and AWS. Configure the route tables accordingly. Ensure you are provisioning the RODC server in the same subnet which has the connectivity to the AD sitting OnPrem.
  • Ensure the following ports are allowed on AD: TCP 53/UDP 53/TCP 389/UDP 389/TCP 88/UDP 88

Problem Statement:

The OnPrem Active Directory contains a large number of corporate users. We had to provide AWS console access to certain existing AD users/groups.

Solution:

One common, traditional way to provide console access is to create an IAM user for each corporate user and share the access details with them. This requires manual effort to create multiple IAM users, and each user has to remember separate AWS credentials every time they log into the console. Another solution is the AWS Single Sign-On service, where users can use their AD credentials to log into the AWS console. However, routing all requests to the OnPrem AD might increase the load on that AD server, so we created an RODC domain controller of the OnPrem AD on AWS Cloud.

Here’s the workflow:

  • AWS Organisations are created for multiple AWS accounts, for example, Prod/UAT/DR through a master account.
  • The Active Directory exists on OnPrem which already have a huge data of the corporate users. We are assuming two AD groups here: Admins group which requires Administrator privileges and ReadOnly group which requires only Read-Only privileges.
  • Create a ReadOnly Domain Controller (RODC) of the OnPrem Active Directory on AWS.
  • Create an AD Connector in the Master account using AWS Directory Service which connects to RODC on AWS but it also requires connectivity to the OnPrem AD since the Domain resolves to the primary DNS IP.
  • Configure SSO using AD Connector directory which fetches the AD Users/Groups from RODC. Assign the users/groups to the respective AWS Organisation and grant the required permissions to the users.
  • SSO creates permission set in the master account and respective IAM roles with given privileges will be created in the target organization console.

Creating Read-Only Domain Controller of the OnPrem Active Directory on AWS

Get the following values of the existing Active Directory:

  • DNS Server IP
  • Directory Domain name
  • Domain Admin Credentials i.e. Username/Password

Launch a Windows server, i.e. Microsoft Windows Server 2019 Base, on AWS. Log in to the server once it’s available. Go to Server Manager and add the Active Directory Domain Services (AD DS) role via Add Roles and Features.

Now go to Network and Sharing Center.

Ethernet→ Properties→ IPv4→ Update DNS Server IP → Provide DNS IP of the OnPrem AD.

Go to Server Manager → Workgroup → Under the “Computer Name” tab → Click on Change.

Provide the AD Domain Name. Input the AD user credentials.

Now, to set up the RODC, go to Server Manager → you will see an option to “Promote this server to a domain controller” in the top right corner. Change the current user to an AD Domain Admin user.

Select RODC and set a DSRM (Directory Services Restore Mode) password on the next screen.

Click Next and leave the default settings unchanged. Review the settings on the last screen.

Click Next and Install. At this point, the RODC is configured on the AWS server. Now you can log in to the RODC server by using Remote Desktop Protocol (RDP) connection through any one of the AD users.

Creating AD Connector in the Master account

Create an AD Connector through AWS Directory Service in the Master account where AWS Organizations are created.

Select the Directory size on the next screen.

Select VPC and subnets on the next page. Ensure these subnets are configured properly to have connectivity to the RODC DNS IP.

Provide the AD details such as DNS IP of the RODC (private IP of the RODC Server), AD Domain Name and any Service Account Credentials on the next page.

Wait till the directory is available.
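
The same AD Connector can also be created with the AWS SDK instead of the console. The sketch below is hypothetical; the domain name, VPC, subnets, DNS IP, and service account are placeholders you would replace with the values gathered earlier.

import boto3

ds = boto3.client("ds")

response = ds.connect_directory(
    Name="corp.example.com",                    # placeholder OnPrem AD domain name
    Password="service-account-password",        # placeholder service account password
    Size="Small",
    ConnectSettings={
        "VpcId": "vpc-0123456789abcdef0",                     # placeholder VPC
        "SubnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],  # subnets with connectivity to the RODC
        "CustomerDnsIps": ["10.0.1.10"],                      # private IP of the RODC server
        "CustomerUserName": "svc-adconnector",                # placeholder service account name
    },
)
print(response["DirectoryId"])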

Configuring AWS Single Sign-On for the AD Connector

Configure SSO in the same region as the AD Connector. Switch to the AWS SSO console.

Click on “Manage your directory”. Select Microsoft AD and select the AD connector which we have created in the previous step.

Select the account for which you want to give access to the AD users.

Click Assign users and select the Groups/Users to whom you want to give access to the selected account.

Create a new permission set. For the Admins group, we created a permission set with AdministratorAccess, and for the ReadOnly group, we created a permission set with ViewOnlyAccess. We can also create a custom permission set according to the requirement. Select AdministratorAccess for the Admins group.

Similarly, give ViewOnlyAccess to the ReadOnly Group in AD.
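
The same permission set and group assignment can also be scripted with the sso-admin API, as in the hypothetical sketch below; the SSO instance ARN, account ID, and the identity store ID of the AD group are placeholders you would look up in your own environment.

import boto3

sso_admin = boto3.client("sso-admin")

instance_arn = "arn:aws:sso:::instance/ssoins-0123456789abcdef"  # placeholder SSO instance ARN

# Create a view-only permission set and attach the AWS managed ViewOnlyAccess policy.
permission_set_arn = sso_admin.create_permission_set(
    Name="ReadOnly",
    Description="View-only access for the ReadOnly AD group",
    InstanceArn=instance_arn,
)["PermissionSet"]["PermissionSetArn"]

sso_admin.attach_managed_policy_to_permission_set(
    InstanceArn=instance_arn,
    PermissionSetArn=permission_set_arn,
    ManagedPolicyArn="arn:aws:iam::aws:policy/job-function/ViewOnlyAccess",
)

# Assign the ReadOnly AD group to a target account with that permission set.
sso_admin.create_account_assignment(
    InstanceArn=instance_arn,
    TargetId="111122223333",                             # placeholder AWS account ID
    TargetType="AWS_ACCOUNT",
    PermissionSetArn=permission_set_arn,
    PrincipalType="GROUP",
    PrincipalId="a1b2c3d4-5678-90ab-cdef-111122223333",  # placeholder identity store group ID
)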

On the SSO Dashboard, note down the User Portal URL which is used for log in to the console.

Hit the URL in the browser. The URL will redirect you to provide the AD Credentials:

Once you log in, you see the list of accounts the logged-in user has access to. The screenshot below shows that the logged-in user is User2. User2 is a member of the ReadOnly group, so it has ViewOnlyAccess to the assigned account.

Hit Management Console to log into the AWS console of the selected account.

And that’s all. Hope you found this article useful.

from Powerupcloud Tech Blog – Medium

Gone Fishin’

Well, not exactly Fishin’, but I’ll be on a month long vacation starting today. I won’t be posting new content, so we’ll all have a break. Disappointing, I know. Please use this time for quiet contemplation and other inappropriate activities.

If you really need a not so quick fix there’s always the back catalog of Stuff the Internet Says. Odds are there’s a lot you didn’t read—yet.

from High Scalability

Building an AWS Landing Zone from Scratch in Six Weeks

In an effort to deliver a simpler, smarter, and more unified experience on its website, the UK’s Ministry of Justice and its Lead Technical Architect, James Abley, created a bespoke AWS Landing Zone, a pre-defined template for an AWS account or infrastructure. And they did it in six weeks.

Supporting 33 agencies and public bodies, and making sure they all work together, the Ministry of Justice is at the heart of the United Kingdom’s justice system. Its task is to look after all parts of the justice system, including the courts, prisons, probation services, and legal aid, striving to bring the principles of justice to life for everyone in society.

In a This Is My Architecture video, shot at re:Invent 2018 in Las Vegas, James talks with AWS Solutions Architect Simon Treacy about the importance of delivering a consistent experience to his website’s customers, a mix of citizens and internal legal aid agency case workers.

Utilizing a number of AWS services, James walks us through the user experience, and he explains why he decided to put Amazon CloudFront and AWS Web Application Firewall (WAF) up front to improve the security posture of the ministry’s legacy applications and extend their lifespan. James also explains how he split traffic between two Availability Zones, using AWS Elastic Load Balancing (ELB) to provide higher availability and resilience, which will help with zero-downtime deployment later on.

*Check out more This Is My Architecture videos on YouTube.

About the author

Annik Stahl is a Senior Program Manager in AWS, specializing in blog and magazine content as well as customer ratings and satisfaction. Having been the face of Microsoft Office for 10 years as the Crabby Office Lady columnist, she loves getting to know her customers and wants to hear from you.

 

from AWS Architecture Blog