Tag: Compliance

AWS Security Profile: Ron Cully, Principal Product Manager, AWS Identity

In the weeks leading up to re:Invent, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS for nearly four years. I’m a Principal Product Manager in AWS Identity. I’ve spent most of my time covering our Managed Active Directory products, and over the past year I’ve taken on management for AWS Single Sign-On and AWS Identity and Access Management (IAM).

How do you explain your job to non-tech friends?

Identity is what people use when they sign in to their services. What we work on is the back-end systems that authenticate and manage access so that people have secure access to their services.

What are you currently working on that you’re excited about?

Wow, it’s hard to pick just one. So, I’d say I’m most excited about the work that we’re doing so that customers can use identities that they already have across all of AWS.

What’s the most challenging part of your job?

Making sure that we deliver the most important features that customers want, in the right sequence, as quickly as possible. To do that, we need to focus on the key pain points customers have right now and resolve those pain points in ways that are the most meaningful to them. We also need to make sure that we have the right roadmap and keep doing that on an iterative basis.

What’s your favorite part of your job?

I get to work with some incredibly smart people inside and outside of Amazon. It’s a really interesting space to be in. There’s a lot happening at the industry level, and we’re trying to sort out the puzzle of how we bring things together given what customers have and use today. Customers have all of this existing technology that they want to use, and they have a lot of investments in it. We want to make it possible for them to use those investments in new innovative ways that make their lives easier.

The AWS Identity team is growing rapidly. What are some of the biggest challenges that teams face during rapid growth?

One key challenge is hiring. How do we find great people? Amazon has some pretty high bars, and we need to find the right people that can ramp up quickly to help us solve the challenges that we want to go fix. The other thing is making sure that we stay on the same page. There’s a lot of work that we’re doing across a lot of different areas. So it’s important to stay in coordination so that we deliver the most important things that solve our customers’ current pain points.

What advice would you give to people coming on board the AWS Identity team?

Make sure that you’re highly customer focused. Dive deep because we really need to understand the details of what’s going on and what customers are trying to accomplish. Be a really effective communicator by breaking things down into the simplest terms. I find that people often get so caught up in the technology that they lose sight of the problem. It’s really important to remember that we’re solving problems that are very visceral to human beings. In order to get the correct results, you need to be able to communicate in a way that makes sense to anybody.

Which Amazon leadership principles have you relied on the most in your own career at AWS?

Certainly Customer Obsession. That’s absolutely imperative. Dive Deep of course. Learn and Be Curious is huge. But also a less popular principle: Have Backbone; Disagree and Commit. It’s important that we have healthy discussions. This principle isn’t about being confrontational. It’s about being smart about how you synthesize the information that you learn from your customers and bring forth your ideas and opinions in a respectful way. It’s important to have a healthy conversational debate about what’s right for customers, so that we can drive important things forward when they need to be done. At the same time, we must recognize that not all ideas or their timing are right. It’s important to understand the bigger picture of what’s going on, understand that a different approach might be better in that particular moment, and commit to moving forward as a team after the debate is finished.

What’s the most common misperception you encounter about AWS Identity?

I think there’s a huge amount of confusion in the Active Directory area about what you can and can’t do, and how it relates to what customers are doing with Azure AD. We probably have the best managed Active Directory in the cloud. But, people sometimes confuse Active Directory with Azure AD, which are completely different technologies. So, we try to help customers understand how our product works relative to Azure AD. They are complementary; they can work together.

Another area that’s confusing for customers is choosing which AWS identity system to use today. AWS identity systems have grown organically over time. We’ve listened to customers and added features, and so now we have a couple of different ways of approaching identity. We started out with IAM users and groups. Then over the past few years, we’ve made it possible to use Active Directory identities in AWS. We’ve also been embracing the use of standards-based federation. Federation enables customers who use identity systems like Okta, Ping, Google, or Azure AD to use those identities to sign in to AWS. Due to this organic growth, customers can choose between managing identities in IAM, creating them in AWS SSO, bringing them in from Active Directory by using AWS SSO, or using SAML federation through IAM. We also have the Cognito product that people have been adapting to use with IAM federation. Given where the technologies are now, it can be confusing for customers to know which identity system is the right one to use so that they’re on the right path going into the future. This is an area we are working hard to simplify and clarify for our customers.

What do you think is the biggest challenge facing the identity space right now?

I think it’s helping customers understand how to use the identity system that they have now—broadly, across all of the applications and services that they want to use—and how to provide them with a consistent experience. I think that’s one of the key industry challenges. We’ve come a long way, but there’s still a lot of road ahead of us to make that all possible at the industry level.

Looking to the future, how do you think the authorization and authentication landscape will evolve?

I think we’ll start to see more convergence on interoperable technologies for authentication. There’s some evolution already happening between the SAML model of authentication and OIDC (OpenID Connect). And I think we’ll start to see more convergence. One sticky spot in the industry right now is how to set up federation. It can be complicated and time consuming to set up, and there’s work that we’re doing in this space to help make it easier. We did a technology demonstration at Identiverse last June using the FastFed standards draft to connect identity providers and service providers together. In our demonstration, we showed how FastFed makes it possible to connect AWS SSO to Google in a couple of clicks. That enables customers to use the identities they already have and use AWS SSO as their AWS integrated permissions management tool to grant access to resources across all their AWS accounts. I think FastFed will really help customers because today it’s so complicated to try to connect identity providers to tens or hundreds of applications.

What does identity mean to you on a personal level?

When I think about identity, it’s about who I am, and there are different contexts for that, such as who I am as a consumer or who I am as an employee. Let’s focus on who I am as an employee: Today I may have different user identities and credentials, each to a different system. I also have to manage my passwords for each of those identities. If I make a mistake and use the wrong sign-in or password, I get blocked, and I might get locked out. These things get in the way of focusing on my job. Another example is that if I change my role within a company, I need access to new resources, and there are old resources that I should no longer be able to access. It’s really a pain today for people to navigate getting their access to resources set up correctly. It can take a month before you have all of the different permissions to access the things you need. So when I look at what I want to do for customers, it’s about “how do I make it really easy for people to get access to the things they need without compromising security?” I want to make it so that people can have one identity to use, and when there’s a change to their identity, the system automatically gives them access to what they need and removes access to what they don’t need. People shouldn’t have to go through all the painful processes of going to websites and talking to managers to get them to change group membership.

Will you be doing anything at re:Invent this year?

I’m involved in a few sessions.

I’ll be talking about our single sign-on product, AWS Single Sign-On. It enables customers to centrally manage access to the AWS Console, accounts, roles, and applications using identities from their Active Directory, or identities they create in AWS SSO. We’ll be talking about some exciting new features that we’ve released in that product area since the last re:Invent.

I’m also involved in a session about how enterprises can use Active Directory in the cloud. Customers have a lot of investment in their Windows environments on premises, and they’re migrating their workloads into the cloud. As they do that, those Windows workloads in the cloud need access to Active Directory. Customers often don’t want to manage the Active Directory infrastructure in the cloud. The operational pain of doing that detracts from what they’re trying to do, which is to get to the cloud and actually convert to serverless technologies where they get better economies of scale and more flexibility. AWS offers a managed Active Directory solution that customers can use with their Windows workloads while eliminating the overhead of operating Active Directory domain controllers in the cloud.

What are you hoping that your audience will do differently as a result of attending?

I would love to see customers realize they can take advantage of the services we offer in new ways, and then go home and deploy them. I would hope that they go back and do a proof of concept—go play with it and understand what it can do, see what kind of value it can bring, and then build out from there. Armed with the right information I think customers can streamline some processes in terms of how to get on to the cloud and take advantage of the cloud faster.

What do you recommend that first-time attendees do at re:Invent?

There’s so much amazing content there that you won’t be able to get to it all. So, get clear about what information you’re after, go through the session list, and get registered for the sessions. Sometimes these fill up fast! If you’re coming with a team, divide and conquer. But also leave some time to learn something new in an area you’re less familiar with. Also, take advantage of the presenters. Ask us questions! We’re here to help customers learn as much as they can. If you see me there, stop me and ask your questions!

If you had to pick any other job, what would you want to do with your life?

I would probably want to be in food safety. I used to not care about food at all. Then I went to an event where I made a life decision that made me think about my health and my food. So I started understanding more about food. I began realizing how much happens with our food today that we just don’t know about. There are a lot of things that I really don’t align with. I would love to see more transparency about our food so that we could have the ability to pick and choose what we want to eat based upon our values. If it wasn’t food safety, maybe politics.

Want more AWS Security news? Follow us on Twitter.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Ron Cully

Ron Cully is a Principal Product Manager at AWS where he leads feature and roadmap planning for workforce identity products at AWS. Ron has over 20 years of industry experience in product and program management of networking and directory related products. He is passionate about delivering secure, reliable solutions that help make it easier for customers to migrate directory aware applications and workloads to the cloud.

from AWS Security Blog

AWS achieves FedRAMP JAB High and Moderate Provisional Authorization across 18 services in the AWS US East/West and AWS GovCloud (US) Regions

It’s my pleasure to announce that we’ve expanded the number of AWS services that customers can use to run sensitive and highly regulated workloads in the federal government space. This expansion of our FedRAMP program marks a 28.6% increase in our number of FedRAMP authorizations.

Today, we’ve achieved FedRAMP authorizations for 6 services in our AWS US East/West Regions:

We also received 14 service authorizations in our AWS GovCloud (US) Regions:

In total, we now offer 48 AWS services authorized in the AWS US East/West Regions under FedRAMP Moderate and 43 services authorized in our AWS GovCloud (US) Regions under FedRAMP High. You can see our full, updated list of authorizations on the FedRAMP Marketplace. We also list all of our services in scope by compliance program on our Services in Scope page.

Our FedRAMP assessment was completed with a third-party assessment partner to ensure an independent validation of our technical, management, and operational security controls against the FedRAMP baselines.

We care deeply about our customers’ needs, and compliance is my team’s priority. As we expand in the federal space, we want to continue to onboard services into the compliance programs our customers are using, such as FedRAMP.

To learn what other public sector customers are doing on AWS, see our Government, Education, and Nonprofits Case Studies and Customer Success Stories. Stay tuned for future updates on our Services in Scope by Compliance Program page. If you have feedback about this blog post, let us know in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Amendaze Thomas

Amendaze is the manager of AWS Security’s Government Assessments and Authorization Program (GAAP). He has 15 years of experience providing advisory services to clients in the Federal government, and over 13 years’ experience supporting CISO teams with risk management framework (RMF) activities.

from AWS Security Blog

Updated whitepaper available: “Navigating GDPR Compliance on AWS”

The European Union’s General Data Protection Regulation 2016/679 (GDPR) safeguards EU citizens’ fundamental right to privacy and to personal data protection. In order to make local regulations coherent and homogeneous, the GDPR introduces and defines stringent new standards in terms of compliance, security and data protection.

The updated version of our Navigating GDPR Compliance on AWS whitepaper (.pdf) explains the role that AWS plays in your GDPR compliance process and shows how AWS can help your organization accelerate the process of aligning your compliance programs to the GDPR by using AWS cloud services.

AWS compliance, data protection, and security experts work with customers across the world to help them run workloads in the AWS Cloud, including customers who must operate within GDPR requirements. AWS teams also review what AWS is responsible for to make sure that our operations comply with the requirements of the GDPR so that customers can continue to use AWS services. The whitepaper provides guidelines to better orient you to the wide variety of AWS security offerings and to help you identify the service that best suits your GDPR compliance needs.

If you have feedback about this blog post, please submit comments in the Comments section below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Carmela Gambardella

Carmela graduated in Computer Science at the Federico II University of Naples, Italy. She has worked in a variety of roles at large IT companies, including as a software engineer, security consultant, and security solutions architect. Her areas of interest include data protection, security and compliance, application security, and software engineering. In April 2018, she joined the AWS Public Sector Solution Architects team in Italy.

Giuseppe Russo

Giuseppe is a Security Assurance Manager for AWS in Italy. He has a Master’s Degree in Computer Science with a specialization in cryptography, security, and coding theory. Giuseppe is a seasoned information security practitioner with many years of experience engaging key stakeholders, developing guidelines, and influencing the security market on strategic topics such as privacy and critical infrastructure protection.

from AWS Security Blog

AWS Security Profile: Byron Cook, Director of the AWS Automated Reasoning Group

Byron Cook leads the AWS Automated Reasoning Group, which automates proof search in mathematical logic and builds tools that provide AWS customers with provable security. Byron has pushed boundaries in this field, delivered real-world applications in the cloud, and fostered a sense of community amongst its practitioners. In recognition of Byron’s contributions to cloud security and automated reasoning, the UK’s Royal Academy of Engineering elected him as one of 7 new Fellows in computing this year.

I recently sat down with Byron to discuss his new Fellowship, the work that it celebrates, and how he and his team continue to use automated reasoning in new ways to provide higher security assurance for customers in the AWS cloud.

Congratulations, Byron! Can you tell us a little bit about the Royal Academy of Engineering, and the significance of being a Fellow?

Thank you. I feel very honored! The Royal Academy of Engineering is focused on engineering in the broad sense; for example, aeronautical, biomedical, materials, etc. I’m one of only 7 Fellows elected this year who specialize in computing or logic, which makes the announcement really unique.

As for what the Royal Academy of Engineering is: the UK has Royal Academies for key disciplines such as music, drama, etc. The Royal Academies focus financial support and recognition on these fields, and give a location and common meeting place. The Royal Academy of Music, for example, is near Regent’s Park in West London. The Royal Academy of Engineering’s building is in Carlton Place, one of the most exclusive locations in central London near Pall Mall and St. James’ Park. I’ve been to a number of lectures and events in that space. For example, it’s where I spoke ten years ago when I was the recipient of the Roger Needham prize. Some examples of previously elected Fellows include Sir Frank Whittle, who invented the jet engine; radar pioneer Sir George MacFarlane; and Sir Tim Berners-Lee, who developed the World Wide Web.

Can you tell us a little bit about why you were selected for the award?

The letter I received from the Royal Academy says it better than I could say myself:

“Byron Cook is a world-renowned leader in the field of formal verification. For over 20 years Byron has worked to bring this field from academic hypothesis to mechanised industrial reality. Byron has made major research contributions, built influential tools, led teams that operationalised formal verification activities, and helped establish connections between others that have dramatically accelerated growth of the area. Byron’s tools have been applied to a wide array of topics, e.g. biological systems, computer operating systems, programming languages, and security. Byron’s Automated Reasoning Group at Amazon is leading the field to even greater success”.

Formal verification is the one term here that may be foreign to you, so perhaps I should explain. Formal verification is the use of mathematical logic to prove properties of systems. Euclid, for example, used formal verification in ~300 BC to prove that the Pythagorean theorem holds for all possible right-angled triangles. Today we are using formal verification to prove things about all of the possible configurations that a computer program might reach. When I founded Amazon’s Automated Reasoning Group, I named it that because my ambition was to automate all of the reasoning performed during formal verification.

Can you give us a bit of detail about some of the “research contributions and tools” mentioned in the text from Royal Academy of Engineering?

Probably my best-known work before joining Amazon was on the Terminator tool. Terminator was designed to reason at compile-time about what a given computer program would eventually do when running in production. For example, “Will the program eventually halt?” This is the famous “Halting problem,” proved undecidable in the 1930s. The Terminator tool piloted a new approach to the problem, which is popular now, based on the idea of incrementally improving the best guess for a proof based on failed proof attempts. This was the first known approach capable of scaling termination proving to industrial problems. My colleagues and I used Terminator to find bugs in device drivers that could cause operating systems to become unresponsive. We found many bugs in device drivers that ran keyboards, mice, network devices, and video cards. The Terminator tool was also the basis of BioModelAnalyzer. It turns out that there’s a connection between diseases like Leukemia and the Halting problem: Leukemia is a termination bug in the genetic-regulatory pathways in your blood. You can think of it in the same way you think of a device driver that’s stuck in an infinite loop, causing your computer to freeze. My tools helped answer fundamental questions that no tool could solve before. Several pharmaceutical companies use BioModelAnalyzer today to understand disease and find new treatment options. And these days, there is an annual international competition with many termination provers that are much better than the Terminator. I think that this is what the Royal Academy is talking about when they say I moved the area from “academic hypothesis to mechanized industrial reality.”

I have also worked on problems related to the question of P=NP, the most famous open problem in computing theory. From 2000-2006, I built tools that made NP feel equal to P in certain limited circumstances to try and understand the problem better. Then I focused on circumstances that aligned with important industrial problems, like proving the absence of bugs in microprocessors, flight control software, telecommunications systems, and railway control systems. These days the tools in this space are incredibly powerful. You should check out the software tools CVC4 or Z3.

And, of course, there’s my work with the Automated Reasoning Group, where I’ve built a team of domain experts that develop and apply formal verification tools to a wide variety of problems, helping make the cloud more secure. We have built tools that automatically reason about the semantics of policies, networks, cryptography, virtualization, etc. We reason about the implementation of Amazon Web Services (AWS) itself, and we’ve built tools that help customers prove the correctness of their AWS-based implementations.

Could you go into a bit more detail about how this work connects to Amazon and its customers?

AWS provides cloud services globally. Cloud is shorthand for on-demand access to IT resources such as compute, storage, and analytics via the Internet with pay-as-you-go pricing. AWS has a wide variety of customers, ranging from individuals to the largest enterprises, and practically all industries. My group develops mathematical proof tools that help make AWS more secure, and helps AWS customers understand how to build in the cloud more securely.

I first became an AWS customer myself when building BioModelAnalyzer. AWS allowed those of us working on this project to solve major scientific challenges (see this Nature Scientific Report for an example) using very large datacenters, but without having to buy the machines, maintain the machines, maintain the rooms that the machines would sit in, the A/C system that would keep them cool, etc. I was also able to easily provide our customers with access to the tool via the cloud, because it’s all delivered over the internet. I just pointed people to the endpoint and, presto, they were using the tool. About 5 years before developing BioModelAnalyzer, I was developing proof tools for device drivers and I gave a demo of the tool to my executive leadership. At the end of the demo, I was asked if 5,000 machines would help us do more proofs. Computationally, the answer was an obvious “yes,” but then I thought a minute about the amount of overhead required to manage a fleet of 5,000 machines and reluctantly replied “No, but thank you very much for the offer!” With AWS, it’s not even a question. Anyone with an Amazon account can provision 5,000 machines for practically nothing. In less than 5 minutes, you can be up and running and computing with thousands of machines.

What I love about working at AWS is that I can focus a very small team on proving the correctness of some aspect of AWS (for example, the cryptography) and, because of the size and importance of the customer base, we make much of the world meaningfully more secure. Just to name a few examples: s2n (the Amazon TLS implementation); the AWS Key Management Service (KMS), which allows customers to securely store crypto keys; and networking extensions to the IoT operating system Amazon FreeRTOS, which customers use to link cloud to IoT devices, such as robots in factories. We also focus on delivering service features that help customers prove the correctness of their AWS-based implementations. One example is Tiros, which powers a network reachability feature in Amazon Inspector. Another example is Zelkova, which powers features in services such as Amazon S3, AWS Config, and AWS IoT Device Defender.

When I think of mathematical logic I think of obscure theory and messy blackboards, not practical application. But it sounds like you’ve managed to balance the tension between theory and practical industrial problems?

I think that this is a common theme that great scientists don’t often talk about. Alan Turing, for example, did his best work during the war. John Snow, who made fundamental contributions to our understanding of germs and epidemics, did his greatest work while trying to figure out why people were dying in the streets of London. Christopher Strachey, one of the founders of our field, wrote:

“It has long been my personal view that the separation of practical and theoretical work is artificial and injurious. Much of the practical work done in computing, both in software and in hardware design, is unsound and clumsy because the people who do it have not any clear understanding of the fundamental design principles in their work. Most of the abstract mathematical and theoretical work is sterile because it has no point of contact with real computing.”

Throughout my career, I’ve been at the intersection of practical and theoretical. In the early days, this was driven by necessity: I had two children during my PhD and, frankly, I needed the money. But I soon realized that my deep connection to real engineering problems was an advantage and not a disadvantage, and I’ve tried through the rest of my career to stay in that hot spot of commercially applicable problems while tackling abstract mathematical topics.

What’s next for you? For the Automated Reasoning Group? For your scientific field?

The Royal Academy of Engineering kindly said that I’ve brought “this field from academic hypothesis to mechanized industrial reality.” That’s perhaps true, but we are very far from done: it’s not yet an industrial standard. The full power of automated reasoning is not yet available to everyone because today’s tools are either difficult to use or weak. The engineering challenge is to make them both powerful and easy to use. With that I believe that they’ll become a key part of every software engineer’s daily routine. What excites me is that I believe that Amazon has a lot to teach me about how to operationalize the impossible. That’s what Amazon has done over and over again. That’s why I’m at Amazon today. I want to see these proof techniques operating automatically at Amazon scale.

Links:
Provable security webpage
Lecture: Fundamentals for Provable Security at AWS
Lecture: The evolution of Provable Security at AWS
Lecture: Automating compliance verification using provable security
Lecture: Byron speaks about Terminator at University of Colorado
https://biomodelanalyzer.org/

If you have feedback about this post, let us know in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

from AWS Security Blog

How to migrate symmetric exportable keys from AWS CloudHSM Classic to AWS CloudHSM

In August 2017, we announced the “new” AWS CloudHSM service, which had a lot of improvements over AWS CloudHSM Classic (for clarity in this post, I’ll refer to the services as New CloudHSM and CloudHSM Classic). These advantages in security, scalability, usability, and economy included FIPS 140-2 Level 3 certification, fully managed high availability and backup, a management console, and lower costs.

Now, we turn another page. The Luna 5 HSMs used for CloudHSM Classic are reaching end of life, and the CloudHSM Classic service is subsequently being decommissioned, so CloudHSM Classic users must migrate their cryptographic key material to New CloudHSM.

In this post, I’ll show you how to use the RSA OAEP (optimal asymmetric encryption padding) wrapping mechanism, which was introduced in the CloudHSM client version 2.0.0, to move key material from CloudHSM Classic to New CloudHSM without exposing the plain text of the key material outside the HSM boundaries. You’ll use an RSA public key to wrap the key material (export it in encrypted form) on CloudHSM Classic, then use the corresponding RSA Private Key to unwrap it on New CloudHSM.

NOTE: This solution only works for symmetric exportable keys. Asymmetric keys on CloudHSM Classic can’t be exported. To replace non-exportable and asymmetric keys, you must generate new keys on New CloudHSM, then use the old keys to decrypt and the new keys to re-encrypt your data.

Solution overview

My solution shows you how to use the CKDemo utility on CloudHSM Classic, and key_mgmt_util on New CloudHSM, to: generate an RSA wrapping key pair; use it to wrap keys on CloudHSM Classic; and then unwrap the keys on New CloudHSM. These are all done via the RSA OAEP mechanism.
The following diagram provides a summary of the steps involved in the solution:

Figure 1: Solution overview

  1. Generate the RSA wrapping key pair on New CloudHSM.
  2. Export the RSA Public Key to the New CloudHSM client instance.
  3. Move the RSA public key to the CloudHSM Classic client instance.
  4. Import the RSA public key to CloudHSM Classic.
  5. Wrap the key using the imported RSA public key.
  6. Move the wrapped key to the New CloudHSM client instance.
  7. Unwrap the key on New CloudHSM with the RSA Private Key.

NOTE: You can perform the same procedure using supported libraries, such as JCE (Java Cryptography Extension) and PKCS#11. For example, you can use the wrap_with_imported_rsa_key sample to import an RSA public key into CloudHSM Classic, use that key to wrap your CloudHSM Classic keys, and then use the rsa_wrapping sample (specifically the rsa_oaep_unwrap_key function) to unwrap the keys into New CloudHSM using the RSA OAEP mechanism.

Prerequisites

  1. An active New CloudHSM cluster with at least one active hardware security module (HSM). Follow the Getting Started Guide to create and initialize a New CloudHSM cluster.
  2. An Amazon Elastic Compute Cloud (Amazon EC2) instance with the New CloudHSM client installed and configured to connect to the New CloudHSM cluster. You can refer to the Getting Started Guide to configure and connect the client instance.
  3. New CloudHSM CU (crypto user) credentials.
  4. An EC2 instance with the CloudHSM Classic client installed and configured to connect to the CloudHSM Classic partition or the high-availability (HA) partition group that contains the keys you want to migrate. You can refer to this guide to install and configure a CloudHSM Classic Client.
  5. The Password of the CloudHSM Classic partition or HA partition group that contains the keys you want to migrate.
  6. The handle of the symmetric key on CloudHSM Classic you want to migrate.

Step 1: Generate the RSA wrapping key pair on New CloudHSM

1.1. On the New CloudHSM client instance, run the key_mgmt_util command line tool, and log in as the CU, as described in Getting Started with key_mgmt_util.


Command:  loginHSM -u CU -s <CU user> -p <CU password>
    
	Cfm3LoginHSM returned: 0x00 : HSM Return: SUCCESS

	Cluster Error Status
	Node id 0 and err state 0x00000000 : HSM Return: SUCCESS

1.2. Run the following genRSAKeyPair command to generate an RSA key pair with the label classic_wrap. Take note of the private and public key handles, as they’ll be used in the coming steps.


Command:  genRSAKeyPair -m 2048 -e 65537 -l classic_wrap

	Cfm3GenerateKeyPair returned: 0x00 : HSM Return: SUCCESS

	Cfm3GenerateKeyPair:    public key handle: 407    private key handle: 408

	Cluster Error Status
	Node id 0 and err state 0x00000000 : HSM Return: SUCCESS

Step 2: Export the RSA public key to the New CloudHSM client instance

2.1. Run the following exportPubKey command to export the RSA public key to the New CloudHSM client instance using the public key handle you received in step 1.2 (407, in my example). This will export the public key to a file named wrapping_public.pem.


Command:  exportPubKey -k <public key handle> -out wrapping_public.pem

PEM formatted public key is written to wrapping_public.pem

	Cfm3ExportPubKey returned: 0x00 : HSM Return: SUCCESS

Step 3: Move the RSA public key to the CloudHSM Classic client instance

Move the RSA Public Key to the CloudHSM Classic client instance using scp (or any other tool you prefer).

Step 4: Import the RSA public key to CloudHSM Classic

4.1. On the CloudHSM Classic instance, use the cmu command as shown below to import the RSA public key with the label classic_wrap. You’ll need the partition or HA partition group password for this command, plus the slot number of the partition or HA partition group (you can get the slot number of your partition or HA partition group using the vtl listSlots command).


# cmu import -inputFile=wrapping_public.pem -label classic_wrap
Select token
 [1] Token Label: partition1
 [2] Token Label: partition2
 [3] Token Label: partition3
 Enter choice: <slot number>
Please enter password for token in slot 1 : <password>

4.2. Run the command below to get the handle of the imported key (shown in the output below).


# cmu list -label classic_wrap
Select token
 [1] Token Label: partition1
 [2] Token Label: partition2
 [3] Token Label: partition3
 Enter choice: <slot number>
Please enter password for token in slot 1 : <password>
handle=149	label=classic_wrap

4.3. Run the CKDemo utility.


# ckdemo

4.4. Open a session to the partition or HA partition group slot.


Enter your choice : 1

Slots available:
	slot#1 - LunaNet Slot
	slot#2 - LunaNet Slot
	...
Select a slot: <slot number>

SO[0], normal user[1], or audit user[2]? 1

Status: Doing great, no errors (CKR_OK)

4.5. Log in using the partition or HA partition group pin.


Enter your choice : 3
Security Officer[0]
Crypto-Officer  [1]
Crypto-User     [2]:
Audit-User      [3]: 1
Enter PIN          : <password>

Status: Doing great, no errors (CKR_OK)

4.6. Change the CKA_WRAP attribute of the imported RSA public key so that you can use it for wrapping. Use the imported public key handle you received in step 4.2 above (149, in my example).


Enter your choice : 25

Which object do you want to modify (-1 to list available objects) : <imported public key handle>

Edit template for set attribute operation.

(1) Add Attribute   (2) Remove Attribute   (0) Accept Template :1

 0 - CKA_CLASS                  1 - CKA_TOKEN
 2 - CKA_PRIVATE                3 - CKA_LABEL
 4 - CKA_APPLICATION            5 - CKA_VALUE
 6 - CKA_XXX                    7 - CKA_CERTIFICATE_TYPE
 8 - CKA_ISSUER                 9 - CKA_SERIAL_NUMBER
10 - CKA_KEY_TYPE              11 - CKA_SUBJECT
12 - CKA_ID                    13 - CKA_SENSITIVE
14 - CKA_ENCRYPT               15 - CKA_DECRYPT
16 - CKA_WRAP                  17 - CKA_UNWRAP
18 - CKA_SIGN                  19 - CKA_SIGN_RECOVER
20 - CKA_VERIFY                21 - CKA_VERIFY_RECOVER
22 - CKA_DERIVE                23 - CKA_START_DATE
24 - CKA_END_DATE              25 - CKA_MODULUS
26 - CKA_MODULUS_BITS          27 - CKA_PUBLIC_EXPONENT
28 - CKA_PRIVATE_EXPONENT      29 - CKA_PRIME_1
30 - CKA_PRIME_2               31 - CKA_EXPONENT_1
32 - CKA_EXPONENT_2            33 - CKA_COEFFICIENT
34 - CKA_PRIME                 35 - CKA_SUBPRIME
36 - CKA_BASE                  37 - CKA_VALUE_BITS
38 - CKA_VALUE_LEN             39 - CKA_LOCAL
40 - CKA_MODIFIABLE            41 - CKA_ECDSA_PARAMS
42 - CKA_EC_POINT              43 - CKA_EXTRACTABLE
44 - CKA_ALWAYS_SENSITIVE      45 - CKA_NEVER_EXTRACTABLE
46 - CKA_CCM_PRIVATE           47 - CKA_FINGERPRINT_SHA1
48 - CKA_OUID                  49 - CKA_X9_31_GENERATED
50 - CKA_PRIME_BITS            51 - CKA_SUBPRIME_BITS
52 - CKA_USAGE_COUNT           53 - CKA_USAGE_LIMIT
54 - CKA_EKM_UID               55 - CKA_GENERIC_1
56 - CKA_GENERIC_2             57 - CKA_GENERIC_3
58 - CKA_FINGERPRINT_SHA256
Select which one: 16
Enter boolean value: 1

CKA_WRAP=01

(1) Add Attribute   (2) Remove Attribute   (0) Accept Template :0

Status: Doing great, no errors (CKR_OK)

Step 5: Wrap the key using the imported RSA public key

5.1. Check whether the symmetric key you want to migrate is exportable. To do so, run the command below with the handle of the key you want to migrate, and confirm that the value of the CKA_EXTRACTABLE attribute (shown in the output below) is equal to 1. Otherwise, the key can’t be exported.


Enter your choice : 27

Enter handle of object to display (-1 to list available objects): <handle of the key to be migrated>
Object handle=120
CKA_CLASS=00000004
CKA_TOKEN=01
CKA_PRIVATE=01
CKA_LABEL=Generated AES Key
CKA_KEY_TYPE=0000001f
CKA_ID=
CKA_SENSITIVE=01
CKA_ENCRYPT=01
CKA_DECRYPT=01
CKA_WRAP=01
CKA_UNWRAP=01
CKA_SIGN=01
CKA_VERIFY=01
CKA_DERIVE=01
CKA_START_DATE=
CKA_END_DATE=
CKA_VALUE_LEN=00000020
CKA_LOCAL=01
CKA_MODIFIABLE=01
CKA_EXTRACTABLE=01
CKA_ALWAYS_SENSITIVE=01
CKA_NEVER_EXTRACTABLE=00
CKA_CCM_PRIVATE=00
CKA_FINGERPRINT_SHA1=f8babf341748ba5810be21acc95c6d4d9fac75aa
CKA_OUID=29010002f90900005e850700
CKA_EKM_UID=
CKA_GENERIC_1=
CKA_GENERIC_2=
CKA_GENERIC_3=
CKA_FINGERPRINT_SHA256=7a8efcbff27703e281617be3c3d484dc58df6a78f6b144207c1a54ad32a98c00

Status: Doing great, no errors (CKR_OK)

5.2. Wrap the key using the imported RSA public key. This will create a file called wrapped.key that contains the wrapped key. Make sure to use the imported public key handle you received in step 4.2 above (149, in my example) and the handle of the key you want to migrate.


Enter your choice : 60
[1]DES-ECB        [2]DES-CBC        [3]DES3-ECB       [4]DES3-CBC
                                    [7]CAST3-ECB      [8]CAST3-CBC
[9]RSA            [10]TRANSLA       [11]DES3-CBC-PAD  [12]DES3-CBC-PAD-IPSEC
[13]SEED-ECB      [14]SEED-CBC      [15]SEED-CBC-PAD  [16]DES-CBC-PAD
[17]CAST3-CBC-PAD [18]CAST5-CBC-PAD [19]AES-ECB       [20]AES-CBC
[21]AES-CBC-PAD   [22]AES-CBC-PAD-IPSEC [23]ARIA-ECB  [24]ARIA-CBC
[25]ARIA-CBC-PAD
[26]RSA_OAEP    [27]SET_OAEP
Select mechanism for wrapping: 26

Enter filename of OAEP Source Data [0 for none]: 0

Enter handle of wrapping key (-1 to list available objects) : <imported public key handle>

Enter handle of key to wrap (-1 to list available objects) : <handle of the key to be migrated>
Wrapped key was saved in file wrapped.key

Status: Doing great, no errors (CKR_OK)

Step 6: Move the wrapped key to the New CloudHSM client instance

Move the wrapped key to the New CloudHSM client instance using scp (or any other tool you prefer).

Step 7: Unwrap the key on New CloudHSM with the RSA Private Key

7.1. On the New CloudHSM client instance, run the key_mgmt_util command line tool and log in as the CU.


Command:  loginHSM -u CU -s <CU user> -p <CU password>

	Cfm3LoginHSM returned: 0x00 : HSM Return: SUCCESS

	Cluster Error Status
	Node id 0 and err state 0x00000000 : HSM Return: SUCCESS

7.2. Run the following unWrapKey command to unwrap the key using the RSA private key handle you received in step 1.2 (408, in my example). The output of the command should show the handle of the newly unwrapped key (410, in the output below).


Command:  unWrapKey -f wrapped.key -w <private key handle> -m 8 -noheader -l unwrapped_aes -kc 4 -kt 31

	Cfm3CreateUnwrapTemplate2 returned: 0x00 : HSM Return: SUCCESS

	Cfm2UnWrapWithTemplate3 returned: 0x00 : HSM Return: SUCCESS

	Key Unwrapped.  Key Handle: 410

	Cluster Error Status
	Node id 0 and err state 0x00000000 : HSM Return: SUCCESS

Conclusion

Using RSA OAEP for key migration ensures that your key material doesn’t leave the HSM boundary in plain text, as it’s encrypted using an RSA public key before being exported from CloudHSM Classic, and it can only be decrypted by New CloudHSM through the RSA private key that is generated and kept on New CloudHSM.
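
To make the wrap/unwrap mechanism concrete, here is a minimal sketch using the Python cryptography library. This runs entirely outside any HSM and is for illustration only; in the real migration, the RSA private key is generated on New CloudHSM and never leaves it.

# Illustrative only: RSA OAEP wrap/unwrap of an AES key with the
# Python "cryptography" library. In the actual migration, the RSA
# private key never leaves New CloudHSM.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

aes_key = os.urandom(32)  # stand-in for the symmetric key being migrated

oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)
wrapped = public_key.encrypt(aes_key, oaep)     # "wrap" on CloudHSM Classic
unwrapped = private_key.decrypt(wrapped, oaep)  # "unwrap" on New CloudHSM
assert unwrapped == aes_key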

My post provides an example of how to use the ckdemo and key_mgmt_util utilities for the migration, but the same procedure can also be performed using the supported software libraries, such as the Java JCE library or the PKCS#11 library, to migrate larger volumes of keys in an automated manner.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Mohamed AboElKheir

Mohamed AboElKheir is an Application Security Engineer who works with different teams to ensure AWS services, applications, and websites are designed and implemented to the highest security standards. He is a subject matter expert for CloudHSM and is always enthusiastic about assisting CloudHSM customers with advanced issues and use cases. Mohamed is passionate about InfoSec, specifically cryptography, penetration testing (he’s OSCP certified), application security, and cloud security (he’s AWS Security Specialty certified).

from AWS Security Blog

Tips for building a cloud security operating model in the financial services industry

My team helps financial services customers understand how AWS services operate so that you can incorporate AWS into your existing processes and security operations centers (SOCs). As soon as you create your first AWS account for your organization, you’re live in the cloud. So, from day one, you should be equipped with certain information: you should understand some basics about how our products and services work, you should know how to spot when something bad could happen, and you should understand how to recover from that situation. Below is some of the advice I frequently offer to financial services customers who are just getting started.

How to think about cloud security

Security is security – the principles don’t change. Many of the on-premises security processes that you have now can extend directly to an AWS deployment. For example, your processes for vulnerability management, security monitoring, and security logging can all be transitioned over.

That said, AWS is more than just infrastructure. I sometimes talk to customers who are only thinking about the security of their AWS Virtual Private Clouds (VPCs), and about the Amazon Elastic Compute Cloud (EC2) instances running in those VPCs. And that’s good; it’s traditional network security, and it remains quite standard. But I also ask my customers questions that focus on other services they may be using. For example:

  • How are you thinking about who has Database Administrator (DBA) rights for Amazon Aurora Serverless? Aurora Serverless is a managed database service that lets AWS do the heavy lifting for many DBA tasks.
  • Do you understand how to configure (and monitor the configuration of) your Amazon Athena service? Athena lets you query large amounts of information that you’ve stored in Amazon Simple Storage Service (S3).
  • How will you secure and monitor your AWS Lambda deployments? Lambda is a serverless platform that has no infrastructure for you to manage.

Understanding AWS security services

As a customer, it’s important to understand the information that’s available to you about the state of your cloud infrastructure. Typically, AWS delivers much of that information via the Amazon CloudWatch service. So, I encourage my customers to get comfortable with CloudWatch, alongside our AWS security services. The key services that any security team needs to understand include:

  • Amazon GuardDuty, which is a threat detection system for the cloud.
  • AWS CloudTrail, which provides a log of AWS API calls.
  • VPC Flow Logs, which enables you to capture information about the IP traffic going to and from network interfaces in your VPC.
  • AWS Config, which records all the configuration changes that your teams have made to AWS resources, allowing you to assess those changes.
  • AWS Security Hub, which offers a “single pane of glass” that helps you assess AWS resources and collect information from across your security services. It gives you a unified view of resources per Region, so that you can more easily manage your security and compliance workflow.

These tools make it much quicker for you to get up to speed on your cloud security status and establish a position of safety.
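
If your team writes code, all of these services are reachable through the AWS SDKs. As a small, hedged sketch (not a complete SOC integration), here’s how you might pull current Amazon GuardDuty findings with the AWS SDK for Python (Boto3), assuming GuardDuty is enabled and credentials are configured:

import boto3

# List each GuardDuty detector in the Region, then print its findings.
guardduty = boto3.client('guardduty')
for detector_id in guardduty.list_detectors()['DetectorIds']:
    finding_ids = guardduty.list_findings(DetectorId=detector_id)['FindingIds']
    if not finding_ids:
        continue
    findings = guardduty.get_findings(DetectorId=detector_id,
                                      FindingIds=finding_ids)['Findings']
    for finding in findings:
        print(finding['Severity'], finding['Type'], finding['Title'])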

Getting started with automation in the cloud

You don’t have to be a software developer to use AWS. You don’t have to write any code; the basics are straightforward. But to optimize your use of AWS and to get faster at automating, there is a real advantage if you have coding skills. Automation is the core of the operating model. We have a number of tutorials that can help you get up to speed.
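
As one small, hedged example of the kind of automation those tutorials build toward, the Boto3 script below flags security groups that allow SSH from anywhere; the port and CIDR checked here are illustrative choices, not a complete audit policy:

import boto3

# Flag security groups that allow SSH (port 22) from 0.0.0.0/0.
# A sketch, not a complete audit tool.
ec2 = boto3.client('ec2')
for sg in ec2.describe_security_groups()['SecurityGroups']:
    for rule in sg.get('IpPermissions', []):
        if rule.get('FromPort') == 22:
            for ip_range in rule.get('IpRanges', []):
                if ip_range.get('CidrIp') == '0.0.0.0/0':
                    print('Open SSH:', sg['GroupId'], sg['GroupName'])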

Self-service cloud security resources for financial services customers

There are people like me who can come and talk to you. But to keep you from having to wait for us, we also offer a lot of self-service cloud security resources on our website.

We offer a free digital training course on AWS security fundamentals, plus webinars on financial services topics. We also offer an AWS security certification, which lets you show that your security knowledge has been validated by a third party.

There are also a number of really good videos you can watch. For example, we had our inaugural security conference, re:Inforce, in Boston this past June. The videos and slides from the conference are now on YouTube, so you can sit and watch at your own pace. If you’re not sure where to start, try this list of popular sessions.

Finding additional help

You can work with a number of technology partners to help extend your security tools and processes to the cloud.

  • Our AWS Professional Services team can come and help you on site. In addition, we can simulate security incidents with you to help you get comfortable with security and cloud technology and how to respond to incidents.
  • AWS security consulting partners can also help you develop processes or write the code that you might need.
  • The AWS Marketplace is a wonderful self-service location where you can get all sorts of great security solutions, including finding a consulting partner.

And if you’re interested in speaking directly to AWS, you can always get in touch. There are forms on our website, or you can reach out to your AWS account manager and they can help you find the resources that are necessary for your business.

Conclusion

Financial services customers face some tough security challenges. You handle large amounts of data, and it’s really important that this data is stored securely and that its privacy is respected. We know that our customers do lots of due diligence on AWS before adopting our services, and they have many different regulatory environments within which they have to work. In turn, we want to help customers understand how they can build a cloud security operating model that meets their needs while using our services.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Stephen Quigg

Stephen Quigg is a Principal Securities Solutions Architect within AWS Financial Services. Quigg started his AWS career in Sydney, Australia, but returned home to Scotland three years ago having missed the wind and rain too much. He manages to fit some work in between being a husband and father to two angelic children and making music.

from AWS Security Blog

How to use AWS Secrets Manager to securely store and rotate SSH key pairs

AWS Secrets Manager provides full lifecycle management for secrets within your environment. In this post, Maitreya and I will show you how to use Secrets Manager to store, deliver, and rotate SSH keypairs used for communication within compute clusters. Rotation of these keypairs is a security best practice, and sometimes a regulatory requirement. Traditionally, these keypairs have been associated with a number of tough challenges, such as synchronizing key rotation across all compute nodes, enabling detailed logging and auditing, and managing which users are able to modify the secrets.

However, rotating the keypair on all of a compute cluster’s nodes must be done in a tightly coordinated fashion, and failures generally result in availability risks. Moreover, the keypairs themselves are highly sensitive security credentials that must be carefully controlled with fine-grained access controls, detailed monitoring, and audit logging. These are precisely the types of tough challenges that AWS Secrets Manager solves for you.

In this post, we’ll show you how to secure, rotate, and use SSH keypairs for inter-cluster communication. You’ll use an AWS CloudFormation template to launch a cluster and configure Secrets Manager. Then we’ll show you how to use Secrets Manager to deliver the keypair to the cluster and use it for management operations, such as securely copying a file between nodes. Finally, we’ll use Secrets Manager to seamlessly rotate the keypair used by the cluster without any changes or outages. In this post, we’ve highlighted compute clusters, but you can use Secrets Manager to apply this solution directly to any SSH based use-case.

Solution overview

The following architecture diagram presents an overview of the solution:
 

Figure 1: Solution architecture

The sample architecture created by CloudFormation includes one master node, three worker nodes, AWS Secrets Manager (which utilizes a rotation AWS Lambda function), and AWS Systems Manager. Setting up the cluster is out of scope for this post; in our walkthrough, we’ll focus on the keypair rotation architecture.

Secrets Manager uses staging labels to identify different versions of a secret during rotation. A staging label is a text string. For example, by default, AWSCURRENT is attached to the current version of the secret, while AWSPENDING will be attached to new versions of the secret before they have been verified and deployed to corresponding resources.
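
You can see which staging label is attached to which version of a secret by calling DescribeSecret. Here’s a minimal Boto3 sketch; the secret name /dev/ssh matches the one you’ll create later in this post:

import boto3

# Print each version of the secret and its staging labels,
# e.g. ['AWSCURRENT'], ['AWSPENDING'], or ['AWSPREVIOUS'].
secretsmanager = boto3.client('secretsmanager')
response = secretsmanager.describe_secret(SecretId='/dev/ssh')
for version_id, stages in response['VersionIdsToStages'].items():
    print(version_id, stages)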

As shown in the diagram:

  1. A secret is created in AWS Secrets Manager. The secret holds the SSH keypair that the master node will use to connect to the other nodes in the cluster. Upon keypair rotation, Secrets Manager will invoke a Lambda function (labeled 1.a in the diagram). The Lambda function will perform four steps:
    • 1.b: createSecret – create a new SSH keypair and store the private key as a new version of the secret.
    • 1.c: setSecret – label the newly created secret version with the label AWSPENDING and copy the public key to the worker nodes with AWS Systems Manager Run Command.

    The Lambda function will also perform two steps not shown in the diagram:

    • testSecret – verify that the new SSH keypair has been successfully deployed by invoking a test SSH connection.
    • finishSecret – set the staging label AWSCURRENT to the new secret version and remove the old keys from the worker nodes. This will also set the staging label AWSPREVIOUS to the old secret, allowing your administrator to have the ‘last known password’ if something goes wrong.

    An overview of the rotation Lambda function is available in the AWS Secrets Manager user guide. You have full control over the rotation function so that you can customize it to your needs. Note that no key is installed on the master node. Instead, the function will retrieve the private key from Secrets Manager only when it needs to securely communicate with the worker nodes. That private key is not saved on the master node’s filesystem but rather in volatile memory (per best practice, the private key variable is overwritten after successful authentication and deleted before the script exits); details about keeping secret data in volatile memory will follow later in this post.

  2. When the master node needs to communicate with any worker node, it will use an AWS SDK (Python Boto3) to read the SSH private key from Secrets Manager (2.a) and use the private key to establish an SSH tunnel with the worker nodes (2.b); a minimal sketch of this retrieval follows this list. The master node is authorized to read the private key from Secrets Manager because an AWS Identity and Access Management (IAM) role with a policy that allows it to access the secret is attached to the master node. The corresponding public key was deployed to each of the worker nodes during the rotation process in step one above.
  3. The secrets in Secrets Manager are encrypted with AWS Key Management Service (KMS), and every version of the secret is encrypted with a unique data encryption key. The SSH key pair in the cluster will periodically rotate based on a configurable rotation interval, which you’ll configure from the Secrets Manager console later in this post. Each rotation repeats the process described in steps 1-2, resulting in a new version of the secret. Each new version will be encrypted using a new KMS data key, which provides an extra layer of security.
  4. The AWS Systems Manager Run Command will use the Amazon Elastic Compute Cloud (EC2) tag RotateSSHKeys with a value of True to identify the cluster’s worker node instances. Note that if you rely on tags as a security control, you must have clear governance and control over which users are able to change the tags and tag values on your EC2 instances.
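
For step 2 above, here’s a minimal sketch of how the master node could fetch the private key and open an SSH connection, keeping the key in memory only. It uses Boto3 plus the third-party paramiko SSH library; the worker IP, username, and the PrivateKey JSON field name are illustrative assumptions, not the exact format used by the sample code:

import io
import json

import boto3
import paramiko

# Read the current version of the secret (AWSCURRENT by default).
secret = boto3.client('secretsmanager').get_secret_value(SecretId='/dev/ssh')
private_key_pem = json.loads(secret['SecretString'])['PrivateKey']  # assumed field

# Load the key from memory only; it is never written to disk.
pkey = paramiko.RSAKey.from_private_key(io.StringIO(private_key_pem))
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('10.0.1.10', username='ec2-user', pkey=pkey)  # worker node IP

_, stdout, _ = client.exec_command('hostname')
print(stdout.read().decode())
client.close()
del private_key_pem  # per the post's best practice, drop key material promptly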

Solution cost

Today, this solution will cost $0.48 an hour for the four t2.micro EC2 instances that comprise the sample cluster. Secrets Manager has a 30-day trial period, after which one secret will cost $0.40 per month and $0.05 per 10,000 API calls. There is no additional charge for AWS Systems Manager.

Deploying the sample solution

In this section, you’ll deploy a test stack that demonstrates the entire solution. After deployment, you’ll log in to the master node and securely copy a file to one of the worker nodes. Finally, you’ll use Secrets Manager to rotate and deploy a new SSH keypair. The CloudFormation templates and secret rotation code are available in the AWS GitHub repository.

Set up the sample deployment by selecting the AWS CloudFormation Launch Stack button below; by default, the stack will be deployed in the us-east-1 (N. Virginia) Region.
 
Select this image to open a link that starts building the CloudFormation stack

The template creates an Amazon Virtual Private Cloud (VPC), private and public subnets, EC2 instances (master node and mock cluster), and the IAM role and policies used for the EC2 instances.

  1. Select your EC2 SSH key pair and input your IP range as stack parameters. In the YourIPRange field, enter the CIDR of your machine or network only, as this ensures only hosts from your network can access the master server. You may leave all other parameters as default. This CloudFormation template launches four t2.micro instances in a new VPC. One instance will be tagged as MasterServer and the rest will be tagged WorkerServer1-3.

    Note: The SSH keypair referenced here will be used to connect from your local computer to the master node. It is distinct from the SSH keypair used by the master node to connect to the worker nodes.

     

    Figure 2: Enter the CIDR of your machine or network

    Important: For simplicity, the master node you’ll create in this walkthrough will be in a public subnet, making it accessible from the CIDR you provided in step 1. However, this is not the most secure approach possible. Follow the guidance in the Amazon EC2 VPC documentation to securely configure your cluster in a private subnet, following the “defense in depth” principle.

  2. Monitor the status of the stack. When the status is CREATE_COMPLETE, the deployment is ready. Select the Outputs tab to find information about the newly created resources, and write down the master node’s public DNS and a worker node IP address. You’ll need both later in this post.
  3. Select the Launch Stack button to launch the AWS CloudFormation template that will deploy the Lambda function used by Secrets Manager. Accept the default values for the parameters. This template is designed for reusability; it can be applied to any SSH rotation use case.
     
    Select this image to open a link that starts building the CloudFormation stack

Next, create and configure a new secret from the Secrets Manager console to store the cluster communication SSH keypair.

Configuring a secret in AWS Secrets Manager

The CloudFormation template did not deploy a secret, so follow these steps to create a secret from the console and configure its rotation. To create a new secret:

  1. Open the AWS Secrets Manager console and select Store New Secret.
  2. Select Other type of secrets, then select the Plaintext tab.
  3. As shown in Figure 3, enter {} to create an empty JSON value with no properties. This value will be initially populated with a keypair by the rotation Lambda function.
     
    Figure 3: Create an empty JSON value with no properties

  4. Keep the default encryption key and select Next. We’re keeping the default encryption key for the sake of simplicity in this example, but security best practices suggest using a Customer Master Key (CMK) that you’ve created.
  5. In Step 2: Name and description, name the secret /dev/ssh. The path of a secret can be used in the secret’s IAM policy to restrict users and roles to a secret or hierarchy of secrets. For example, the IAM policy could include /dev/* or /prod/* to control access to secrets in development or production, respectively (a sketch of such a policy follows these steps).
  6. Add a description, then select Next.
     
    Figure 4: Add a description

  7. In Step 3: Configure rotation, choose Enable automatic rotation, and select a rotation interval of your choice from the rotation interval drop-down list.
  8. Select the Choose an AWS Lambda function drop-down and choose RotateSSH. This is the Lambda function that was deployed by the CloudFormation template.
  9. Select Next, then review your configuration and select Store. When the new secret’s configuration is stored, the rotation Lambda function is immediately invoked, populating the value of the secret.
     
    Figure 5: Configure the rotation
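
    As mentioned in step 5 above, here is a sketch of how a path-based policy might be attached to the master node’s role with Boto3. The role and policy names are hypothetical; note that secret ARNs end with a random suffix, so the wildcard also covers names like /dev/ssh-AbCdEf:

    import json
    import boto3

    iam = boto3.client("iam")
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "secretsmanager:GetSecretValue",
            "Resource": "arn:aws:secretsmanager:*:111122223333:secret:/dev/*",
        }],
    }
    iam.put_role_policy(
        RoleName="MasterNodeRole",      # hypothetical role name
        PolicyName="ReadDevSecrets",    # hypothetical policy name
        PolicyDocument=json.dumps(policy),
    )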

Testing the sample solution

With the secret configuration completed and the instances up and running, you’re now going to securely copy a file from the master node to one of the worker nodes, using the SSH key stored in Secrets Manager to test the solution.

  1. Log in to the master node via SSH, using the EC2 key that you specified in the CloudFormation template.
  2. Once connected, securely copy a file from the master node to the worker node using SCP (secure copy protocol) by entering the command below. Replace <private-ip-of-worker> with the worker node IP you wrote down in step 2:
    
    python copy_file.py ec2-user <private-ip-of-worker>
            

Figure 6: The ssh login to the master node, and the copy_file.py command to the worker node

During execution, the Python script uses the Secrets Manager get_secret_value API to retrieve the secret, which includes the private key. It then uses this key to establish a secure SSH connection with the worker node, without saving the private key to the master node’s storage.

You can review copy_file.py on the master node or on GitHub. In the get_private_key() function, the script reads the secret value, which includes the private key:


    get_secret_value_response = client.get_secret_value(
        SecretId=secret_name)

In the copy_file function, the script creates a secure SSH connection to copy the file, using the private key held in memory via Paramiko, a Python implementation of SSHv2.


    private_key_str = io.StringIO()
    # Write the private key to an in-memory file, then rewind it for reading
    private_key_str.write(private_key)
    private_key_str.seek(0)

    # Create the key object from the in-memory file
    key = paramiko.RSAKey.from_private_key(private_key_str)

    # Open a channel and authenticate
    trans = paramiko.Transport((ip, 22))
    trans.start_client()
    trans.auth_publickey(user, key)
    del key
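
A plausible continuation of the function (not verbatim from copy_file.py) would open an SFTP session over the authenticated transport to perform the copy; the paths below are placeholders:

    # Copy the file over the authenticated transport, then clean up
    local_path = "/tmp/example.txt"             # placeholder path
    remote_path = "/home/ec2-user/example.txt"  # placeholder path
    sftp = paramiko.SFTPClient.from_transport(trans)
    sftp.put(local_path, remote_path)
    sftp.close()
    trans.close()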

To demonstrate the rotation of the SSH keypair, you’ll now manually invoke the rotation function:

  1. Return to the Secrets Manager console, select your /dev/ssh secret, and choose Retrieve Secret Value to see the key pair.
  2. Select Rotate secret immediately. In the pop-up window, confirm your choice by selecting Rotate.
     
    Figure 7: Set the “Secret value” and “Rotation configuration”

  3. Choose Rotate again to complete the rotation.
     
    Figure 8: Select “Rotate”

  4. Select the Close button to refresh the view, and then choose Retrieve Secret Value again.
  5. Once the rotation has completed, you can inspect the new keypair via the Secrets Manager console. Go back to the terminal and run the same Python script to copy a file using SCP. Replace <private-ip-of-worker> with your worker node IP:
    
    python copy_file.py ec2-user <private-ip-of-worker>
            

The file has now been transferred successfully using a new key pair, with no updates required.

Auditing and monitoring

You can monitor and audit all APIs used to create and rotate your keys in Secrets Manager via AWS CloudTrail. To view CloudTrail events, follow these steps:

  1. Open the CloudTrail console and select Event history.
  2. From the Filter dropdown field, select Event source, enter secret in the filter field, then select secretsmanager.amazonaws.com from the dropdown menu.
  3. From here, you can review Secrets Manager’s events, such as GetSecretValue, PutSecretValue, UpdateSecretVersionStage (which modifies the staging labels attached to a version of a secret), and RotationSucceeded, in the CloudTrail event history. These event logs help to audit secrets configuration, rotation, and access.
     
    Figure 9: The “Event history” window
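
    If you prefer to query the same events programmatically, a Boto3 sketch like the following filters the event history by event source; the result limit is illustrative:

    import boto3

    cloudtrail = boto3.client("cloudtrail")
    response = cloudtrail.lookup_events(
        LookupAttributes=[{
            "AttributeKey": "EventSource",
            "AttributeValue": "secretsmanager.amazonaws.com",
        }],
        MaxResults=50,
    )
    for event in response["Events"]:
        print(event["EventTime"], event["EventName"])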

Additionally, Secrets Manager can work with CloudWatch Events to trigger alerts when administrator-specified operations occur in an organization (for example, to notify you of a secret deletion attempt).
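
For example, a rule that matches DeleteSecret calls and notifies an SNS topic could be created along these lines; the rule name, account ID, and topic ARN are placeholders:

    import json
    import boto3

    events = boto3.client("events")
    events.put_rule(
        Name="AlertOnSecretDeletion",   # hypothetical rule name
        EventPattern=json.dumps({
            "source": ["aws.secretsmanager"],
            "detail-type": ["AWS API Call via CloudTrail"],
            "detail": {
                "eventSource": ["secretsmanager.amazonaws.com"],
                "eventName": ["DeleteSecret"],
            },
        }),
    )
    events.put_targets(
        Rule="AlertOnSecretDeletion",
        Targets=[{"Id": "1", "Arn": "arn:aws:sns:us-east-1:111122223333:secret-alerts"}],
    )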

Cleaning up the CloudFormation Stack

To delete the entire CloudFormation stack:

  1. Select the stack named RotateSSH from the CloudFormation console.
  2. Select Actions, and then Delete Stack. This will delete all AWS resources created by the stack.
  3. Repeat the steps above to delete the stack named MasterWorkers.
  4. From the AWS Secrets Manager console, delete the secret /dev/ssh. Read more about Deleting and Restoring a Secret in the AWS Secrets Manager User Guide.

Conclusion

In this post, we demonstrated how you can use AWS Secrets Manager to store, rotate, and deliver SSH keypairs in order to secure communication within a compute cluster. Keys are securely encrypted and stored in AWS Secrets Manager, which also rotates the keys and installs the public keys on all nodes for you. By using this method, you won’t have to manually deploy SSH keys on the various EC2 instances or manually rotate them. APIs associated with secrets management and rotation are logged in CloudTrail for auditing and monitoring. This key rotation solution is serverless: it does not require any servers to maintain and can scale rapidly.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread on the AWS Secrets Manager forum.

Want more AWS Security news? Follow us on Twitter.

Author

Assaf Namer

Assaf is a Senior Solutions Architect. He likes coding and hackathons, and enjoys helping customers build reliable and secure cloud solutions. Outside of work, Assaf enjoys spinning and tennis.

Author

Maitreya Ranganath

Maitreya is a Solutions Architect with the Enterprise team. He has a focus on Security and Compliance and enjoys helping customers architect secure, scalable, and cost-effective solutions on AWS.

from AWS Security Blog

AWS and the European Banking Authority Guidelines on Outsourcing

Financial institutions across the globe use AWS to transform the way they do business. It’s exciting to watch our customers in the financial services industry innovate on AWS in unique ways, across all geos and use cases. Regulations continue to evolve in this space, and we’re working hard to help customers proactively respond to new rules and guidelines. In many cases, the AWS Cloud makes it easier than ever before for customers to comply with different regulations and frameworks around the world.

The European Banking Authority (EBA), an EU financial supervisory authority, recently provided EU financial institutions (which includes credit institutions, certain investment firms, and payment institutions) with new outsourcing guidelines (PDF), which also apply to the use of cloud services. We’re ready and able to support our customers’ compliance with their obligations under the EBA Guidelines and to help meet and exceed their regulators’ expectations. We offer our customers a wide range of services that can simplify and directly assist in complying with the new guidelines, which take effect on September 30, 2019.

What do the EBA Guidelines mean for AWS customers?

The EBA Guidelines establish technology-neutral outsourcing requirements for EU financial institutions, and there is a particular focus on the outsourcing of “critical or important functions.” For AWS and our customers, the key takeaway is that the EBA Guidelines allow for EU financial institutions to use cloud services for material, regulated workloads. When considering or using third-party services, many EU financial institutions already follow due diligence, risk management, and regulatory notification processes that are similar to those processes laid out in the EBA Guidelines. To meet and exceed the EBA Guidelines’ requirements on security, resiliency, and assurance, EU financial institutions can use a variety of AWS security and compliance services.

Risk-based approach

The EBA Guidelines incorporate a risk-based approach that expects regulated entities to identify, assess, and mitigate the risks associated with any outsourcing arrangement. The risk-based approach outlined in the EBA Guidelines is consistent with the long-standing AWS shared responsibility model. This approach applies throughout the EBA Guidelines, including the areas of risk assessment, contractual and audit requirements, data location and transfer, and security implementation.

  • Risk assessment: The EBA Guidelines emphasize the need for EU financial institutions to assess the potential impact of outsourcing arrangements on their operational risk. The AWS shared responsibility model helps customers formulate their risk assessment approach because it illustrates how their security and management responsibilities change depending on the AWS services they use. For example, AWS operates some controls on behalf of customers, such as data center security, while customers operate other controls, such as event logging. In practice, AWS services help customers assess and improve their risk profile relative to traditional, on-premises environments.
  • Contractual and audit requirements: The EBA Guidelines lay out requirements for the written agreement between an EU financial institution and its service provider, including access and audit rights. For EU financial institutions running regulated workloads on AWS services, we offer the EBA Financial Services Addendum to address the EBA Guidelines’ contractual requirements. We also provide these institutions the ability to comply with the audit requirements in the EBA Guidelines through the AWS Security & Audit Series, including participation in an Audit Symposium, to facilitate customer audits. To align with regulatory requirements and expectations, our EBA addendum and audit program incorporate feedback that we’ve received from a variety of financial supervisory authorities across EU member states. EU financial services customers interested in learning more about the addendum or about the audit engagements offered by AWS can reach out to their AWS account teams.
  • Data location and transfer: The EBA Guidelines do not put restrictions on where an EU financial institution can store and process its data, but rather state that EU financial institutions should “adopt a risk-based approach to data storage and data processing location(s) (i.e. country or region) and information security considerations.” Customers can choose which AWS Regions they store their content in, and we will not move or replicate customer content outside of the customer’s chosen Regions unless instructed to do so. Customers can replicate and back up their customer content in more than one AWS Region to meet a variety of objectives, such as availability goals and geographic requirements.
  • Security implementation: The EBA Guidelines require EU financial institutions to consider, implement, and monitor various security measures. Using AWS services, customers can meet this requirement in a scalable and cost-effective way while improving their security posture. Customers can use AWS Config or AWS Security Hub to simplify auditing, security analysis, change management, and operational troubleshooting. As part of their cybersecurity measures, customers can activate Amazon GuardDuty, which provides intelligent threat detection and continuous monitoring, to generate detailed and actionable security alerts. Amazon Inspector automatically assesses a customer’s AWS resources for vulnerabilities or deviations from best practices and then produces a detailed list of security findings prioritized by level of severity. Customers can also enhance their security by using AWS Key Management Service (creation and control of encryption keys), AWS Shield (DDoS protection), and AWS WAF (filtering of malicious web traffic). These are just a few of the 500+ services and features we offer that enable strong availability, security, and compliance for our customers.

As reflected in the EBA Guidelines, it’s important to take a balanced approach when evaluating responsibilities in a cloud implementation. We are responsible for the security of the AWS Global Infrastructure. In the EU, we currently operate AWS Regions in Ireland, Frankfurt, London, Paris, and Stockholm, with our new Milan Region opening soon. For all of our data centers, we assess and manage environmental risks, employ extensive physical and personnel security controls, and guard against outages through our resiliency and testing procedures. In addition, independent, third-party auditors test more than 2,600 standards and requirements in the AWS environment throughout the year.

Conclusion

We encourage customers to learn about how the EBA Guidelines apply to their organization. Our teams of security, compliance, and legal experts continue to work with our EU financial services customers, both large and small, to support their journey to the AWS Cloud. AWS is closely following how regulatory authorities apply the EBA Guidelines locally and will provide further updates as needed. If you have any questions about compliance with the EBA Guidelines and their application to your use of AWS, or if you require the EBA Financial Services Addendum, please reach out to your account representative or request to be contacted.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Chad Woolf

Chad joined Amazon in 2010 and built the AWS compliance functions from the ground up, including audit and certifications, privacy, contract compliance, control automation engineering and security process monitoring. Chad’s work also includes enabling public sector and regulated industry adoption of the AWS Cloud, compliance with complex privacy regulations such as GDPR and operating a trade and product compliance team in conjunction with global region expansion. Prior to joining AWS, Chad spent 12 years with Ernst & Young as a Senior Manager working directly with Fortune 100 companies consulting on IT process, security, risk, and vendor management advisory work, as well as designing and deploying global security and assurance software solutions. Chad holds a Master’s degree in Information Systems Management and a Bachelor’s degree in Accounting from Brigham Young University, Utah. Follow Chad on Twitter.

from AWS Security Blog

How to add DNS filtering to your NAT instance with Squid

Note from September 4, 2019: We’ve updated this blog post, initially published on January 26, 2016. Major changes include: support of Amazon Linux 2, no longer having to compile Squid 3.5, and a high availability version of the solution across two availability zones.

Amazon Virtual Private Cloud (Amazon VPC) enables you to launch AWS resources on a virtual private network that you’ve defined. On an Amazon VPC, many people use network address translation (NAT) instances and NAT gateways to enable instances in a private subnet to initiate outbound traffic to the Internet, while preventing the instances from receiving inbound traffic initiated by someone on the Internet.

For security and compliance purposes, you might have to filter the requests initiated by these instances (also known as “egress filtering”). Using iptables rules, you could restrict outbound traffic with your NAT instance based on a predefined destination port or IP address. However, you might need to enforce more complex security policies, such as allowing requests to AWS endpoints only, or blocking fraudulent websites, which you can’t easily achieve by using iptables rules.

In this post, I discuss and give an example of how to use Squid, a leading open-source proxy, to implement a “transparent proxy” that can restrict both HTTP and HTTPS outbound traffic to a given set of Internet domains, while being fully transparent for instances in the private subnet.

The solution architecture

In this section, I present the architecture of the high availability NAT solution and explain how to configure Squid to filter traffic transparently. Later in this post, I’ll provide instructions about how to implement and test the solution.

The following diagram illustrates how the components in this process interact with each other. Squid Instance 1 intercepts HTTP/S requests sent by instances in Private Subnet 1, including the Testing Instance. Squid Instance 1 then initiates a connection with the destination host on behalf of the Testing Instance, which goes through the Internet gateway. This solution spans two Availability Zones, with Squid Instance 2 intercepting requests sent from the other Availability Zone. Note that you may adapt the solution to span additional Availability Zones.
 

Figure 1: The solution spans two Availability Zones

Intercepting and filtering traffic

In each Availability Zone, the route table associated to the private subnet sends the outbound traffic to the Squid instance (see Route Tables for a NAT Device). Squid intercepts the requested domain, then applies the following filtering policy:

  • For HTTP requests, Squid retrieves the host header field included in all HTTP/1.1 request messages. This specifies the Internet host being requested.
  • For HTTPS requests, the HTTP traffic is encapsulated in a TLS connection between the instance in the private subnet and the remote host. Squid cannot retrieve the host header field because the header is encrypted. A feature called SslBump would allow Squid to decrypt the traffic, but this would not be transparent for the client because the certificate would be considered invalid in most cases. The feature I use instead, called SslPeekAndSplice, retrieves the Server Name Indication (SNI) from the TLS initiation. The SNI contains the requested Internet host. As a result, Squid can make filtering decisions without decrypting the HTTPS traffic.

Note 1: Some older client-side software stacks do not support SNI. Here are the minimum versions of some important stacks and programming languages that support SNI: Python 2.7.9 and 3.2, Java 7 JSSE, wget 1.14, OpenSSL 0.9.8j, and cURL 7.18.1.

Note 2: TLS 1.3 introduced an optional extension that allows the client to encrypt the SNI, which may prevent Squid from intercepting the requested domain.

The SslPeekAndSplice feature was introduced in Squid 3.5 and is implemented in the same Squid module as SslBump. To enable this module, Squid requires that you provide a certificate, though it will not be used to decode HTTPS traffic. The solution creates a certificate using OpenSSL.


mkdir /etc/squid/ssl
cd /etc/squid/ssl
# Generate a 4096-bit RSA key and a self-signed certificate valid for 10 years
openssl genrsa -out squid.key 4096
openssl req -new -key squid.key -out squid.csr -subj "/C=XX/ST=XX/L=squid/O=squid/CN=squid"
openssl x509 -req -days 3650 -in squid.csr -signkey squid.key -out squid.crt
# Bundle the key and certificate into the PEM file referenced in squid.conf
cat squid.key squid.crt >> squid.pem

The following code shows the Squid configuration file. For HTTPS traffic, note the ssl_bump directives instructing Squid to “peek” (retrieve the SNI) and then “splice” (become a TCP tunnel without decoding) or “terminate” the connection depending on the requested host.


visible_hostname squid
cache deny all

# Log format and rotation
logformat squid %ts.%03tu %6tr %>a %Ss/%03>Hs %<st %rm %ru %ssl::>sni %Sh/%<a %mt
logfile_rotate 10
debug_options rotate=10

# Handling HTTP requests
http_port 3128
http_port 3129 intercept
acl allowed_http_sites dstdomain "/etc/squid/whitelist.txt"
http_access allow allowed_http_sites

# Handling HTTPS requests
https_port 3130 cert=/etc/squid/ssl/squid.pem ssl-bump intercept
acl SSL_port port 443
http_access allow SSL_port
acl allowed_https_sites ssl::server_name "/etc/squid/whitelist.txt"
acl step1 at_step SslBump1
acl step2 at_step SslBump2
acl step3 at_step SslBump3
ssl_bump peek step1 all
ssl_bump peek step2 allowed_https_sites
ssl_bump splice step3 allowed_https_sites
ssl_bump terminate step2 all
http_access deny all       

The text file located at /etc/squid/whitelist.txt contains the list of whitelisted domains, with one domain per line. In this blog post, I’ll show you how to configure Squid to allow requests to *.amazonaws.com, which corresponds to AWS endpoints. Note that you can restrict access to a specific set of AWS services that you’ve defined (see Regions and Endpoints for a detailed list of endpoints), or you can set your own list of domains.
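
For example, to allow all AWS endpoints, whitelist.txt can contain a single line. Squid’s dstdomain and ssl::server_name ACL types treat a leading dot as “this domain and all of its subdomains”:

.amazonaws.com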

Note: An alternate approach is to use VPC endpoints to privately connect your VPC to supported AWS services without requiring access over the Internet (see VPC Endpoints). Some supported AWS services allow you to create a policy that controls the use of the endpoint to access AWS resources (see VPC Endpoint Policies, and VPC Endpoints for a list of supported services).

You may have noticed that Squid listens on port 3129 for HTTP traffic and 3130 for HTTPS. Because Squid cannot directly listen to 80 and 443, you have to redirect the incoming requests from instances in the private subnets to the Squid ports using iptables. You do not have to enable IP forwarding or add any FORWARD rule, as you would do with a standard NAT instance.


sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3129
sudo iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 3130       

The solution stores the files squid.conf and whitelist.txt in an Amazon Simple Storage Service (Amazon S3) bucket and runs the following script every minute on the Squid instances to download and update the Squid configuration from S3. This makes it easy to maintain the Squid configuration from a central location. Note that the script first validates the files with squid -k parse and then reloads the configuration with squid -k reconfigure if no error was found.


# Back up the current configuration before syncing from S3
cp /etc/squid/* /etc/squid/old/
aws s3 sync s3://<s3-bucket> /etc/squid
# Validate the new configuration and reload Squid; on error, restore the backup
squid -k parse && squid -k reconfigure || (cp /etc/squid/old/* /etc/squid/; exit 1)

The solution then uses the CloudWatch Agent on the Squid instances to collect and store Squid logs in Amazon CloudWatch Logs. The log group /filtering-nat-instance/cache.log contains the error and debug messages that Squid generates and /filtering-nat-instance/access.log contains the access logs.

An access log record is a space-delimited string that has the following format:

<time> <response_time> <client_ip> <status_code> <size> <method> <url> <sni> <remote_host> <mime>

The following list describes the fields of an access log record:

  • time: Request time, in seconds since epoch
  • response_time: Response time, in milliseconds
  • client_ip: Client source IP address
  • status_code: Squid request status and HTTP response code sent to the client. For example, an HTTP request to a disallowed domain logs TCP_DENIED/403, and an HTTPS request to a whitelisted domain logs TCP_TUNNEL/200
  • size: Total size of the response sent to the client
  • method: Request method, like GET or POST
  • url: Request URL received from the client. Logged for HTTP requests only
  • sni: Domain name intercepted in the SNI. Logged for HTTPS requests only
  • remote_host: Squid hierarchy status and remote host IP address
  • mime: MIME content type. Logged for HTTP requests only

The following are some examples of access log records:


1563718817.184 14 10.0.0.28 TCP_DENIED/403 3822 GET http://example.com/ - HIER_NONE/- text/html
1563718821.573 7 10.0.0.28 TAG_NONE/200 0 CONNECT 172.217.7.227:443 example.com HIER_NONE/- -
1563718872.923 32 10.0.0.28 TCP_TUNNEL/200 22927 CONNECT 52.216.187.19:443 calculator.s3.amazonaws.com ORIGINAL_DST/52.216.187.19 -

Designing a high availability solution

The Squid instances introduce a single point of failure for the private subnets. If a Squid instance fails, the instances in its associated private subnet cannot send outbound traffic anymore. The following diagram illustrates the architecture that I propose to address this situation within an Availability Zone.
 

Figure 2: The architecture to address if a Squid instance fails within an Availability Zone

Each Squid instance is launched in an Amazon EC2 Auto Scaling group that has a minimum size and a maximum size of one instance. A shell script is run at startup to configure the instances. That includes installing and configuring Squid (see Running Commands on Your Linux Instance at Launch).

The solution uses the CloudWatch Agent and its procstat plugin to collect the CPU usage of the Squid process every 10 seconds. For each Squid instance, the solution creates a CloudWatch alarm that watches this custom metric and goes to an ALARM state when a data point is missing. This can happen, for example, when Squid crashes or the Squid instance fails. Note that for my use case, I consider watching the Squid process a sufficient approach to determining the health status of a Squid instance, although it cannot detect cases where the Squid process is alive but unable to forward traffic. As a workaround, you can use an end-to-end monitoring approach, such as witness instances in the private subnets that send test requests at regular intervals and collect a custom metric.
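
As an illustration, here is a minimal sketch of the CloudWatch agent configuration that this approach relies on; the exact file used by the solution may differ:

{
  "metrics": {
    "metrics_collected": {
      "procstat": [
        {
          "exe": "squid",
          "measurement": ["cpu_usage"],
          "metrics_collection_interval": 10
        }
      ]
    }
  }
}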

When an alarm goes to ALARM state, CloudWatch sends a notification to an Amazon Simple Notification Service (SNS) topic which then triggers an AWS Lambda function. The Lambda function marks the Squid instance as unhealthy in its Auto Scaling group, retrieves the list of healthy Squid instances based on the state of other CloudWatch alarms, and updates the route tables that currently route traffic to the unhealthy Squid instance to instead route traffic to the first available healthy Squid instance. While the Auto Scaling group automatically replaces the unhealthy Squid instance, private instances can send outbound traffic through the Squid instance in the other Availability Zone.
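
To make the failover concrete, here is a sketch of the two API calls at the heart of such a Lambda function, using Boto3. The function and variable names are illustrative, not taken from the solution’s actual code:

import boto3

autoscaling = boto3.client("autoscaling")
ec2 = boto3.client("ec2")

def fail_over(unhealthy_instance_id, route_table_id, healthy_instance_id):
    # Ask Auto Scaling to replace the failed Squid instance
    autoscaling.set_instance_health(
        InstanceId=unhealthy_instance_id,
        HealthStatus="Unhealthy",
    )
    # Repoint the private subnet's default route at a healthy Squid instance
    ec2.replace_route(
        RouteTableId=route_table_id,
        DestinationCidrBlock="0.0.0.0/0",
        InstanceId=healthy_instance_id,
    )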

When the CloudWatch agent starts collecting the custom metric again on the replacement Squid instance, the alarm reverts to OK state. Similarly, CloudWatch sends a notification to the SNS topic, which then triggers the Lambda function. The Lambda function completes the lifecycle action (see Amazon EC2 Auto Scaling Lifecycle Hooks) to indicate that the replacement instance is ready to serve traffic, and updates the route table associated to the private subnet in the same Availability Zone to route traffic to the replacement instance.
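
The recovery path could similarly complete the lifecycle action with a call along these lines; again, this is a sketch with hypothetical names rather than the solution’s exact code:

import boto3

autoscaling = boto3.client("autoscaling")

def on_recovery(instance_id):
    # Signal Auto Scaling that the replacement instance is ready to serve traffic
    autoscaling.complete_lifecycle_action(
        LifecycleHookName="squid-launch-hook",    # hypothetical hook name
        AutoScalingGroupName="squid-asg-az1",     # hypothetical group name
        LifecycleActionResult="CONTINUE",
        InstanceId=instance_id,
    )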

Implementing and testing the solution

Now that you understand the architecture behind this solution, you can follow the instructions in this section to implement and test the solution in your AWS account.

Implementing the solution

First, you’ll use AWS CloudFormation to provision the required resources. Select the Launch Stack button below to open the CloudFormation console and create a stack from the template. Then, follow the on-screen instructions.

Select this image to open a link that starts building the CloudFormation stack

CloudFormation will create the following resources:

  • An Amazon Virtual Private Cloud (Amazon VPC) with an internet gateway attached.
  • Two public subnets and two private subnets on the Amazon VPC.
  • Three route tables. The first route table is associated to the public subnets to make them publicly accessible. The other two route tables are associated to the private subnets.
  • An S3 bucket to store the Squid configuration files, and two Lambda-based custom resources to add the files squid.conf and whitelist.txt to this bucket.
  • An IAM role to grant the Squid instances permissions to read from the S3 bucket and use the CloudWatch agent.
  • A security group to allow HTTP and HTTPS traffic from instances in the private subnets.
  • A launch configuration to specify the template of Squid instances. That includes commands to run at startup for automating the initial configuration.
  • Two Auto Scaling groups that use this launch configuration to launch the Squid instances.
  • A Lambda function to redirect the outbound traffic and recover a Squid instance when it fails.
  • Two CloudWatch alarms to watch the custom metric sent by Squid instances and trigger the Lambda function when the health status of Squid instances changes.
  • An EC2 instance in the first private subnet to test the solution, and an IAM role to grant this instance permissions to use the SSM agent. Session Manager, which I introduce in the next paragraph, uses this SSM agent (see Working with SSM Agent).

Testing the solution

After the stack creation has completed (it can take up to 10 minutes), connect to the Testing Instance using Session Manager, a capability of AWS Systems Manager that lets you manage instances through an interactive shell without the need to open an SSH port:

  1. Open the AWS Systems Manager console.
  2. In the navigation pane, choose Session Manager.
  3. Choose Start Session.
  4. For Target instances, choose the option button to the left of Testing Instance.
  5. Choose Start Session.

Note: Session Manager makes calls to several AWS endpoints (see Working with SSM Agent). If you prefer to restrict access to a defined set of AWS services, make sure to whitelist the associated domains.

After the connection is made, you can test the solution with the following commands. Only the last three requests should return a valid response, because Squid allows traffic to *.amazonaws.com only.


curl http://www.amazon.com
curl https://www.amazon.com
curl http://calculator.s3.amazonaws.com/index.html
curl https://calculator.s3.amazonaws.com/index.html
aws ec2 describe-regions --region us-east-1         

To find the requests you just made in the access logs, here’s how to browse the Squid logs in Amazon CloudWatch Logs:

  1. Open the Amazon CloudWatch console.
  2. In the navigation pane, choose Logs.
  3. For Log Groups, choose the log group /filtering-nat-instance/access.log.
  4. Choose Search Log Group to view and search log records.

To test how the solution behaves when a Squid instance fails, you can terminate one of the Squid instances manually in the Amazon EC2 console. Then, watch the CloudWatch alarm change its state in the Amazon CloudWatch console, or watch the solution change the default route of the impacted route table in the Amazon VPC console.

You can now delete the CloudFormation stack to clean up the resources that were just created.

Discussion: Transparent or forward proxy?

The solution that I describe in this blog is fully transparent for instances in the private subnets, which means that instances don’t need to be aware of the proxy and can make requests as if they were behind a standard NAT instance. An alternate solution is to deploy a forward proxy in your Amazon VPC and configure instances in private subnets to use it (see the blog post How to set up an outbound VPC proxy with domain whitelisting and content filtering for an example). In this section, I discuss some of the differences between the two solutions.

Supportability

A major drawback with forward proxies is that the proxy must be explicitly configured on every instance within the private subnets. For example, you can configure the HTTP_PROXY and HTTPS_PROXY environment variables on Linux instances, but some applications or services, like yum, require their own proxy configuration, or don’t support proxy usage. Note also that some AWS services and features, like Amazon EMR or Amazon SageMaker notebook instances, don’t support using a forward proxy at the time of this post. However, with TLS 1.3, a forward proxy is the only option to restrict outbound traffic if the SNI is encrypted.

Scalability

Deploying a forward proxy on AWS usually consists of a load balancer distributing traffic to a set of proxy instances launched in an Auto Scaling group. Proxy instances can be launched or terminated dynamically depending on the demand (also known as “horizontal scaling”). In contrast, with a transparent proxy, each route table can route traffic to a single instance at a time, so changing the instance type is the only way to increase or decrease the capacity (also known as “vertical scaling”).

The solution I present in this post does not dynamically adapt the instance type of the Squid instances based on the demand. However, you might consider a mechanism in which the traffic from a private subnet is temporarily redirected through another Availability Zone while the Squid instance is being relaunched by Auto Scaling with a smaller or larger instance type.

Mutualization

Deploying a centralized proxy solution and using it across multiple VPCs is a way of reducing cost and operational complexity.

With a forward proxy, instances in private subnets send IP packets to the proxy load balancer. Therefore, sharing a forward proxy across multiple VPCs only requires connectivity between the “instance VPCs” and a proxy VPC that has VPC Peering or equivalent capabilities.

With a transparent proxy, instances in private subnets send IP packets to the remote host. VPC Peering does not support transitive routing (see Unsupported VPC Peering Configurations) and cannot be used to share a transparent proxy across multiple VPCs. However, you can now use an AWS Transit Gateway that acts as a network transit hub to share a transparent proxy across multiple VPCs. I give an example in the next section.

Sharing the solution across multiple VPCs using AWS Transit Gateway

In this section, I give an example of how to share a transparent proxy across multiple VPCs using AWS Transit Gateway. The architecture is illustrated in the following diagram. For the sake of simplicity, the diagram does not include Availability Zones.
 

Figure 3: The architecture for a transparent proxy across multiple VPCs using AWS Transit Gateway

Here’s how instances in the private subnet of “VPC App” can make requests via the shared transparent proxy in “VPC Shared”:

  1. When instances in VPC App make HTTP/S requests, the network packets they send have the public IP address of the remote host as the destination address. These packets are forwarded to the transit gateway, based on the route table associated to the private subnet.
  2. The transit gateway receives the packets and forwards them to VPC Shared, based on the default route of the transit gateway route table.
  3. Note that the transit gateway attachment resides in the transit gateway subnet. When the packets arrive in VPC Shared, they are forwarded to the Squid instance because the next destination has been determined based on the route table associated to the transit gateway subnet.
  4. The Squid instance makes requests on behalf of the source instance (“Instances” in the schema). Then, it sends the response to the source instance. The packets that it emits have the IP address of the source instance as the destination address and are forwarded to the transit gateway according to the route table associated to the public subnet.
  5. The transit gateway receives and forwards the response packets to VPC App.
  6. Finally, the response reaches the source instance.

In a high availability deployment, you could have one transit gateway subnet per Availability Zone that sends traffic to the Squid instance that resides in the same Availability Zone, or to the Squid instance in another Availability Zone if the instance in the same Availability Zone fails.

You could also use AWS Transit Gateway to implement a transparent proxy solution that scales horizontally. This allows you to add or remove proxy instances based on the demand, instead of changing the instance type. With this approach, you must deploy a fleet of proxy instances (launched by an Auto Scaling group, for example) and establish a VPN connection between each instance and the transit gateway. The proxy instances need to support ECMP (“Equal Cost Multipath routing”; see Transit Gateways) to equally spread the outbound traffic between instances. I don’t describe this alternative architecture further in this blog post.

Conclusion

In this post, I’ve shown how you can use Squid to implement a high availability solution that filters outgoing traffic to the Internet and helps meet your security and compliance needs, while being fully transparent for the back-end instances in your VPC. I’ve also discussed the key differences between transparent proxies and forward proxies. Finally, I gave an example of how to share a transparent proxy solution across multiple VPCs using AWS Transit Gateway.

If you have any questions or suggestions, please leave a comment below or on the Amazon VPC forum.

If you have feedback about this blog post, submit comments in the Comments section below.

Want more AWS Security news? Follow us on Twitter.

Nicolas Malaval

Nicolas is a Solution Architect for Amazon Web Services. He lives in Paris and helps our healthcare customers in France adopt cloud technology and innovate with AWS. Before that, he spent three years as a Consultant for AWS Professional Services, working with enterprise customers.

from AWS Security Blog

64 AWS services achieve HITRUST certification

We’re excited to announce that 64 AWS services are now certified for the Health Information Trust Alliance (HITRUST) Common Security Framework (CSF).

The full list of AWS services that were audited by a third party auditor and certified under HITRUST CSF is available on our Services in Scope by Compliance Program page. You can view and download our HITRUST CSF certification here:

The HITRUST certification allows AWS customers to tailor their security control baselines to a variety of factors including, but not limited to, regulatory requirements and organization type.

The HITRUST Alliance has established the CSF as a certifiable framework that can be leveraged by organizations to comply with ISO/IEC 27000 series and HIPAA related requirements. The HITRUST CSF is already widely adopted by leading organizations in a variety of industries in their approach to security and privacy. Please visit the HITRUST Alliance website for more information.

As always, we value your feedback and questions and commit to helping customers achieve and maintain the highest standard of security and compliance. Please feel free to reach out to the team through the AWS Compliance Contact Us page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

from AWS Security Blog