Introducing the AWS Security Incident Response Whitepaper

AWS recently released the AWS Security Incident Response whitepaper to help you understand the fundamentals of responding to security incidents within your cloud environment. The whitepaper reviews how to prepare your organization for detecting and responding to security incidents, explores the controls and capabilities at your disposal, provides topical examples, and outlines remediation methods that leverage automation to improve response speed.

All AWS users within an organization should have a basic understanding of security incident response processes, and security staff must deeply understand how to react to security issues. While education and preparation are key components to this, we encourage customers to practice these skills through simulations in order to iterate and improve their processes. The foundation of a successful incident response program in the cloud is to educate, prepare, simulate, and iterate:

  • Educate your security operations and incident response staff about cloud technologies and how your organization intends to use them.
  • Prepare your incident response team to detect and respond to incidents in the cloud by enabling detective capabilities and by ensuring appropriate access to the necessary tools and cloud services. Additionally, prepare the necessary runbooks, both manual and automated, to ensure reliable and consistent responses. Work with other teams to establish expected baseline operations, and use that knowledge to identify deviations from normal operations.
  • Simulate both expected and unexpected security events within your cloud environment to understand the effectiveness of your preparation.
  • Iterate on the outcome of your simulation to increase the scale of your response posture, reduce delays, and further reduce risk.

The whitepaper dives deep into each of these considerations, helping you prepare or improve your security response capabilities during your journey to the cloud. If you’d like additional information about cloud security at AWS, please contact us.



Joshua Du Lac

Josh is a Senior Solutions Architect with AWS, specializing in security. Based out of Texas, he has helped dozens of enterprise, global, and financial services customers accelerate their journey to the cloud while improving their security along the way. Outside of work, Josh enjoys searching for the best tacos in Texas and practicing his handstands.

from AWS Security Blog

AWS Security Profiles: Mark Ryland, Director, Office of the CISO


Mark Ryland at the AWS Summit Berlin keynote

In the weeks leading up to re:Inforce, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS and what’s your current role?

I’ve been at AWS for almost eight years. For the first six and a half years, I built the Solutions Architecture and Professional Services teams for AWS’s worldwide public sector sales organization—from five people when I joined, to many hundreds some years later. It was an amazing ride to build such a great team of cloud technology experts.

About a year and a half ago, I transitioned to the AWS Security team. On the Security team, I run a much smaller team called the Office of the CISO. We help manage interaction between our customers and the leadership team for AWS Security. In addition, we have a number of internal projects that we work on to improve interaction and information flow between the Security team and various AWS service teams, and between the AWS Security team and the Amazon.com security team.

Why is your team called “the Office of the CISO”?

A lot of people want to talk to Steve Schmidt, our Chief Information Security Officer (CISO) at AWS. If you want to talk to him, it’s very likely that you’re going to talk to me or to my team as a part of that process. There’s only one of him, and there are a few of us. We help Steve scale a bit, and help more customers have direct interaction with senior leadership in AWS Security.

We also provide guidance and leadership to the broader AWS security community, especially to the customer-facing side of AWS. For example, we’re leaders of the Security and Compliance Technical Field Community (TFC) for AWS. The Security TFC is made up of subject matter experts in solutions architecture, professional services, technical account management, and other technical disciplines. We help them to understand and communicate effectively with customers about important security and compliance topics, and to gather customer requirements and funnel them to the right places.

What’s your favorite part of your job?

I love communicating about technology — first diving deep to figure it out for myself, and then explaining it to others. And I love interacting with our customers, both to explain our platform and what we do, and, equally important, to get their feedback. We constantly get great input and great ideas from customers, and we try to leverage that feedback into continuous improvement of our products and services.

What does cloud security mean to you, personally? Why is it a topic you’re passionate about?

I remember being at a private conference on cybersecurity. It was government-oriented, and organized by a Washington DC-based think-tank. A number of senior government officials were talking about challenges in cybersecurity. In the middle of an intense discussion about the big challenges facing the industry, a former, very senior official in the U.S. Government intelligence community said (using a golfing colloquialism), “The great thing about the cloud is that it’s a Mulligan; it’s a do-over. When we make the cloud transition, we can finally do the right things when it comes to cybersecurity.”

There’s a lot of truth to that, just in terms of general IT modernization. The cloud simply makes security easier. Not “easy” — there are still challenges. But you’re much more equipped to do the right thing—to build automation, to build tooling, and to take full advantage of the base protections that are built into the platform. With a little bit of care, what you do is going to be better than what you did before. The responsibility that remains for you as the customer is still significant, but because everything is software-defined, you get far more visibility and control. Because everything is API-driven, you can automate just about everything.

Challenges remain; I want to reiterate that it’s never easy to do security right. But it’s so much easier when you don’t have to run the entire stack from the concrete floor up to the application, and when you can rely on the inherent visibility and control provided by a software-defined environment. In short, cloud migration represents the industry’s best opportunity for making big improvements in IT security. I love being in the center of that change for the better, and helping to make it real.

What initiatives are you currently working on that you’re particularly excited about?

Two things. First, we’re laser-focused on improving our AWS Identity and Access Management capabilities. They’re already very sophisticated and very powerful, but they are somewhat uneven across our services, and not as easy to use as they should be. I’m on the periphery of that work, but I’m actively involved in scoping out improvements. One recent example is a big advance in the capabilities of Service Control Policies (SCPs) within AWS Organizations. These now allow extremely fine-grained controls — as expressive as IAM policies — that can easily be applied globally across dozens or hundreds of AWS accounts. For example, you can express a global policy like “nobody but [some very special role] can attach an internet gateway to my VPCs, full stop.”
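For illustration only (this sketch isn't from the interview, and the role name is hypothetical), an SCP expressing that kind of guardrail could use the aws:PrincipalArn condition key to carve out the exception:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyIgwAttachExceptNetworkAdmins",
                "Effect": "Deny",
                "Action": "ec2:AttachInternetGateway",
                "Resource": "*",
                "Condition": {
                    "StringNotLike": {
                        "aws:PrincipalArn": "arn:aws:iam::*:role/NetworkAdminRole"
                    }
                }
            }
        ]
    }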

I’m also a networking geek, and another area I’ve been actively working on is improvements to our built-in networking security features. People have been asking for greater visibility and control over their VPCs. We have a lot of great features like security groups and network ACLs, but there’s a lot more we can and will do. For example, customers are looking for more visibility into what’s going on inside their VPCs beyond our existing VPC Flow Logs feature. We have an exciting announcement at our re:Inforce conference this week about some new capabilities in this area!

You’ll be speaking at re:Inforce about the security benefits of running EC2 instances on the AWS Nitro architecture. At a high level, what’s so innovative about Nitro, and how does it enable better security?

The EC2 Nitro architecture is a fundamental re-imagining of the best way to build a secure virtualization platform. I don’t think there’s anything else like it in the industry. We’ve taken a lot of the complicated software that’s needed for virtualization, which normally runs in a privileged copy of an operating system — the “domain 0,” or “dom0” to use Xen terminology, but present in all modern hypervisors — and we’ve completely eliminated it. All those features are now implemented by custom software and hardware in a set of powerful co-processor computers inside the same physical box as the main Intel processor system board. The Nitro computers present virtual devices to the mainboard as if they were actual hardware devices. You might say the main system board — despite its powerful Intel Xeon processor and big chunks of memory — is really the “co-processor” in these systems; I call it the “customer workload co-processor!” It’s the main Nitro controller and not the system mainboard that’s fundamentally in charge of the overall system, providing a root of trust and a secure layer between the mainboard and the outside world.

There are a bunch of great security benefits that flow from this redesign. For example, with the elimination of the dom0 trusted operating system running on the mainboard, we’ve completely eliminated interactive access to these hosts. There’s no SSH, no RDP, no interactive software mechanisms that allow direct human access. I could go on and on, but I’ll stop there — you’ll have to come to my talk on Wednesday! And of course, we’ll post the video online afterward.

You’re also involved with a session to encourage customers to set up “state-of-the-art encryption.” In your view, what are some of the key elements of a “state-of-the-art” approach to encryption?

I came up with the original idea for the session, but was able to hand it off to an even better-suited speaker, so now I’ll just be there to enjoy it. Colm MacCarthaigh will be presenting. Colm is a senior principal engineer in the EC2 networking team, but he’s also the genius behind a number of important innovations in security and networking across AWS. For example, he did some of the original design work on the “shuffle sharding” techniques we use broadly, across AWS, to improve availability and resiliency for multi-tenanted services. Later, he came up with the idea, and, in a few weeks of intense coding, wrote the first version of S2N, our open source TLS implementation that provides far better security than the implementations typically used in the industry. He was also a significant contributor to the TLS 1.3 specification. I encourage everyone to follow him on Twitter, where you’ll learn all kinds of interesting things about cryptography, networking, and the like.

Now, to finally answer your question: Colm will be talking about how AWS does more and more encryption for you automatically, and how multiple layers of encryption can help address different kinds of threats. For example, without actually breaking TLS encryption, researchers have shown that they can figure out the content of an encrypted voice-over-IP (VOIP) call simply by analyzing the timing and size of the packets. So, wrapping TLS sessions inside of other encryption layers is a really good idea. Colm will talk about the importance of layered encryption, plus a bunch of other great topics: how AWS makes it easy to use encryption; where we do it automatically even if you don’t ask for it; how we’re inventing new, more secure means for key distribution; and fun stuff like that. It will be a blast!

What changes do you hope we’ll see across the global security and compliance landscape over the next 5 years?

I think that with innovations like the Nitro architecture for EC2, and with our commitment to continually improving and strengthening other security features and enabling greater automation around things like identity management and anomaly detection, we will come to a point where people will realize that the cloud, in almost every case, is more secure than an on-premises environment. I don’t mean to say that you couldn’t go outside of the cloud and build something secure (as long as you are willing to spend a ton of money). But as a general matter, cloud will become the default option for secure processing of very sensitive data.

We’re not quite there yet, in terms of widespread perception and understanding. There are still quite a few people who haven’t dug very far below the surface of “what is cloud.” There is still a common, visceral reaction to the idea of “public cloud” as being risky. People object to ideas like multitenancy, where you’re sharing physical infrastructure with other customers, as if it’s somehow inherently risky. There are risks, but they are so well mitigated, and we have so much experience controlling those risks, that they’re far outweighed by the big security benefits. Very consistently, as customers become more educated and experienced with the cloud, they tell us that they feel more secure in their cloud infrastructure than they did in their on-premises world. Still, that’s not currently the first reaction. People still start by thinking of the cloud as risky, and it takes time to educate them and change that perspective. So there’s still some important work ahead of us.

What’s your favorite way to relax?

It’s funny, now that I’m getting old, I’m reverting to some of the pursuits and hobbies of my youth. When I was a teenager I was passionate about cycling. I raced bicycles extensively at the regional and national level on both road and track from ages 14 to 18. A few minutes of my claim to 15 minutes of Warholian fame was used up by being in a two-man breakaway with 17-year-old Greg LeMond in a road race in Arizona, although he beat me and everyone else resoundingly in the end! I’ve ridden road bikes and done a bit of mountain biking over the years, but I’m getting back into it now and enjoying it immensely. Of course, there’s far more technology to play with these days, and I can’t resist. I splurged on an expensive pair of pedals with power meters built in, and so now I get detailed data from every ride that I can analyze to prove mathematically that I’m not in very good shape.

One of my other hobbies back in my teenage years was playing guitar — mostly folk-rock acoustic, but also electric and bass guitar in garage bands. That’s another activity I’ve started again. Fortunately, my kids, who are now around college-age plus or minus, all love the music from the 60s and 70s that I dust off and play, and they have great voices, so we have a lot of fun jamming and singing harmonies together.

What’s one thing that a visitor to your hometown of Washington, DC should experience?

The Washington DC area is famous for lots of great tourist attractions. But if you enjoy Michelin Guide-level dining experiences, I’d recommend a restaurant right in my neighborhood. It’s called L’Auberge Chez François, and it’s quite famous. It features Alsatian food (from the eastern region of France, along the German border). It’s an amazing restaurant that’s been there for almost 50 years, and it continues to draw a clientele from across the region and around the world. It’s always packed, so get reservations well in advance!



Mark Ryland

Mark is the director of the Office of the CISO for AWS. He has more than 28 years of experience in the technology industry and has served in leadership roles in cybersecurity, software engineering, distributed systems, technology standardization and public policy. Prior to his current role, he served as the Director of Solution Architecture and Professional Services for the AWS World Public Sector team.

from AWS Security Blog

New! Set permission guardrails confidently by using IAM access advisor to analyze service-last-accessed information for accounts in your AWS organization

You can use AWS Organizations to centrally govern and manage multiple accounts as you scale your AWS workloads. With AWS Organizations, central security administrators can use service control policies (SCPs) to establish permission guardrails that all IAM users and roles in the organization’s accounts adhere to. When teams and projects are just getting started, administrators may allow access to a broader range of AWS services to inspire innovation and agility. However, as developers and applications settle into common access patterns, administrators need to set permission guardrails to remove permissions for services that have not or should not be accessed by their accounts. Whether you’re just getting started with SCPs or have existing SCPs, you can now use AWS Identity and Access Management (IAM) access advisor to help you restrict permissions confidently.

IAM access advisor uses data analysis to help you set permission guardrails confidently by providing service-last-accessed information for accounts in your organization. By analyzing last-accessed information, you can determine the services not used by IAM users and roles, and implement permission guardrails using SCPs that restrict access to those services. For example, you can identify services not accessed in an organizational unit (OU) for the last 90 days, create an SCP that denies access to these services, and attach it to the OU to restrict access for all IAM users and roles across the accounts in the OU. You can view service-last-accessed information for your accounts, OUs, and your organization using the IAM console in the account you used to create your organization. You can access this information programmatically using IAM access advisor APIs with the AWS Command Line Interface (AWS CLI) or a programmatic client.
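As a sketch of the programmatic path (the entity path and job ID below are placeholders), the access advisor organization APIs are a two-step call: generate a report, then fetch it once the job completes:

    # Kick off an access report for an OU; this returns a JobId
    aws iam generate-organizations-access-report \
        --entity-path o-exampleorgid/r-examplerootid/ou-exampleouid

    # Retrieve the report once the job completes
    aws iam get-organizations-access-report --job-id <JobId-from-previous-call>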

In this post, I first review the service-last-accessed information provided by IAM access advisor using the IAM console. Next, I walk through an example to demonstrate how you can use this information to remove permissions for services not accessed by IAM users and roles within your production OU by creating an SCP.

Use IAM access advisor to view service-last-accessed information using the AWS Management Console

Access advisor provides an access report that displays a list of services and the last-accessed timestamps for when an IAM principal accessed each service. To view the access report in the console, sign in to the IAM console using the account you used to create your organization. Additionally, you need to enable SCPs on your organization root to view the access report. You can view the service-last-accessed information in two ways. First, you can use the Organization activity view to review the service-last-accessed information for an organizational entity such as an account or OU. Second, you can use the SCP view to review the service-last-accessed information for services allowed by existing SCPs attached to your organizational entities.

The Organization activity view lists your OUs and accounts. You can select an OU or account to view the services that the entity is allowed to access and the service-last-accessed information for those services. This tells you which services have not been accessed in an organizational entity. Using this information, you can remove permissions for these services by creating a new SCP and attaching it to the organizational entity, or by updating an existing SCP attached to the entity.

The SCP view lists all the SCPs in your organization. You can select an SCP to view the services allowed by the SCP and the service-last-accessed information for those services. The service-last-accessed information is the last-accessed timestamp across all the organizational entities that the SCP is attached to. This tells you which services are allowed by the SCP but have not been accessed. Using this information, you can refine your existing permission guardrails to remove permissions for services that are not being used.

Figure 1 shows an example of the access report for an OU. You can see the service-last-accessed information for all services that IAM users and roles can access in all the accounts in ProductionOU. You can see that services such as AWS Ground Station and Amazon GameLift have not been used in the last year. You can also see that Amazon DynamoDB was last accessed in account Application1 10 days ago.
 

Figure 1: An example access report for an OU

Now that I’ve described how to view service-last-accessed information, I will walk through an example.

Example: Restrict access to services not accessed in production by creating an SCP

For this example, assume ExampleCorp uses AWS Organizations to organize their development, test, and production environments into organizational units (OUs). Alice is a central security administrator responsible for managing the accounts in the production OU for ExampleCorp. She wants to ensure that her production OU, called ProductionOU, has permissions to only the services that are required to run existing workloads. Currently, Alice hasn’t set any permission guardrails on her production OU. I will show you how you can help Alice review the service-last-accessed information for her production OU and confidently set a permission guardrail using an SCP to restrict access to services not accessed by ExampleCorp developers and applications in production.

Prerequisites

  1. Ensure that the SCP policy type is enabled for the organization. If you haven’t enabled SCPs, you can enable them for your organization root by following the steps in Enabling and Disabling a Policy Type on a Root.
  2. Ensure that your IAM roles or users have appropriate permissions to view the access report. You can grant these permissions by attaching the IAMAccessAdvisorReadOnly managed policy.

How to review service-last-accessed information for ProductionOU in the IAM console

In this section, you’ll review the service-last-accessed information using IAM access advisor to determine the services that have not been accessed across all the accounts in ProductionOU.

  1. Start by signing in to the IAM console in the account that you used to create the organization.
  2. In the left navigation pane, under the AWS Organizations section, select the Organization activity view.

    Note: Enabling the SCP policy type does not set any permission guardrails for your organization unless you start attaching SCPs to accounts and OUs in your organization.

  3. In the Organization activity view, select ProductionOU from the organization structure displayed on the console so you can review the service-last-accessed information across all accounts in that OU.
     
    Figure 2: Select 'ProductionOU' from the organizational structure

  4. Selecting ProductionOU opens the Details and activity tab, which displays the access report for this OU. In this example, I have no permission guardrail set on the ProductionOU, so the default FullAWSAccess SCP is attached, allowing the ProductionOU to have access to all services. The access report displays all AWS services along with their last-accessed timestamps across accounts in the OU.
     
    Figure 3: The service access report

  5. Review the access report for ProductionOU to determine services that have not been accessed across accounts in this OU. In this example, there are multiple accounts in ProductionOU. Based on the report, you can identify that AWS Ground Station and Amazon GameLift have not been used in the last 365 days. Using this information, you can confidently set a permission guardrail by creating and attaching a new SCP that removes permissions for these services from ProductionOU. You can use a different time period, such as 90 days or 6 months, to determine whether a service is unused, based on your preference.
     
    Figure 4: Amazon GameLift and AWS Ground Station are not accessed

Create and attach a new SCP to ProductionOU in the AWS Organizations console

In this section, you’ll use the access insights you gained from using IAM access advisor to create and attach a new SCP to ProductionOU that removes permissions to Ground Station and GameLift.

  1. In the AWS Organizations console, select the Policies tab, and then select Create policy.
  2. In the Create new policy window, give your policy a name and description that will help you quickly identify it. For this example, I use the following name and description.
    • Name: ProductionGuardrail
    • Description: Restricts permissions to services not accessed in ProductionOU.
  3. The policy editor provides you with an empty statement in the text editor to get started. Position your cursor inside the policy statement. The editor detects the content of the policy statement you selected, and allows you to add relevant Actions, Resources, and Conditions to it using the left panel.
     
    Figure 5: SCP editor tool

  4. Next, add the services you want to restrict. Using the left panel, select the services Ground Station and GameLift. Denying access to services using SCPs is a powerful action if these services are in use. From the service-last-accessed information I reviewed in step 5 of the previous section, I know these services haven’t been used for more than 365 days, so it is safe to remove access to them. In this example, I’m not adding any resource or condition to my policy statement.
     
    Figure 6: Add the services you want to restrict

  5. Next, use the Resource policy element, which allows you to provide specific resources. In this example, I select the resource type as All Resources.
     
    Figure 7: Select resource type as "All Resources"

  6. Select the Create Policy button to create your policy. You can see the new policy in the Policies tab.
     
    Figure 8: The new policy on the "Policies" tab

  7. Finally, attach the policy to ProductionOU where you want to apply the permission guardrail. (A sketch of the resulting policy document follows these steps.)
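The resulting ProductionGuardrail policy document would look something like the following sketch (the console generates the final JSON for you; the service prefixes for AWS Ground Station and Amazon GameLift are groundstation and gamelift):

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "DenyUnusedServices",
                "Effect": "Deny",
                "Action": [
                    "groundstation:*",
                    "gamelift:*"
                ],
                "Resource": "*"
            }
        ]
    }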

Alice can now review the service-last-accessed information for the ProductionOU and set permission guardrails for her production accounts. This ensures that the permission guardrail Alice set for her production accounts provides permissions to only the services that are required to run existing workloads.

Summary

In this post, I reviewed how access advisor provides service-last-accessed information for AWS organizations. Then, I demonstrated how you can use the Organization activity view to review service-last-accessed information and set permission guardrails to restrict access only to the services that are required to run existing workloads. You can also retrieve service-last-accessed information programmatically. To learn more, visit the documentation for retrieving service last accessed information using APIs.

If you have comments about using IAM access advisor for your organization, submit them in the Comments section below. For questions related to reviewing the service last accessed information through the console or programmatically, start a thread on the IAM forum or contact AWS Support.


Ujjwal Pugalia

Ujjwal is the product manager for the console sign-in and sign-up experience at AWS. He enjoys working in the customer-centric environment at Amazon because it aligns with his prior experience building an enterprise marketplace. Outside of work, Ujjwal enjoys watching crime dramas on Netflix. He holds an MBA from Carnegie Mellon University (CMU) in Pittsburgh.

from AWS Security Blog

How to host and manage an entire private certificate infrastructure in AWS

AWS Certificate Manager (ACM) Private Certificate Authority (CA) now offers the option of managing online root CAs and a full online PKI hierarchy, expanding its capabilities so that you can host and manage your organization’s entire private certificate infrastructure in AWS.

CA administrators can use ACM Private CA to create a complete CA hierarchy, including root and subordinate CAs, with no need for external CAs. Customers can create secure and highly available CAs in any one of the AWS Regions in which ACM Private CA is available, without building and maintaining their own on-premises CA infrastructure. ACM Private CA provides essential security for operating a CA in accordance with your internal compliance rules and security best practices. ACM Private CA is secured with AWS-managed hardware security modules (HSMs), removing the operational and cost burden from customers.

An overview of CA hierarchy

Certificates are used to establish identity and secure connections. A resource presents a certificate to a server to establish its identity. If the certificate is valid, and a chain can be constructed from the certificate to a trusted root CA, the server can positively identify and trust the resource.

A CA hierarchy provides strong security and restrictive access controls for the most-trusted root CA at the top of the trust chain, while allowing more permissive access and bulk certificate issuance for subordinate CAs lower in the chain.

The root CA is a cryptographic building block (root of trust) upon which certificates can be issued. It consists of a private key for signing (issuing) certificates and a root certificate that identifies the root CA and binds the private key to the name of the CA. The root certificate is distributed to the trust stores of each entity in an environment. When resources attempt to connect with one another, they check the certificates that each entity presents. If the certificates are valid and a chain can be constructed from the certificate to a root certificate installed in the trust store, a “handshake” occurs between resources that cryptographically proves the identity of each entity to the other. This creates an encrypted communication channel (TLS/SSL) between them.
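As a generic illustration of chain building (using OpenSSL here, not an ACM Private CA command; the file names are placeholders), a server or client library performs a check equivalent to:

    # Succeeds only if leaf.pem chains through the untrusted intermediate(s)
    # to a root certificate present in root-ca.pem (the trust store)
    openssl verify -CAfile root-ca.pem -untrusted intermediate.pem leaf.pem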

How to configure a CA hierarchy with ACM Private CA

You can use root CAs to create a CA hierarchy without the need for an external root CA, and start issuing certificates to identify resources within your organizations. You can create root and subordinate CAs in nearly any configuration you want, including defining a CA structure to fit your needs or replicating an existing CA structure.

To get started, you can use the ACM Private CA console, APIs, or CLI to create a root and subordinate CA and issue certificates from the subordinate CA.
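For example, a root CA created from the AWS CLI might look like the following sketch (the subject fields and the key and signing algorithms are placeholder choices, not recommendations):

    aws acm-pca create-certificate-authority \
        --certificate-authority-type "ROOT" \
        --certificate-authority-configuration '{
            "KeyAlgorithm": "RSA_2048",
            "SigningAlgorithm": "SHA256WITHRSA",
            "Subject": {
                "Country": "US",
                "Organization": "Example Corp",
                "CommonName": "Example Corp Root CA"
            }
        }'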
 

Figure 1: Creating a root CA

You can create a two-level CA hierarchy in less than 10 minutes using the ACM Private CA console wizard, which walks you through each step of creating a root or subordinate CA. When you create a subordinate CA, the wizard prompts you to chain the subordinate to a parent CA.
 

Figure 2: The "Install subordinate CA certificate" page

After creating a new root CA, you need to distribute the new root to the trust stores in your servers’ operating systems and browsers. If you want a simple, one-level CA hierarchy for development and testing, you can create a root certificate authority and start issuing private certificates directly from the root CA.

Note: The trade-off of this approach is that you can’t revoke the root CA certificate because the root CA certificate is installed in your trust stores. To effectively “untrust” the root CA in this scenario, you would need to replace the root CA in your trust stores with a new root CA.

Offline versus online root CAs

Some organizations, and all public CAs, keep their root CAs offline (that is, disconnected from the network) in a physical vault. In contrast, most organizations have root CAs that are connected to the network only when they’re used to sign the certificates of CAs lower in the chain. For example, customers might create a root CA with a 20-year lifetime and disable it under normal circumstances, so that it can be used only when a privileged administrator enables it to sign CA certificates for a child CA. Because using the root CA to issue a certificate is a rare and carefully controlled operation, customers monitor logs and audit reports, and generate alarms notifying them when their root CA is used to issue a certificate. Subordinate issuing CAs are the lowest in the hierarchy. They are typically used for bulk issuance of certificates that identify devices and resources. Subordinate issuing CAs typically have shorter lifetimes (1-2 years) and fewer policy controls and monitors.

With ACM Private CA, you can create a trusted root CA with a lifetime of 10 or more years. All CA private keys are protected by FIPS 140-2 level 3 HSMs. You can verify the CA is used only for authorized purposes by reviewing AWS CloudTrail logs and audit reports. You can further protect against mis-issuance by configuring AWS Identity and Access Management (IAM) permissions that limit access to your CA. With an ACM Private CA, you can revoke certificates issued from your CA and use the certificate revocation list (CRL) generated by ACM Private CA to provide revocation information to clients. This simplifies configuration and deployment.
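For example, revoking a certificate issued from your CA is a single call (the ARN, serial number, and region below are placeholders):

    aws acm-pca revoke-certificate \
        --certificate-authority-arn arn:aws:acm-pca:<region>:<account-id>:certificate-authority/<ca-id> \
        --certificate-serial "e8:cb:d2:be:db:12:dd:c8" \
        --revocation-reason "CESSATION_OF_OPERATION"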

Customer use cases for root CA hierarchy

There are three common use cases for root CA hierarchy.

The most common use case is customers who are advanced PKI users and already have an offline root CA protected by an HSM. However, when it comes to development and network staging, they don’t want to use the same root CA and certificate. The new root CA hierarchy feature allows them to easily stand up a PKI for their test environment that mimics production, but uses a separate root of trust.

The second use case is customers who are interested in using a private CA but don’t have strong knowledge of PKI, nor have they invested in HSMs. These customers have gotten by generating a root CA using OpenSSL. With the offering of root CA hierarchy, they’re now able to stand up a root CA within ACM Private CA that is protected by an HSM and restricted by IAM access policy. This increases the security of their hierarchy and simplifies their deployment.

The third use case is customers who are evaluating an internal PKI and also looking at managing an offline HSM. These customers recognize the significant process, management, cost, and training investments to stand up the full infrastructure required. Customers can remove these costs by managing their organization’s entire private certificate infrastructure in AWS.

How to get started

With the ACM Private CA root CA hierarchy feature, you can create a PKI hierarchy and begin issuing private certificates for establishing identity and securing TLS communication. To get started, open the ACM Private CA console. To learn more, read getting started with AWS Certificate Manager and getting started in the ACM Private CA user guide.


Josh Rosenthol

Josh is a Product Manager who helps solve customer problems with public and private certificates and CAs from AWS. He enjoys listening to customers describe their use cases and translating them into improvements to AWS Certificate Manager and ACM Private CA.

Author

Todd Cignetti

Todd Cignetti is a Principal Product Manager at Amazon Web Services. He is responsible for AWS Certificate Manager (ACM) and ACM Private CA. He focuses on helping AWS customers identify and secure their resources and endpoints with public and private certificates.

from AWS Security Blog

How to prompt users to reset their AWS Managed Microsoft AD passwords proactively

If you’re an AWS Directory Service administrator, you can reset your directory users’ passwords from the AWS console or the CLI when their passwords expire. However, you can improve your efficiency by reducing the number of requests for password resets. You can also help improve the security of your organization by having your users proactively reset their directory passwords before they expire. In this post, I describe the steps you can take to set up a solution to send regular reminders to your AWS Directory Service for Microsoft Active Directory (AWS Managed Microsoft AD) users to prompt them to change their password before it expires. This will help prevent users from being locked out when their passwords expire and also reduce the number of reset requests sent to administrators.

Solution Overview

When users’ passwords expire, they typically contact their directory service administrator to help them reset their password. For security reasons, they then need to reset their password again on their computer so that the administrator has no knowledge of the new password. This process is time-consuming and impacts productivity. In this post, I present a solution to remind users automatically to reset AWS Managed Microsoft AD passwords. The following diagram and description explains how the solution works.
 

Figure 1: Solution architecture

  1. A script running on an AWS Managed Microsoft AD domain-joined Amazon Elastic Compute Cloud (Amazon EC2) instance (Notification Server) searches the AWS Managed Microsoft AD for all enabled user accounts and retrieves their names, email addresses, and password expiry dates.
  2. Using the permissions of the IAM role attached to the Notification Server, the script obtains the SES SMTP credentials stored in AWS Secrets Manager.
  3. With the SMTP credentials obtained in Step 2, the script then securely connects to Amazon Simple Email Service (Amazon SES).
  4. Based on your preferences, Amazon SES sends domain password expiry notifications to the users’ mailboxes.
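As a minimal sketch of what step 2 looks like from the script’s side (assuming the AWS Tools for PowerShell module is installed and the secret is named AWS-SES, as in the steps later in this post):

    # Retrieve the SES SMTP credentials from AWS Secrets Manager; the
    # Notification Server's instance role supplies the AWS credentials.
    $secret = Get-SECSecretValue -SecretId 'AWS-SES'
    $json   = $secret.SecretString | ConvertFrom-Json
    # The secret is stored as {"<SMTP username>": "<SMTP password>"}
    $smtpUser = ($json | Get-Member -MemberType NoteProperty).Name
    $smtpPass = $json.$smtpUser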

A separate process for updating the SES credentials stored in AWS Secrets Manager occurs as follows:

  1. A CloudWatch rule triggers a Lambda function.
  2. The Lambda function creates a new access key for the SES IAM user and derives new SES SMTP credentials from it.
  3. The Lambda function then updates AWS Secrets Manager with the new SES credentials.
  4. The Lambda function then deletes the previous IAM access key.

Prerequisites

The instructions in this post assume that you’re familiar with how to create Amazon EC2 for Windows Server instances, use Remote Desktop Protocol (RDP) to log in to the instances, and have completed the following tasks:

  1. Create an AWS Microsoft AD directory.
  2. Join an Amazon EC2 for Windows Server instance to the AWS Microsoft AD domain to use as your Notification Server.
  3. Sign up for Amazon Simple Email Service (Amazon SES).
  4. Remove Amazon EC2 throttling on port 25 for your EC2 instance.
  5. Remove your Amazon SES account from the Amazon SES sandbox so you can also send email to unverified recipients.

Note: You can use your AWS Managed Microsoft AD management instance as the Notification Server. For the steps below, use any account that is a member of the AWS Delegated Administrators group.

Summary of the steps

  1. Verify an Amazon SES email address.
  2. Create Amazon SES SMTP credentials.
  3. Store the Amazon SES SMTP credentials in AWS Secrets Manager.
  4. Create an IAM role with read permissions to the secret in AWS Secrets Manager.
  5. Set up and test the notification script.
  6. Set up Windows Task Scheduler.
  7. Configure automatic rotation of the SES Credentials stored in Secrets Manager.

Step 1: Verify an Amazon SES email address

To prevent unauthorized use, Amazon SES requires that you verify the email address that you use as a “From,” “Source,” “Sender,” or “Return-Path”.

To verify the email address you will use as the sending address, complete the following steps:

  1. Sign in to the Amazon SES console.
  2. In the navigation pane, under Identity Management, select Email Addresses.
  3. Select Verify a New Email Address, and then enter the email address.
  4. Select Verify This Email Address.

An email will be sent to the specified email address with a link to verify the email address. Once you verify the email, you’ll see the Verification Status as verified in the SES console.
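If you prefer the AWS CLI, the equivalent call is a one-liner (the address is a placeholder):

    aws ses verify-email-identity --email-address sender@example.com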

In the image below, I have four verified email addresses:
 

Figure 2: Verified email addresses

Step 2: Create Amazon SES SMTP credentials

You must create an Amazon SES SMTP user name and password to access the Amazon SES SMTP interface and send email using the service. To do this, complete the following steps:

  1. Sign in to the Amazon SES console.
  2. In the navigation bar, select SMTP Settings.
  3. In the content pane, make a note of the Server Name as you will use this when sending the email in Step 5. Select Create My SMTP Credentials.
     
    Figure 3: Make a note of the SES SMTP Server Name

  4. Specify a value for the IAM User Name field. Make a note of this IAM User Name as you will need it in Step 7 later. In this post, I use the placeholder, ses-smtp-user-eu-west-1, as the user name (as shown below):
     
    Figure 4: Make a note of the SES IAM User Name

  5. Select Create.

Make a note of the SMTP Username and SMTP Password you created because you’ll use these in later steps, as shown below in my example.
 

Figure 5: Make a note of the SES SMTP Username and SMTP Password

Step 3: Store the Amazon SES SMTP credentials in AWS Secrets Manager

In this step, use AWS Secrets Manager to store the Amazon SES SMTP credentials created in Step 2. You will reference this credential when you execute the script in the Notification Server.

Complete the following steps to store the Amazon SES SMTP credentials in AWS Secrets Manager:

  1. Sign in to the AWS Secrets Manager Console.
  2. Select Store a new secret, and then select Other types of secrets.
  3. Under Secret Key/value, enter the Amazon SES SMTP Username in the left box and the Amazon SES SMTP Password in the right box, and then select Next.
     
    Figure 6: Enter the Amazon SES SMTP user name and password

  4. In the next screen, enter the string AWS-SES as the name of the secret. Enter an optional description for the secret, add an optional tag, and then select Next.

    Note: I recommend using AWS-SES as the name of your secret. If you choose to use some other name, you will have to update the PowerShell script in Step 5. I also recommend creating the secret in the same region as the Notification Server. If you create your secret in a different region, you will also have to update the PowerShell script in Step 5.

     

    Figure 7: Enter "AWS-SES" as the secret name

  5. On the next screen, leave the default setting, Disable automatic rotation, and select Next. You will come back to this in Step 7, where you will use a Lambda function to rotate the secret at specified intervals.
  6. To store the secret, in the last screen, select Store. Now select the secret and make a note of the ARN of the secret, as shown in Figure 8.
     
    Figure 8: Make a note of the Secret ARN
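Alternatively, here is a sketch of creating the same secret from the AWS CLI (the key/value pair is a placeholder for the SMTP user name and password from Step 2):

    aws secretsmanager create-secret \
        --name AWS-SES \
        --description "SES SMTP credentials for password expiry notifications" \
        --secret-string '{"<SES-SMTP-USERNAME>": "<SES-SMTP-PASSWORD>"}'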

Step 4: Create IAM role with permissions to read the secret

Create an IAM role that grants permissions to read the secret created in Step 3. Then, attach this role to the Notification Server to enable your script to read this secret. Complete the following steps:

  1. Log in to the IAM Console.
  2. In the navigation bar, select Policies.
  3. In the content pane, select Create Policy, and then select JSON.
  4. Replace the content with the following snippet while specifying the ARN of the secret you created earlier in step 3:
    
        {
            "Version": "2012-10-17",
            "Statement": {
                "Effect": "Allow",
                "Action": "secretsmanager:GetSecretValue",
                "Resource": "<arn-of-the-secret-created-in-step-3>"
            }
        }                
        

    Here is how it looks in my example after I replace the placeholder with the ARN of my Secrets Manager secret:
     

    Figure 9: Example policy

  5. Select Review policy.
  6. On the next screen, specify a name for the policy. In my example, I have specified Access-Ses-Secret as the name of the policy. Also specify a description for the policy, and then select Create policy.
  7. In the navigation pane, select Roles.
  8. In the content pane, select Create role.
  9. On the next page, select EC2, and then select Next: Permissions.
  10. Select the policy you created, and then select Next: Tags.
  11. Select Next: Review, and then provide a name for the role. In my example, I have specified SecretsManagerReadAccessRole as the name. Select Create Role.

Now, complete the following steps to attach the role to the Notification Server:

  1. From the Amazon EC2 Console, select the Notification Server instance.
  2. Select Actions, select Instance Settings, and then select Attach/Replace IAM Role.
     
    Figure 10: Select "Attach/Replace IAM Role"

  3. On the Attach/Replace IAM Role page, choose the role to attach from the drop-down list. For this post, I choose SecretsManagerReadAccessRole and select Apply.

    Here is how it looks in my example:
     

    Figure 11: Example "Attach/Replace IAM Role"

Step 5: Set up and test the notification script

In this section, you’re going to test the script by sending a sample notification email to an end user to remind the user to change their password. To test the script, log into your Notification Server using your AWS Microsoft Managed AD default Admin account. Then, complete the following steps:

  1. Install the PowerShell module for Active Directory by opening PowerShell as Administrator and running the following command:

    Install-WindowsFeature -Name RSAT-AD-PowerShell

  2. Download the script to the Notification Server. In my example, I downloaded the script and stored it in the following location:

    c:\scripts\PasswordExpiryNotify.ps1

  3. Create a new user in Active Directory and ensure you enter a valid email address for the new user.

    Note: Make sure to clear the User must change password at next logon check box when creating the user; otherwise, you will get an invalid output from the command in the next step.

    For this example, I created a test user named RandomUser in Active Directory.

  4. In the PowerShell window, execute the following command to determine the number of days remaining before the password for the user expires. In this example, I run the following to determine the number of days remaining before the RandomUser account password expires (an unrolled version of this one-liner follows these steps):

    (New-TimeSpan -Start ((Get-Date).ToLongDateString()) -End ((Get-ADUser -Identity 'RandomUser' -Properties "msDS-UserPasswordExpiryTimeComputed" | Select @{Name="exp";Expression={[datetime]::FromFileTime($_."msDS-UserPasswordExpiryTimeComputed").ToLongDateString()}}) | Select -ExpandProperty exp)).Days

    In my example, I get "15" as the output.

  5. To test the script, navigate to the location of the script on your Notification Server and execute the following:

    .\PasswordExpiryNotify.ps1 -smtpServer "<SES-SMTP-SERVER-NAME-NOTED-IN-STEP-2>" -from "<SENDER LABEL> <SES VERIFIED EMAIL ADDRESS>" -NotifyDays <NUMBER OF DAYS>

    In this example, I navigate to c:\scripts\ and execute:

    .\PasswordExpiryNotify.ps1 -smtpServer "email-smtp.eu-west-1.amazonaws.com" -from "IT Servicedesk <[email protected]>" -NotifyDays 15

A new email will be sent to the user’s mailbox. Verify that the user has received the email.
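The one-liner in step 4 packs several operations together. Here is the same logic unrolled for readability (a sketch using the RandomUser example account):

    # Read the computed password expiry attribute for the user
    $user = Get-ADUser -Identity 'RandomUser' -Properties 'msDS-UserPasswordExpiryTimeComputed'
    # Convert the Windows FILETIME value to a DateTime
    $expiry = [datetime]::FromFileTime($user.'msDS-UserPasswordExpiryTimeComputed')
    # Count the whole days between today and the expiry date
    (New-TimeSpan -Start (Get-Date).Date -End $expiry.Date).Days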

Note: You can update this command to send multiple email reminders to users. For example, if I want to notify users on three occasions (first notification 15 days before password expiration, then 7 days, and once more when there is only 1 day left), I would execute the following:

.\PasswordExpiryNotify.ps1 -smtpServer "email-smtp.eu-west-1.amazonaws.com" -from "IT Servicedesk <[email protected]>" -NotifyDays 1,7,15

Step 6: Set up Windows Task Scheduler

Now that you have tested the script and confirmed that the solution is working as expected, you can set up a Windows Scheduled Task to execute the script daily. To do this:

  1. Open Task Scheduler.
  2. Right-click Task Scheduler Library, and then select Create Task.
  3. Specify a name for the task.
  4. On the Triggers tab, select New.
  5. Select Daily, and then select OK.
  6. On the Actions tab, select New.
  7. Inside Program/Script, type PowerShell.exe
  8. In the Add arguments (optional) box, type the following command, including the full path to the script.

    "C:\Scripts\PasswordExpiryNotify.ps1 -smtpServer '<SES-SMTP-SERVER-NAME-NOTED-IN-STEP-2>' -from '<SENDER LABEL> <SES VERIFIED EMAIL ADDRESS>' -NotifyDays <DAY,DAY,DAY>"

    In my example, I type the following:

    "C:\Scripts\PasswordExpiryNotify.ps1 -smtpServer 'email-smtp.eu-west-1.amazonaws.com' -from 'IT Servicedesk <[email protected]>' -NotifyDays 1,7,15"

  9. Select OK twice, and then enter your password when prompted to complete the steps.

The script will now run daily at the specified time and will send password expiration email notifications to your AWS Managed Microsoft AD users. In my example, a password expiration reminder email is sent to my AWS Managed Microsoft AD users 15 days before expiration, 7 days before expiration, and then 1 day before expiration.
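If you would rather script this than click through Task Scheduler, a rough PowerShell equivalent of the steps above is (a sketch; the sender address is a placeholder, and the run time is arbitrary):

    # Register a daily 8:00 AM task that runs the notification script
    $action = New-ScheduledTaskAction -Execute 'PowerShell.exe' `
        -Argument '-File C:\Scripts\PasswordExpiryNotify.ps1 -smtpServer email-smtp.eu-west-1.amazonaws.com -from "IT Servicedesk <sender@example.com>" -NotifyDays 1,7,15'
    $trigger = New-ScheduledTaskTrigger -Daily -At 8am
    Register-ScheduledTask -TaskName 'PasswordExpiryNotify' -Action $action -Trigger $trigger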

Here is a sample email:
 

Figure 12: Sample password expiration email

Note: You can edit the script to change the notification message to suit your requirements.

Step 7: Configure automatic update of the SES credentials

In this final section, you’re going to set up the configuration to automatically update the secret (that is, the SES credentials stored in AWS Secrets Manager) at regular intervals. To achieve this, you will use an AWS Lambda function that will do the following:

  1. Create a new access key using the IAM user you used to create the SES SMTP Credentials in Step 2 (ses-smtp-user-eu-west-1 in my example).
  2. Generate a new SES SMTP User password from the created IAM secret access key.
  3. Update the SES credentials stored in AWS Secrets Manager.
  4. Delete the old IAM access key.

Complete the following steps to enable automatic update of the SES credentials:

First, you will create the IAM policy that you will attach to a role assumed by the Lambda function. This policy provides the permissions to create new access keys for the SES IAM user and to update the SES credentials stored in AWS Secrets Manager.

  1. Log in to the IAM Console, and in the navigation bar, select Policies.
  2. In the content pane, select Create Policy, and then select JSON.
  3. Replace the content with the following script while specifying the ARN of the IAM user you used to create the SES SMTP credentials in Step 2 and the ARN of the secret stored in Secrets Manager that you noted in Step 3.
    
        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": "iam:*AccessKey*",
                    "Resource": "<arn-of-iam-user-created-in-step-2>"
                },
                {
                    "Effect": "Allow",
                    "Action": "secretsmanager:UpdateSecret",
                    "Resource": "<arn-of-secret-stored-in-secret-manager>"
                }
            ]
        }
        

    Here is the JSON for the policy in my example:
     

    Figure 13: Example policy

  4. Select Review Policy, and then specify a name and a description for the policy. In my example, I have specified the name of the policy as iam-secretsmanager-access-for-lambda.

    Here is how it looks in my example:
     

    Figure 14: Specify a name and description for the policy

  5. Select Create Policy.

Now, create an IAM role and attach this policy.

  1. In the navigation bar, select Roles and select Create Role.
  2. Under Choose the service that will use this role, select Lambda, and then select Next: Permissions.
  3. On the next page, select the policy you just created and select Next: Tags. Add an optional tag and select Next: Review.
  4. Specify a name for the role and description, and then select Create role. In my example, I have named the role: LambdaRoleRotatateSesSecret.

Now, you will create a Lambda function that will assume the created role:

  1. Log on to the AWS Lambda console and select Create function.
  2. Specify a name for the function, and then, under Runtime, select Python 3.7.
  3. Under Execution role, select Use an existing role, and then select the role you created earlier.

    Here are the settings I used in my example:
     

    Figure 15: Settings on the "Create function" page

  4. Select Create function, copy the following Python code, and then paste it in the Function Code section.
    
        import boto3
        import os      #required to fetch environment variables
        import hmac    #required to compute the HMAC key
        import hashlib #required to create a SHA256 hash
        import base64  #required to encode the computed key
        import sys     #required for system functions
        
        iam = boto3.client('iam')
        sm = boto3.client('secretsmanager')
        
        SES_IAM_USERNAME = os.environ['SES_IAM_USERNAME']
        SECRET_ID = os.environ['SECRET_ID']
        
        def lambda_handler(event, context):
            print("Getting current credentials...")
            old_key = iam.list_access_keys(UserName=SES_IAM_USERNAME)['AccessKeyMetadata'][0]['AccessKeyId']
        
            print("Creating new credentials...")
            new_key = iam.create_access_key(UserName=SES_IAM_USERNAME)
            print("New credentials created...")
            
            smtp_username = '%s' % (new_key['AccessKey']['AccessKeyId'])
            iam_sec_access_key = '%s' % (new_key['AccessKey']['SecretAccessKey'])
            
             
            # These variables are used when calculating the SMTP password.
            message = 'SendRawEmail'
            version = '\x02'
            
            # Compute an HMAC-SHA256 key from the AWS secret access key.
            signatureInBytes = hmac.new(iam_sec_access_key.encode('utf-8'),message.encode('utf-8'),hashlib.sha256).digest()
            # Prepend the version number to the signature.
            signatureAndVersion = version.encode('utf-8') + signatureInBytes
            # Base64-encode the string that contains the version number and signature.
            smtpPassword = base64.b64encode(signatureAndVersion)
            # Decode the string and print it to the console.
            ses_smtp_pass = smtpPassword.decode('utf-8')
            secret_string = '{"%s": "%s"}' % (smtp_username, ses_smtp_pass)
            print("Updating credentials in SecretsManager...")
            sm_res = sm.update_secret(
                SecretId=SECRET_ID,
                SecretString=secret_string
                )
            print(sm_res)
            
            print("Deleting old key")
            del_res = iam.delete_access_key(
                UserName=SES_IAM_USERNAME,
                AccessKeyId=old_key
                )
                print(del_res) 
        

    Here is what it will look like:
     

    Figure 16: The Python code pasted in the “Function Code” section

  5. In the Environment variables section, specify the two environment variables required by the Lambda Python code as follows:
    
                SECRET_ID: AWS-SES
                SES_IAM_USERNAME: <SES-IAM-USERNAME-NOTED-IN-STEP 2>  
        

    Here is how my environment variables look:
     

    Figure 17: The environment variables settings

  6. Select Save.

    You have now created a Lambda function that can update the SES credentials stored in AWS Secrets Manager.

    You will now set up CloudWatch to trigger the Lambda function at scheduled intervals.

  7. Open the Amazon CloudWatch Console.
  8. In the navigation pane, select Rules and, in the content pane, select Create Rule.
  9. Under Event Source, select Schedule, and then select Fixed rate of. Specify how often you would like CloudWatch to trigger the Lambda function. In my example, I have chosen to update the SES credentials every 30 days.
  10. Under Targets, select Add Target, and then select Lambda Function.
  11. In Function, select the Lambda function you just created, and then select Configure details.
     
    Figure 18: Create new CloudWatch rule

  12. Specify a name for the rule, enter a description, make sure the State check box is selected, and then select Create rule.

The SES credentials stored in AWS Secrets Manager will now be updated based on the scheduled intervals you specified in CloudWatch.
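
With rotation in place, the script that sends the reminder emails can look up the current SMTP credentials at send time rather than hardcoding them. Here is a minimal sketch of that lookup, assuming the secret holds the single {"<smtp-username>": "<smtp-password>"} pair written by the Lambda function above (the helper name is illustrative):


import json

import boto3

def get_ses_smtp_credentials(secret_id='AWS-SES'):
    """Fetch the current SES SMTP credentials from AWS Secrets Manager."""
    sm = boto3.client('secretsmanager')
    secret = json.loads(sm.get_secret_value(SecretId=secret_id)['SecretString'])
    # The rotation function stores {access_key_id: smtp_password}, so the
    # single key is the SMTP user name and its value is the SMTP password.
    smtp_username, smtp_password = next(iter(secret.items()))
    return smtp_username, smtp_password

The returned pair can then be used with the SES SMTP endpoint, for example via smtplib over port 587.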

Conclusion

In this post, I showed how you can set up a solution to remind your AWS Directory Service for Microsoft Active Directory users to change their passwords before expiration. I demonstrated how you can achieve this using a combination of a script and Amazon SES. I also showed you how you can configure rotation of the Amazon SES credentials on your preferred schedule.

If you have comments about this post, submit them in the “Comments” section below. If you have questions or suggestions, please start a new thread on the Amazon SES forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Tekena Orugbani

Tekena is a Cloud Support Engineer at the AWS Cape Town office. He has many years of experience working with Windows Systems, virtualization/cloud technologies, and directory services. When he’s not helping customers make the most of their cloud investments, he enjoys hanging out with his family and watching Premier League football (soccer).

from AWS Security Blog

How to sign up for a Leadership Session at re:Inforce 2019

The first annual re:Inforce conference is one week away and with two full days of security, identity, and compliance learning ahead, I’m looking forward to the community building opportunities (such as Capture the Flag) and the hundreds of sessions that dive deep into how AWS services can help keep businesses secure in the cloud. The track offerings are built around four main topics (Governance, Risk & Compliance; Security Deep Dive; Security Pioneers; and The Foundation) and to help highlight each track, AWS security experts will headline four Leadership Sessions that cover the overall track structure and key takeaways from the conference.

Join one—or all—of these Leadership Sessions to hear AWS security experts discuss top cloud security trends. But I recommend reserving your spot now – seating is limited for these sessions. (See below for instructions on how to reserve a seat.)

Leadership Sessions at re:Inforce 2019

When you attend a Leadership Session, you’ll learn about AWS services and solutions from the folks who are responsible for them end-to-end. These hour-long sessions are presented by AWS security leads who are experts in their fields. The sessions also provide overall strategy and best practices for safeguarding your environments. See below for the list of Leadership Sessions offered at re:Inforce 2019.

Leadership Session: Security Deep Dive

Tuesday, Jun 25, 12:00 PM – 1:00 PM
Speakers: Bill Reid (Sr Mgr, Security and Platform – AWS); Bill Shinn (Sr Principal, Office of the CISO – AWS)

In this session, Bill Reid, Senior Manager of Security Solutions Architects, and Bill Shinn, Senior Principal in the Office of the CISO, walk attendees through the ways in which security leadership and security best practices have evolved, with an emphasis on advanced tooling and features. Both speakers have provided frontline support on complex security and compliance questions posed by AWS customers; join them in this master class in cloud strategy and tactics.

Leadership Session: Foundational Security

Tuesday, Jun 25, 3:15 PM – 4:15 PM
Speakers: Don “Beetle” Bailey (Sr Principal Security Engineer – AWS); Rohit Gupta (Global Segment Leader, Security – AWS); Philip “Fitz” Fitzsimons (Lead, Well-Architected – AWS); Corey Quinn (Cloud Economist – The Duckbill Group)

Senior Principal Security Engineer Don “Beetle” Bailey and Corey Quinn from the highly acclaimed “Last Week in AWS” newsletter present best practices, features, and security updates you may have missed in the AWS Cloud. With more than 1,000 service updates per year being released, having expert distillation of what’s relevant to your environment can accelerate your adoption of the cloud. As techniques for operationalizing cloud security, compliance, and identity remain a critical business need, this leadership session considers a strategic path forward for all levels of enterprises and users, from beginner to advanced.

Leadership Session: Aspirational Security

Wednesday, Jun 26, 11:45 AM – 12:45 PM
Speaker: Eric Brandwine (VP/Distinguished Engineer – AWS)

How does the cloud foster innovation? Join Vice President and Distinguished Engineer Eric Brandwine as he details why there is no better time than now to be a pioneer in the AWS Cloud, discussing the changes that next-gen technologies such as quantum computing, machine learning, serverless, and IoT are expected to make to the digital and physical spaces over the next decade. Organizations within the large AWS customer base can take advantage of security features that would have been inaccessible even five years ago; Eric discusses customer use cases along with simple ways in which customers can realize tangible benefits around topics previously considered mere buzzwords.

Leadership Session: Governance, Risk, and Compliance

Wednesday, Jun 26, 2:45 PM – 3:45 PM
Speakers: Chad Woolf (VP of Security – AWS); Rima Tanash (Security Engineer – AWS); Hart Rossman (Dir, Global Security Practice – AWS)

Vice President of Security Chad Woolf, Director of Global Security Practice Hart Rossman, and Security Engineer Rima Tanash explain how governance functionality can help ensure consistency in your compliance program. Some specific services covered are Amazon GuardDuty, AWS Config, AWS CloudTrail, Amazon CloudWatch, Amazon Macie, and AWS Security Hub. The speakers also discuss how customers leverage these services in conjunction with each other. Additional attention is paid to the concept of “elevated assurance,” including how it may transform the audit industry going forward. Finally, the speakers discuss how AWS secures its own environment, as well as talk about the control frameworks of specific compliance regulations.

How to reserve a seat

Unlike the Keynote session delivered by AWS CISO Steve Schmidt, Leadership Sessions require a reserved seat to guarantee entrance. Seats are limited, so put down that coffee, pause your podcast, and follow these steps to secure your spot.

  1. Log into the re:Inforce Session Catalog with your registration credentials. (Not registered yet? Head to the Registration page and sign up.)
  2. Select Event Catalog from the Dashboard.
  3. Enter “Leadership Session” in the Keyword Search box and check the “Exact Match” box to filter your results.
  4. Select the Scheduling Options dropdown to view the date and location of the session.
  5. Select the plus mark to add it to your schedule.

And that’s it! Your seat is now reserved. While you’re at it, check out the other available sessions, chalk talks, workshops, builders sessions, and security jams taking place during the event. You can customize your schedule to focus on security topics most relevant to your role, or take the opportunity to explore something new. The session catalog is subject to change, so be sure to check back to see what’s been added. And if you have any questions, email the re:Inforce team at [email protected].

Hope to see you there!

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

author photo

Ashley Nelson

Ashley is a Content Manager within AWS Security. Ashley oversees both print and digital content, and has over six years of experience in editorial and project management roles. Originally from Boston, Ashley attended Lesley University where she earned her degree in English Literature with a minor in Psychology. Ashley is passionate about books, food, video games, and Oxford Commas.

from AWS Security Blog

Working backward: From IAM policies and principal tags to standardized names and tags for your AWS resources

When organizations first adopt AWS, they have to make many decisions that will lay the foundation for their future footprint in the cloud. Part of this includes making decisions about the number of AWS accounts you choose to operate, but another fundamental task is constructing practical access control policies so that your application teams can’t affect each other’s resources within the same account. With AWS Identity and Access Management (IAM), you can customize granular access control policies that are appropriate for your organization, helping you follow security best practices such as separation-of-duties and least-privilege. As every organization is different, you’ll need to carefully consider what your cloud security policies are and how they relate to your cloud engineering teams. Things to consider include who should be authorized to perform which actions, how your teams operate with one another, and which IAM mechanisms are suitable for ensuring that only authorized access is allowed.

In this blog post, I’ll show you an approach that works backwards, starting with a set of customer requirements, then utilizing AWS features such as IAM conditions and principal tagging. Combined with an AWS resource naming and tagging strategy, this approach can help you meet your access control objectives. AWS recently enabled tags on IAM principals (users and roles), which allows you to create a single reusable policy that provides access based on the tags of the IAM principal. When you combine this feature with a standardized resource naming and tagging convention, you can craft a set of IAM roles and policies suitable for your organization.

AWS features used in this approach

To follow along, you should have a working knowledge of IAM and tagging, and familiarity with concepts such as IAM policy conditions, principal tags, permissions boundaries, and ARN patterns.

Introducing Example Corporation

To illustrate the strategies I discuss, I’ll refer to a fictitious customer throughout my post: Example Corporation is a large organization that wants to use their existing Microsoft Active Directory (AD) as their identity store, with Active Directory Federation Services (AD FS) as the means to federate into their AWS accounts. They also have multiple business projects, some of which will need their own AWS accounts, and others that will share AWS accounts due to the dependencies of the applications within those projects. Each project has multiple application teams who do not need to access each other’s AWS resources.

Example Corporation’s access control requirements

Example Corporation doesn’t always dedicate a single AWS account to one team or one environment. Sometimes, multiple project teams work within the same account, and sometimes they have more than one environment in an account. Figure 1 shows how the Website Marketing and Customer Marketing project teams (each of which has multiple application teams) share two AWS accounts: a development and staging AWS account and a production AWS account. Although production has a dedicated AWS account, Example Corporation has decided that a shared development and staging account is acceptable.
 

Figure 1: AWS accounts shared by Example Corp’s teams

The development and staging environments share an AWS account, and the two teams do work closely together. All projects within an account will be allowed access to the read-only metadata of other resources, such as EC2 instance names, tags, and IAM information. However, each project team wants to prevent their application resources from being modified by the other team’s members.

Initial decisions for supporting shared account access control

Example Corporation decides to continue using their existing identity federation solution for access to AWS, as the existing processes for handling joiners, movers, and leavers can be extended to manage identities within AWS. They will enable this via Security Assertion Markup Language (SAML) provided by AD FS to allow Example Corporation’s AD users to access AWS by assuming IAM roles. Initially, they will create three IAM roles—project administrator, application administrator, and application operator—with additional roles to come later.

The company knows they need to implement access controls through IAM, and they’ve created an initial list of AWS services (EC2, RDS, S3, SNS, and Amazon CloudWatch) to secure. Infrastructure as code (IaC) is a new concept at Example Corporation, so they want to keep initial IAM roles and policies as simple as possible. IAM principal tags will help them reuse standard policies across accounts. Principal tags are global condition keys assigned to a user or role. They can be used within a condition to ensure that a new resource is tagged on creation with a value that matches your principal. They can also be used to verify that an existing resource has a matching tag prior to allowing an action against that resource.

Many, but not all, AWS services support tag-based authorization of AWS resources. For services that don’t support tag-based authorization, Example Corporation will enable access control by utilizing ARN paths with wildcards (ARN matching). The name of the resource and its ARN path will explicitly state which projects, applications, and operators have access to that resource. This will require the company to design and enforce a mandatory naming convention.

Please see the IAM user guide for an up-to-date list of resources that support tag-based authorization.

Using multiple tags to meet access control requirements

The web and marketing teams have settled on three common roles and have decided their access levels as follows:

  • Project administrator: Able to access and modify all resources for a specific project, including all the resources belonging to application teams under the project.
  • Application administrator: Able to access and modify only the resources owned by a particular application team.
  • Application operator: Able to access and modify only the resources owned by a specific application team, plus those that reside within one of three environments: development, staging, or production.

 

Figure 2: Example Corp’s teams—administrators and operators with AWS access

As for the principal tags, there will be three unique tags named with the prefix access-, with tag values that differentiate the roles and their resources from other projects, applications, and environments.

Finally, because the AWS account is shared, Example Corporation needs to account for the service usage costs of the two teams. By adding a mandatory tag for “cost center,” they can determine the costs of the web team’s resources versus the marketing team’s resources in AWS Cost Explorer and AWS Cost and Usage Report.

Below is an example of the web team’s tags.

IAM principal tags used for the website project administrator role:

Tag name             Tag value
access-project       web
cost-center          123456

Tags for the website application administrator role:

Tag name             Tag value
access-project       web
access-application   nginx
cost-center          123456

Tags for the website application operator role—specifically for developer access to the dev environment:

Tag name             Tag value
access-project       web
access-application   nginx
access-environment   dev
cost-center          123456
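
Tags like these can be attached when the roles are created or added later. As a sketch, here is how the operator role’s tags above might be applied with boto3 (the role name is illustrative):


import boto3

iam = boto3.client('iam')

# Attach the website application operator tags from the table above.
iam.tag_role(
    RoleName='web-nginx-operator',  # illustrative role name
    Tags=[
        {'Key': 'access-project', 'Value': 'web'},
        {'Key': 'access-application', 'Value': 'nginx'},
        {'Key': 'access-environment', 'Value': 'dev'},
        {'Key': 'cost-center', 'Value': '123456'},
    ],
)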

Access control for AWS services and resources that support tag-based authorization

Example Corporation now needs to write IAM policies for their targeted resources. They begin with EC2, as that will be their most widely used service. The IAM documentation for EC2 shows that most write actions (create, modify, delete) support tag-based authorization, allowing the principal to execute the action only if the resource’s tag matches a predefined value.

For example, the following policy statement will only allow EC2 instances to be started or stopped if the resource tag value matches the “web” project name:


{
    "Action":[
        "ec2:StartInstances",
        "ec2:StopInstances"
    ],
    "Resource":[
        "arn:aws:ec2:*:*:instance/*"
    ],
    "Effect":"Allow",
    "Condition":{
        "StringEquals":{
            "ec2:ResourceTag/access-project":"web"
        }
    }
}         

However, if Example Corporation uses a policy variable instead of hardcoding the project name, the company can reuse the policy by taking advantage of the aws:PrincipalTag condition key:


{
    "Action":[
        "ec2:StartInstances",
        "ec2:StopInstances"
    ],
    "Resource":[
        "arn:aws:ec2:*:*:instance/*"
    ],
    "Effect":"Allow",
    "Condition":{
        "StringEquals":{
            "ec2:ResourceTag/access-project":"${aws:PrincipalTag/access-project}"
        }
    }
}    

Without policy variables, every IAM policy for every project would need a unique value to control access to the resource. Because the text of every policy document would be different, Example Corporation wouldn’t be able to reuse policies from one account to another or from one environment to another. Variables allow them to deploy the same policy file to all of their accounts, while allowing the effect of the policy to differ based on the tags that are used in each account.

As a result, Example Corporation will base the right to manipulate resources like EC2 on resource tags as much as possible. It is important, then, for their teams to tag each resource at the time of creation, if the resource supports it. Untagged resources won’t be manageable by these roles, while properly tagged resources are manageable from the moment they’re created. The company will use the aws:RequestTag IAM condition key to ensure that the required access tags and cost allocation tags are assigned at the time of EC2 creation. The IAM policy associated with the application-operator role will therefore be:


{       
    "Sid": "AllowEC2ResourceCreationWithRequiredTags",
    "Action": [
        "ec2:CreateVolume",
        "ec2:RunInstances"
    ],      
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],      
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-environment": "${aws:PrincipalTag/access-environment}",
            "aws:RequestTag/cost-center": "${aws:PrincipalTag/cost-center}"
        }
    }
},
{       
    "Sid": "AllowCreateTagsIfRequestingValidTags",
    "Action": [
        "ec2:CreateTags"
    ],
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-environment": "${aws:PrincipalTag/access-environment}",
            "ec2:CreateAction": "RunInstances"
        }
    }
}
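
For a caller to satisfy these aws:RequestTag conditions, the tags must be supplied in the creation request itself. Here is a hedged boto3 sketch for the operator role above (the AMI ID and instance type are placeholders); tagging at launch through tag specifications is also what makes the ec2:CreateAction condition in the second statement match:


import boto3

ec2 = boto3.client('ec2')

# Tags supplied at launch are what the aws:RequestTag conditions evaluate.
ec2.run_instances(
    ImageId='ami-0123456789abcdef0',  # placeholder AMI ID
    InstanceType='t3.micro',          # placeholder instance type
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        'ResourceType': 'instance',
        'Tags': [
            {'Key': 'access-project', 'Value': 'web'},
            {'Key': 'access-application', 'Value': 'nginx'},
            {'Key': 'access-environment', 'Value': 'dev'},
            {'Key': 'cost-center', 'Value': '123456'},
        ],
    }],
)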

If someone tries to create an EC2 instance without setting proper tags, the RunInstances API call will fail. The application-administrator policy will be similar, with the added ability to create a resource in any environment:


{       
    "Sid": "AllowEC2ResourceCreationWithRequiredTags",
    "Action": [
        "ec2:CreateVolume",
        "ec2:RunInstances"
    ],      
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],      
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-zone": [ "dev", "stg", "prd" ],   
            "aws:RequestTag/cost-center": "${aws:PrincipalTag/cost-center}"
        }
    }
},
{       
    "Sid": "AllowCreateTagsIfRequestingValidTags",
    "Action": [
        "ec2:CreateTags"
    ],
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ],
            "ec2:CreateAction": "RunInstances"  
        }
    }
}    

And finally, the project-administrator policy will have the most access. Note that even though this policy is for a project administrator, the user is still limited to modifying resources only within three environments. In addition, to ensure that all resources have the required access-application tag, Example Corporation has added a null condition to verify that the tag value is non-empty:


{       
    "Sid": "AllowEC2ResourceCreationWithRequiredTags",
    "Action": [
        "ec2:CreateVolume",
        "ec2:RunInstances"
    ],      
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],      
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ],
            "aws:RequestTag/cost-center": "${aws:PrincipalTag/cost-center}"
        },
        "Null": {
            "aws:RequestTag/access-application": false
        }
    }
},
{       
    "Sid": "AllowCreateTagsIfRequestingValidTags",
    "Action": [
        "ec2:CreateTags"
    ],
    "Resource": [
        "arn:aws:ec2:*:*:instance/*",
        "arn:aws:ec2:*:*:volume/*"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ],
            "ec2:CreateAction": "RunInstances"  
        }
    }
}

Access control for AWS services and resources without tag-based authorization

Some services don’t support tag-based authorization. In those cases, Example Corporation will use ARN pattern matching. Many AWS resources use ARNs that contain a user-created name. Therefore, the company’s proposal is to name resources following a naming convention. A name will look like: [project]-[application]-[environment]-myresourcename. For resources that are globally unique, such as S3, Example Corporation additionally requires its abbreviated name, “exco,” to be at the beginning of the resource so as to avoid a naming collision with another corporation’s buckets:


arn:aws:s3:::exco-web-nginx-dev-staticassets
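
A convention like this is straightforward to codify in provisioning tooling. Here is a small illustrative sketch (the function name is hypothetical) that assembles names from the convention’s components:


def resource_name(project, application, environment, suffix,
                  globally_unique=False, org_prefix='exco'):
    """Build a name of the form [project]-[application]-[environment]-suffix."""
    parts = [project, application, environment, suffix]
    if globally_unique:  # e.g., S3 bucket names must be unique across all accounts
        parts.insert(0, org_prefix)
    return '-'.join(parts)

# resource_name('web', 'nginx', 'dev', 'staticassets', globally_unique=True)
# returns 'exco-web-nginx-dev-staticassets'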

To enforce this naming convention, they craft a reusable IAM policy that ensures that only intended users with matching access-project, access-application, and access-environment tag values can modify their resources. In addition, using * wildcard matches, they are able to allow for custom resource name suffixes such as staticassets in the above example. Using an AWS SNS topic as an example, a snippet of the IAM policy associated with the application-operator role will look like this:


{       
    "Sid": "AllowSNSListAccess",
    "Effect": "Allow",
    "Action": [
        "sns:ListTopics",
        "sns:ListSubscriptions*",
        ...
    ],      
    "Resource": "*"
},
{       
    "Sid": "AllowSNSAccessBasedOnArnMatching",
    "Effect": "Allow",
    "Action": [
        "sns:CreateTopic",
        "sns:DeleteTopic",
        ...
    ],      
    "Resource": [
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-${aws:PrincipalTag/access-environment}-*"
    ]
} 

And here’s an IAM policy for an application-admin:


{       
    "Sid": "AllowSNSListAccess",
    "Effect": "Allow",
    "Action": [
        "sns:ListTopics",
        "sns:ListSubscriptions*",
        ...
    ],      
    "Resource": "*"
},
{       
    "Sid": "AllowSNSAccessBasedOnArnMatching",
    "Effect": "Allow",
    "Action": [
        "sns:CreateTopic",
        "sns:DeleteTopic",
        ...
    ],            
    "Resource": [
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-dev-*",
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-stg-*",
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-prd-*"
    ]
}

And finally, here’s the IAM policy for a project-admin:


{       
    "Sid": "AllowSNSListAccess",
    "Effect": "Allow",
    "Action": [
        "sns:ListTopics",
        "sns:ListSubscriptions*",
        ...
    ],      
    "Resource": "*"
},
{       
    "Sid": "AllowSNSAccessBasedOnArnMatching",
    "Effect": "Allow",
    "Action": [
        "sns:*" 
    ],      
    "Resource": [
        "arn:aws:sns:*:*:${aws:PrincipalTag/access-project}-*"
    ]
}

The above policies have two caveats, however. First, they require that the principal tags have values that do not include a hyphen, as it is used as a delimiter according to Example Corporation’s new tag-based convention for access control. In addition, a forward slash cannot be used, as it is in use within ARNs by many AWS resources, such as S3 buckets:


arn:aws:s3:::awsexamplebucket/exampleobject.png
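
Provisioning tooling can pre-check tag values against these rules before any API call is made; the boundary policies below then enforce the same rules authoritatively with StringNotLike conditions. A trivial sketch of such a check (the function name is illustrative):


def is_valid_access_tag_value(value):
    """Reject the convention's delimiter (hyphen) and the ARN separator (slash)."""
    return bool(value) and '-' not in value and '/' not in value

assert is_valid_access_tag_value('nginx')
assert not is_valid_access_tag_value('web-nginx')  # hyphen is the delimiter
assert not is_valid_access_tag_value('web/nginx')  # slash appears in ARNs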

It is important that the company doesn’t let users create resources with disallowed or invalid tags. The following application admin permissions boundary policy uses a condition to permit IAM roles to be created, but only if they are tagged appropriately. Please note that these are just snippets of the boundary policy for the sake of illustration:


{       
    "Sid": "AllowIamCreateTagsOnUserOrRole",
    "Action": [
        "iam:TagUser",
        "iam:TagRole"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-application": "${aws:PrincipalTag/access-application}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ]
        },      
        "StringNotLike": {
            "aws:RequestTag/access-project": [ "*-*", "*/*" ],
            "aws:RequestTag/access-application": [ "*-*", "*/*" ]            
        }       
    },      
    "Resource": [
        "arn:aws:iam::*:user/${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-*",
        "arn:aws:iam::*:role/${aws:PrincipalTag/access-project}-${aws:PrincipalTag/access-application}-*"
    ]
}

And likewise, this permissions boundary policy attached to the project admin will do the same:


{       
    "Sid": "AllowIamCreateTagsOnUserOrRole",
    "Action": [
        "iam:TagUser",
        "iam:TagRole"
    ],
    "Effect": "Allow",
    "Condition": {
        "StringEquals": {
            "aws:RequestTag/access-project": "${aws:PrincipalTag/access-project}",
            "aws:RequestTag/access-environment": [ "dev", "stg", "prd" ]
        },      
        "StringNotLike": {
            "aws:RequestTag/access-project": [ "*-*", "*/*" ],
            "aws:RequestTag/access-application": [ "*-*", "*/*" ]            
        }       
    },      
    "Resource": [
        "arn:aws:iam::*:user/${aws:PrincipalTag/access-project}-*",
        "arn:aws:iam::*:role/${aws:PrincipalTag/access-project}-*"
    ]
}

Note that the above boundary policies can also be crafted using allow statements and multiple explicit deny statements.

Example Corporation’s resource naming convention requirements

As shown in the above examples, Example Corporation has given project teams the ability to create resources with name-based access control for services that currently do not support tag-based authorization (such as SQS and S3). Through the use of wildcards, teams can still provide custom names to their resources to differentiate from other resources created within the same team.

AWS resources have various limits on the structure and composition of names, so the company restricts the character length on access tags. For example, Amazon ElastiCache cluster names must be 20 alphanumeric characters or less, including hyphens. Most AWS resources have higher character limits, but Example Corporation limits their [project]-[application]-[environment] prefix to a 3-character project ID, 5-character application ID, and 3-character maximum environment name to satisfy their requirements, as this will equal a total of 14 characters (for example, web-nginx-prd-), which leaves 6 characters remaining for the user-specified cluster name.

Summary of Key Decisions

  • Services that support tag-based authorization (TBA) must have resources that follow a tagging convention for access control. Tagging on resource creation will be enforced where possible.
  • Services that do not support TBA must have resources that follow a naming convention. The cost center tag will still be required and will be applied after resource creation.
  • Services that do not support TBA, and cannot have user-specified names in their ARN (less common), will be addressed on a case-by-case basis. They will either need to allow access for all projects and application teams sharing the same account, or allow access via a custom IAM policy created on a case-by-case basis so that only the desired team can access the resource. Each IAM role should leave a few unused slots short of the maximum number of policies allowed per role in order to accommodate custom policies.
  • It is acceptable to allow basic List* and Describe* IAM permissions for AWS resources for all users who log in to the account, as the company’s project teams work closely together.
  • IAM user and role names created by project and application admins must adhere to the approved resource naming conventions. Admins themselves will have a permissions boundary policy applied to their roles. This policy, in turn, will require that all users and roles the admins create have a permissions boundary policy. This is especially important for roles associated with resources that can potentially create or modify IAM resources, such as EC2 and Lambda.
  • Active Directory users who need access to AWS resources must assume different IAM roles in order to utilize the different levels of access that the project admin, application admin, and application operator each provide. Users must also assume a different role if they need access to a different project. This is because each role’s tag has a single value. In this scheme, a single role cannot be assigned to multiple projects or application teams.

Conclusion

Example Corporation was able to allow their project teams to share the same AWS account while still limiting access to a majority of the account’s AWS resources. Through the use of IAM principal tagging, combined with a resource naming and tagging convention, they created a reusable set of IAM policies that restricted access not only between project and application admins and users, but also between different development, stage, and production users.

If you have comments about this post, submit them in the “Comments” section below. If you have questions about or issues implementing this solution, please start a new thread on the IAM forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Michael Chan

Michael is a Professional Services Consultant who has assisted commercial and Federal customers with their journey to AWS. He enjoys understanding customer problems and working backwards to provide practical solutions.

from AWS Security Blog

Definitely not an AWS Security Profile: Corey Quinn, a “Cloud Economist” who doesn’t work here

In the weeks leading up to re:Inforce, we’ll share conversations we’ve had with people who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


You don’t work at AWS, but you do have deep experience with AWS Services. Can you talk about how you developed that experience and the work that you do as a “Cloud Economist?”

I see those sarcastic scare-quotes!

I’ve been using AWS for about a decade in a variety of environments. It sounds facile, but it turns out that being kinda good at something starts with being abjectly awful at it first. Once you break things enough times, you start to learn how to wield them in more constructive ways.

I have a background in SRE-style work and finance. Blending those together into a made-up thing called “Cloud Economics” made sense and focused on a business problem that I can help solve. It starts with finding low-effort cost savings opportunities in customer accounts and quickly transitions into building out costing predictions, allocating spend—and (aligned with security!) building out workable models of cloud governance that don’t get in an engineer’s way.

This all required me to be both broad and deep across AWS’s offerings. Somewhere along the way, I became something of a go-to resource for the community. I don’t pretend to understand how it happened, but I’m incredibly grateful for the faith the broader community has placed in me.

You’re known for your snarky newsletter. When you meet AWS employees, how do they tend to react to you?

This may surprise you, but the most common answer by far is that they have no idea who I am.

It turns out AWS employs an awful lot of people, most of whom have better things to do than suffer my weekly snarky slings and arrows.

Among folks who do know who I am, the response has been nearly universal appreciation. It seems that the newsletter is received in the spirit in which I intend it—namely, that 90–95% of what AWS does is awesome. The gap between that and perfection offers boundless opportunities for constructive feedback—and also hilarity.

The funniest reaction I ever got was when someone at a Summit registration booth saw “Last Week in AWS” on my badge and assumed I was an employee serving out the end of his notice period.

“Senior RageQuit Engineer” at your service, I suppose.

You’ve been invited to present during the Leadership Session for the re:Inforce Foundation Track with Beetle. What have you got planned?

Ideally not leaving folks asking incredibly pointed questions about how the speaker selection process was mismanaged! If all goes well, I plan on being able to finish my talk without being dragged off the stage by AWS security!

I kid. But my theory of adult education revolves around needing to grab people’s attention before you can teach them something. For better or worse, my method for doing that has always been humor. While I’m cognizant that messaging to a large audience of security folks requires a delicate touch, I don’t subscribe to the idea that you can’t have fun with it as well.

In short: if nothing else, it’ll be entertaining!

What’s one thing that everyone should stop reading and go do RIGHT NOW to improve their security posture?

Easy. Log into the console of your organization’s master account and enable AWS CloudTrail for all regions and all accounts in your organization. Direct that trail to a locked-down S3 bucket in a completely separate, highly restricted account, and you’ve got a forensic log of all management operations across your estate.

Worst case, you’ll thank me later. Best case, you’ll never need it.
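
A minimal boto3 sketch of that setup, run from the organization’s master account: it assumes organization trails are enabled for your organization and that the locked-down bucket (the name here is a placeholder) already exists in the separate account with a bucket policy that lets CloudTrail write to it.


import boto3

ct = boto3.client('cloudtrail')

# One trail that records management events across all regions
# and all accounts in the organization.
ct.create_trail(
    Name='org-forensic-trail',                   # illustrative trail name
    S3BucketName='example-org-cloudtrail-logs',  # placeholder bucket name
    IsMultiRegionTrail=True,
    IsOrganizationTrail=True,
)
ct.start_logging(Name='org-forensic-trail')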

It’s important, so what’s another security thing everyone should do?

Log in to your AWS accounts right now and update your security contact to your ops folks. It’s not used for marketing; it’s a point of contact for important announcements.

If you’re like many rapid-growth startups, your account is probably pointing to your founder’s personal email address—which means critical account notices are getting lost among Amazon.com sock purchase receipts.

That is not what being “SOC-compliant” means.

From a security perspective, what recent AWS release are you most excited about?

It was largely unheralded, but I was thrilled to see AWS Systems Manager Parameter Store (it’s a great service, though the name could use some work) receive higher API rate limits; it went from 40 to 1,000 requests per second.

This is great for concurrent workloads and makes it likelier that people will manage secrets properly without having to roll their own.

Yes, I know that AWS Secrets Manager is designed around secrets, but KMS-encrypted parameters in Parameter Store also get the job done. If you keep pushing I’ll go back to using Amazon Route 53 TXT records as my secrets database… (Just kidding. Please don’t do this.)
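
For reference, a minimal sketch of that Parameter Store pattern (the parameter name and value are illustrative):


import boto3

ssm = boto3.client('ssm')

# Store a secret as a KMS-encrypted SecureString parameter.
ssm.put_parameter(
    Name='/myapp/db-password',  # illustrative parameter name
    Value='example-secret-value',
    Type='SecureString',
    Overwrite=True,
)

# Read it back, decrypted.
value = ssm.get_parameter(
    Name='/myapp/db-password',
    WithDecryption=True,
)['Parameter']['Value']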

In your opinion, what’s the biggest challenge facing cloud security right now?

The same thing that’s always been the biggest challenge in security: getting people to care before a disaster happens.

We see the same thing in cloud economics. People care about monitoring and controlling cloud spend right after they weren’t being diligent and wound up with an unpleasant surprise.

Thankfully, with an unexpectedly large bill, you have a number of options. But you don’t get a do-over with a data breach.

The time to care is now—particularly if you don’t think it’s a focus area for you. One thing that excites me about re:Inforce is that it gives an opportunity to reinforce that viewpoint.

Five years from now, what changes do you think we’ll see across the cloud security landscape?

I think we’re already seeing it now. With the advent of things like AWS Security Hub and AWS Control Tower (both currently in preview), security is moving up the stack.

Instead of having to keep track of implementing a bunch of seemingly unrelated tooling and rulesets, higher-level offerings are taking a lot of the error-prone guesswork out of maintaining an effective security posture.

Customers aren’t going to magically reprioritize security on their own. So it’s imperative that AWS continue to strive to meet them where they are.

What are the comparative advantages of being a cloud economist vs. a platypus keeper?

They’re more alike than you might expect. The cloud has sharp edges, but platypodes are venomous.

Of course, large bills are a given in either space.

You sometimes rename or reimagine AWS services. How should the Security Blog rebrand itself?

I think the Security Blog suffers from a common challenge in this space.

It talks about AWS’s security features, releases, and enhancements—that’s great! But who actually identifies as its target market?

Ideally, everyone should; security is everyone’s job, after all.

Unfortunately, no matter what user persona you envision, a majority of the content on the blog isn’t written for that user. This potentially makes it less likely that folks read the important posts that apply to their use cases, which, in turn, reinforces the false narrative that cloud security is both impossibly hard and should be someone else’s job entirely.

Ultimately, I’d like to see it split into different blogs that emphasize CISOs, engineers, and business tracks. It could possibly include an emergency “this is freaking important” feed.

And as to renaming it, here you go: you’d be doing a great disservice to your customers should you name it anything other than “AWS Klaxon.”

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Corey Quinn

Corey is the Cloud Economist at the Duckbill Group. Corey specializes in helping companies fix their AWS bills by making them smaller and less horrifying. He also hosts the AWS Morning Brief and Screaming in the Cloud podcasts and curates Last Week in AWS, a weekly newsletter summarizing the latest in AWS news, blogs, and tools, sprinkled with snark.

from AWS Security Blog

Singapore financial services: new resources for customer side of the shared responsibility model

Based on customer feedback, we’ve updated our AWS User Guide to Financial Services Regulations and Guidelines in Singapore whitepaper, as well as our AWS Monetary Authority of Singapore Technology Risk Management Guidelines (MAS TRM Guidelines) Workbook, which is available for download via AWS Artifact. Both resources now include considerations and best practices for the customer portion of the AWS Shared Responsibility Model.

The whitepaper provides considerations for financial institutions as they assess their responsibilities when using AWS services with regard to the MAS Outsourcing Guidelines, MAS TRM Guidelines, and Association of Banks in Singapore (ABS) Cloud Computing Implementation Guide.

The MAS TRM Workbook provides best practices for the customer portion of the AWS Shared Responsibility Model—that is, guidance on how you can manage security in the AWS Cloud. The guidance and best practices are sourced from the AWS Well-Architected Framework.

The Well-Architected Framework helps you understand the pros and cons of decisions you make while building systems on AWS. By using the Framework, you will learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It provides a way for you to consistently measure your architectures against best practices and identify areas for improvement. The process for reviewing an architecture is a constructive conversation about architectural decisions, and is not an audit mechanism. We believe that having well-architected systems greatly increases the likelihood of business success. For more information, see the AWS Well-Architected homepage.

The compliance controls provided by the workbook also continue to address the AWS side of the Shared Responsibility Model (security of the AWS Cloud).

View the updated whitepaper here, or download the updated AWS MAS TRM Guidelines Workbook via AWS Artifact.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Boyd author photo

Darran Boyd

Darran is a Principal Security Solutions Architect at AWS, responsible for helping remove security blockers for our customers and accelerating their journey to the AWS Cloud. Darran’s focus and passion is to deliver strategic security initiatives that unlock and enable our customers at scale across the financial services industry and beyond… Cx0 to <code>

from AWS Security Blog

AWS Security Profiles: Fritz Kunstler, Principal Consultant, Global Financial Services

In the weeks leading up to re:Inforce, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do in your current role?

I’ve been here for three years. My job is Security Transformation, which is a technical role in AWS Professional Services. It’s a fancy way of saying that I help customers build the confidence and technical capability to run their most sensitive workloads in the AWS Cloud. Much of my work lives at the intersection of DevOps and information security.

Broadly, how does the role of Consultant differ from positions like “Solutions Architect”?

Depth of engagement is one of the main differences. On many customer engagements, I’m involved for three months, or six months, or nine months. I have one customer now that I’ve been working with for more than a year. Consultants are also more integrated—I’m often embedded in the customer’s team, working side-by-side with their employees, which helps me learn about their culture and needs.

What’s your favorite part of your job?

There’s a lot I like about working at Amazon, but a couple of things stand out. First, the people I work with. Amazon culture—and the people who comprise that culture—are amazing. I’m constantly interacting with really smart people who are willing to go out of their way to make good things happen for customers. At companies I’ve worked for in the past, I’ve encountered individuals like this. But being surrounded by so many people who behave like this day in and day out is something special.

The customers that we have the privilege of working with at AWS also represent some very large brands. They serve many, many consumers all over the world. When I help these customers achieve their security and privacy goals, I’m doing something that has an impact on the world at large. I’ve worked in tech my entire career, in roles ranging from executive to coder, but I’ve never had a job that lets me make such a broad impact before. It’s really cool.

What does cloud security mean to you, personally?

I work in Global Financial Services, so my customers are the world’s biggest banks, investment firms, and independent software vendors. These are companies that we all rely on every day, and they put enormous effort into protecting their customers’ data and finances. As I work to support their efforts, I think about it in terms of my wife, kids, parents, siblings—really, my entire extended family. I’m working to protect us, to ensure that the online world we live in is a safer one.

In your opinion, what’s the biggest cloud security challenge facing the Financial Services industry right now?

How to transform the way they do security. It’s not only a technical challenge—it’s a human challenge. For FinServe customers to get the most value out of the cloud, a lot of people need to be willing to change their minds.

Highly regulated customers like financial services firms tend to have sophisticated security organizations already in place. They’ve been doing things effectively in a particular way for quite a while. It takes a lot of evidence to convince them to change their processes—and to convince them that those changes can drive increased value and performance while reducing risk. Security leaders tend to be a skeptical lot, and that has its place, but I think that we should strive to always be the most optimistic people in the room. The cloud lets people experiment with big ideas that may lead to big innovation, and security needs to enable that. If the security leader in the room is always saying no, then who’s going to say yes? That’s the essence of security transformation – developing capabilities that enable your organization to say yes.

What’s a trend you see currently happening in the Financial Services space that you’re excited about?

AWS has been working hard alongside some of our financial services customers for several years. Moving to the cloud is a big transition, and there’s been some FUD—some fear, uncertainty, and doubt—to work through, so not everyone has been able to adopt the cloud as quickly as they might’ve liked. But I feel we’re approaching an inflection point. I’m seeing increasing comfort, increasing awareness, and an increasingly trained workforce among my customers.

These changes, in conjunction with executive recognition that “the cloud” is not only worthwhile, but strategically significant to the business, may signal that we’re close to a breakthrough. These are firms that have the resources to make things happen when they’re ready. I’m optimistic that even the more conservative of our financial services customers will soon be taking advantage of AWS in a big way.

Five years from now, what changes do you think we’ll see across the Financial Services/Cloud Security landscape?

I think cloud adoption will continue to accelerate on the business side. I also expect to see the security orgs within these firms leverage the cloud more for their own workloads – in particular, to integrate AI and machine learning into security operations, and further left in the systems development lifecycle. Security teams still do a lot of manual work to analyze code, policies, logs, and so on. This is critical stuff, but it’s also very time consuming and much of it is ripe for automation. Skilled security practitioners are in high demand. They should be focused on high-value tasks that enable the business. Amazon GuardDuty is just one example of how security teams can use the cloud toward that end.

What’s one thing that people outside of Financial Services can learn from what’s happening in this industry?

As more and more Financial Services customers adopt AWS, I think that it becomes increasingly hard for leaders in other sectors to suggest that the cloud isn’t secure, reliable, or capable enough for any given use case. I love the quote from Capital One’s CIO about why they chose AWS.

You’re leading a re:Inforce session that focuses on “IAM strategy for financial services.” What are some of the unique considerations that the financial services industry faces when it comes to IAM?

Financial services firms and other highly regulated customers tend to invest much more into tools and processes to enforce least privilege and separation of duties, due to regulatory and compliance requirements. Traditional, centralized approaches to implementing those two principles don’t always work well in the cloud, where resources can be ephemeral. If your goal is to enable builders to experiment and fail fast, then it shouldn’t take weeks to get the approvals and access required for a proof-of-concept that can be built in two days.

AWS Identity and Access Management (IAM) capabilities have changed significantly in the past year. Those changes make it easier and safer than ever to do things like delegate administrative access to developers. But they aren’t the sort of high-profile announcement that you’d hear a keynote speaker talk about at re:Invent. So I think a lot of customers aren’t fully aware of them, or of what you can accomplish by combining them with automation and CI/CD techniques.

My talk will offer a strategy and examples for using those capabilities to provide the same level of security—if not a better level of security—without so many of the human reviews and approvals that often become bottlenecks.

What are you hoping that your audience will do differently as a result of attending your session?

I’d like them to investigate and holistically implement the handful of IAM capabilities that we’ll discuss during the session. I also hope that they’ll start working to delegate IAM responsibilities to developers and automate low-value human reviews of policy code. Finally, I think it’s critical to have CI/CD or other capabilities that enable rapid, reliable delivery of updates to IAM policies across many AWS accounts.

Can you talk about some of the recent enhancements to IAM that you’re excited about?

Permissions boundaries and IAM resource tagging are two features that are really powerful and that I don’t see widely used today. In some cases, customers may not even be aware of them. Another powerful and even more recent development is the introduction of conditional support to the service control policy mechanism provided by AWS Organizations.
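
To illustrate that last item: service control policies can now include condition keys. Here is a hedged sketch of an SCP that denies launching EC2 instances outside approved regions (the region list is illustrative):


{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyEc2OutsideApprovedRegions",
            "Effect": "Deny",
            "Action": "ec2:RunInstances",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {
                    "aws:RequestedRegion": [ "us-east-1", "eu-west-1" ]
                }
            }
        }
    ]
}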

You’re an avid photographer: What’s appealing to you about photography? What’s your favorite photo you’ve ever taken?

I’ve always struggled to express myself artistically. I take a very technical, analytical approach to life. I started programming computers when I was six. That’s how I think. Photography is sufficiently technical for me to wrap my brain around, which is how I got started. It took me a long time to begin to get comfortable with the creative aspects. But it fits well with my personality, while enabling expression that I’d never be able to find, say, as a painter.

I won’t claim to be an amazing photographer, but I’ve managed a few really good shots. The photo that comes to mind is one I captured in Bora Bora. There was a guy swimming through a picturesque, sheltered part of the ocean, where a reef stopped the big waves from coming in. This swimmer was towing a surfboard with his dog standing on it, and the sun was going down in the background. The colors were so vibrant it felt like a Disneyland attraction, and from a distance, you could just see a dog on a surfboard. Everything about that moment – where I was, how I was feeling, how surreal it all was, and the fact that I was on a honeymoon with my wife – made for a poignant photo.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author photo

Fritz Kunstler

Fritz is a Principal Consultant in AWS Professional Services, specializing in security. His first computer was a Commodore 64, which he learned to program in BASIC from the back of a magazine. Fritz has spent more than 20 years working in tech and has been an AWS customer since 2008. He is an avid photographer and is always one batch away from baking the perfect chocolate chip cookie.

from AWS Security Blog