
Definitely not an AWS Security Profile: Corey Quinn, a “Cloud Economist” who doesn’t work here


In the weeks leading up to re:Inforce, we’ll share conversations we’ve had with people who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


You don’t work at AWS, but you do have deep experience with AWS Services. Can you talk about how you developed that experience and the work that you do as a “Cloud Economist?”

I see those sarcastic scare-quotes!

I’ve been using AWS for about a decade in a variety of environments. It sounds facile, but it turns out that being kinda good at something starts with being abjectly awful at it first. Once you break things enough times, you start to learn how to wield them in more constructive ways.

I have a background in SRE-style work and finance. Blending those together into a made-up thing called “Cloud Economics” made sense and focused on a business problem that I can help solve. It starts with finding low-effort cost savings opportunities in customer accounts and quickly transitions into building out costing predictions, allocating spend—and (aligned with security!) building out workable models of cloud governance that don’t get in an engineer’s way.

This all required me to be both broad and deep across AWS’s offerings. Somewhere along the way, I became something of a go-to resource for the community. I don’t pretend to understand how it happened, but I’m incredibly grateful for the faith the broader community has placed in me.

You’re known for your snarky newsletter. When you meet AWS employees, how do they tend to react to you?

This may surprise you, but the most common answer by far is that they have no idea who I am.

It turns out AWS employs an awful lot of people, most of whom have better things to do than suffer my weekly snarky slings and arrows.

Among folks who do know who I am, the response has been nearly universal appreciation. It seems that the newsletter is received in the spirit in which I intend it—namely, that 90–95% of what AWS does is awesome. The gap between that and perfection offers boundless opportunities for constructive feedback—and also hilarity.

The funniest reaction I ever got was when someone at a Summit registration booth saw “Last Week in AWS” on my badge and assumed I was an employee serving out the end of his notice period.

“Senior RageQuit Engineer” at your service, I suppose.

You’ve been invited to present during the Leadership Session for the re:Inforce Foundation Track with Beetle. What have you got planned?

Ideally not leaving folks asking incredibly pointed questions about how the speaker selection process was mismanaged! If all goes well, I plan on being able to finish my talk without being dragged off the stage by AWS security!

I kid. But my theory of adult education revolves around needing to grab people’s attention before you can teach them something. For better or worse, my method for doing that has always been humor. While I’m cognizant that messaging to a large audience of security folks requires a delicate touch, I don’t subscribe to the idea that you can’t have fun with it as well.

In short: if nothing else, it’ll be entertaining!

What’s one thing that everyone should stop reading and go do RIGHT NOW to improve their security posture?

Easy. Log into the console of your organization’s master account and enable AWS CloudTrail for all regions and all accounts in your organization. Direct that trail to a locked-down S3 bucket in a completely separate, highly restricted account, and you’ve got a forensic log of all management operations across your estate.

Worst case, you’ll thank me later. Best case, you’ll never need it.
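
If you prefer to script it instead of clicking through the console, a minimal boto3 sketch might look like the following. Treat the trail and bucket names as placeholders, and run it from the organization’s master account (with the bucket living in that separate, restricted account):

    import boto3

    cloudtrail = boto3.client('cloudtrail')

    # Create one trail that records management events in every region and
    # every account in the organization (names below are illustrative).
    trail = cloudtrail.create_trail(
        Name='org-wide-trail',
        S3BucketName='example-restricted-logging-bucket',
        IsMultiRegionTrail=True,
        IsOrganizationTrail=True,
    )

    # Trails don't record anything until logging is started
    cloudtrail.start_logging(Name=trail['TrailARN'])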

It’s important, so what’s another security thing everyone should do?

Log in to your AWS accounts right now and update your security contact to your ops folks. It’s not used for marketing; it’s a point of contact for important announcements.

If you’re like many rapid-growth startups, your account is probably pointing to your founder’s personal email address—which means critical account notices are getting lost among Amazon.com sock purchase receipts.

That is not what being “SOC-compliant” means.
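
For teams that want to script this across many accounts, the Account Management API exposes alternate contacts. A hedged boto3 sketch, assuming your SDK version includes the account client and using placeholder contact details:

    import boto3

    account = boto3.client('account')

    # Point the security contact at an operations alias, not a founder's inbox
    account.put_alternate_contact(
        AlternateContactType='SECURITY',
        Name='Security Operations',
        Title='Security Team',
        EmailAddress='security-ops@example.com',
        PhoneNumber='+1-555-0100',
    )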

From a security perspective, what recent AWS release are you most excited about?

It was largely unheralded, but I was thrilled to see AWS Systems Manager Parameter Store (it’s a great service, though the name could use some work) receive higher API rate limits; it went from 40 to 1,000 requests per second.

This is great for concurrent workloads and makes it likelier that people will manage secrets properly without having to roll their own.

Yes, I know that AWS Secrets Manager is designed around secrets, but KMS-encrypted parameters in Parameter Store also get the job done. If you keep pushing I’ll go back to using Amazon Route 53 TXT records as my secrets database… (Just kidding. Please don’t do this.)
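
For anyone curious what that looks like in practice, here is a rough boto3 sketch of storing and reading a KMS-encrypted SecureString parameter; the parameter name and value are made up, and omitting KeyId uses the account’s default AWS-managed key:

    import boto3

    ssm = boto3.client('ssm')

    # Store a secret as a KMS-encrypted SecureString parameter
    ssm.put_parameter(
        Name='/prod/app/db-password',
        Value='correct-horse-battery-staple',
        Type='SecureString',
        Overwrite=True,
    )

    # Retrieve and decrypt it at runtime
    param = ssm.get_parameter(Name='/prod/app/db-password', WithDecryption=True)
    db_password = param['Parameter']['Value']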

In your opinion, what’s the biggest challenge facing cloud security right now?

The same thing that’s always been the biggest challenge in security: getting people to care before a disaster happens.

We see the same thing in cloud economics. People care about monitoring and controlling cloud spend right after they weren’t being diligent and wound up with an unpleasant surprise.

Thankfully, with an unexpectedly large bill, you have a number of options. But you don’t get a do-over with a data breach.

The time to care is now—particularly if you don’t think it’s a focus area for you. One thing that excites me about re:Inforce is that it gives an opportunity to reinforce that viewpoint.

Five years from now, what changes do you think we’ll see across the cloud security landscape?

I think we’re already seeing it now. With the advent of things like AWS Security Hub and AWS Control Tower (both currently in preview), security is moving up the stack.

Instead of having to keep track of implementing a bunch of seemingly unrelated tooling and rulesets, higher-level offerings are taking a lot of the error-prone guesswork out of maintaining an effective security posture.

Customers aren’t going to magically reprioritize security on their own. So it’s imperative that AWS continue to strive to meet them where they are.

What are the comparative advantages of being a cloud economist vs. a platypus keeper?

They’re more alike than you might expect. The cloud has sharp edges, but platypodes are venomous.

Of course, large bills are a given in either space.

You sometimes rename or reimagine AWS services. How should the Security Blog rebrand itself?

I think the Security Blog suffers from a common challenge in this space.

It talks about AWS’s security features, releases, and enhancements—that’s great! But who actually identifies as its target market?

Ideally, everyone should; security is everyone’s job, after all.

Unfortunately, no matter what user persona you envision, a majority of the content on the blog isn’t written for that user. This potentially makes it less likely that folks read the important posts that apply to their use cases, which, in turn, reinforces the false narrative that cloud security is both impossibly hard and should be someone else’s job entirely.

Ultimately, I’d like to see it split into different blogs that emphasize CISOs, engineers, and business tracks. It could possibly include an emergency “this is freaking important” feed.

And as to renaming it, here you go: you’d be doing a great disservice to your customers should you name it anything other than “AWS Klaxon.”

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Corey Quinn

Corey is the Cloud Economist at the Duckbill Group. Corey specializes in helping companies fix their AWS bills by making them smaller and less horrifying. He also hosts the AWS Morning Brief and Screaming in the Cloud podcasts and curates Last Week in AWS, a weekly newsletter summarizing the latest in AWS news, blogs, and tools, sprinkled with snark.

from AWS Security Blog

Singapore financial services: new resources for customer side of the shared responsibility model


Based on customer feedback, we’ve updated our AWS User Guide to Financial Services Regulations and Guidelines in Singapore whitepaper, as well as our AWS Monetary Authority of Singapore Technology Risk Management Guidelines (MAS TRM Guidelines) Workbook, which is available for download via AWS Artifact. Both resources now include considerations and best practices for the customer portion of the AWS Shared Responsibility Model.

The whitepaper provides considerations for financial institutions as they assess their responsibilities when using AWS services with regard to the MAS Outsourcing Guidelines, MAS TRM Guidelines, and Association of Banks in Singapore (ABS) Cloud Computing Implementation Guide.

The MAS TRM Workbook provides best practices for the customer portion of the AWS Shared Responsibility Model—that is, guidance on how you can manage security in the AWS Cloud. The guidance and best practices are sourced from the AWS Well-Architected Framework.

The Well-Architected Framework helps you understand the pros and cons of decisions you make while building systems on AWS. By using the Framework, you will learn architectural best practices for designing and operating reliable, secure, efficient, and cost-effective systems in the cloud. It provides a way for you to consistently measure your architectures against best practices and identify areas for improvement. The process for reviewing an architecture is a constructive conversation about architectural decisions, and is not an audit mechanism. We believe that having well-architected systems greatly increases the likelihood of business success. For more information, see the AWS Well-Architected homepage.

The compliance controls provided by the workbook also continue to address the AWS side of the Shared Responsibility Model (security of the AWS Cloud).

View the updated whitepaper here, or download the updated AWS MAS TRM Guidelines Workbook via AWS Artifact.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Darran Boyd

Darran is a Principal Security Solutions Architect at AWS, responsible for helping remove security blockers for our customers and accelerating their journey to the AWS Cloud. Darran’s focus and passion is to deliver strategic security initiatives that unlock and enable our customers at scale across the financial services industry and beyond… Cx0 to <code>

from AWS Security Blog

AWS Security Profiles: Fritz Kunstler, Principal Consultant, Global Financial Services



In the weeks leading up to re:Inforce, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do in your current role?

I’ve been here for three years. My job is Security Transformation, which is a technical role in AWS Professional Services. It’s a fancy way of saying that I help customers build the confidence and technical capability to run their most sensitive workloads in the AWS Cloud. Much of my work lives at the intersection of DevOps and information security.

Broadly, how does the role of Consultant differ from positions like “Solutions Architect”?

Depth of engagement is one of the main differences. On many customer engagements, I’m involved for three months, or six months, or nine months. I have one customer now that I’ve been working with for more than a year. Consultants are also more integrated—I’m often embedded in the customer’s team, working side-by-side with their employees, which helps me learn about their culture and needs.

What’s your favorite part of your job?

There’s a lot I like about working at Amazon, but a couple of things stand out. First, the people I work with. Amazon culture—and the people who comprise that culture—are amazing. I’m constantly interacting with really smart people who are willing to go out of their way to make good things happen for customers. At companies I’ve worked for in the past, I’ve encountered individuals like this. But being surrounded by so many people who behave like this day in and day out is something special.

The customers that we have the privilege of working with at AWS also represent some very large brands. They serve many, many consumers all over the world. When I help these customers achieve their security and privacy goals, I’m doing something that has an impact on the world at large. I’ve worked in tech my entire career, in roles ranging from executive to coder, but I’ve never had a job that lets me make such a broad impact before. It’s really cool.

What does cloud security mean to you, personally?

I work in Global Financial Services, so my customers are the world’s biggest banks, investment firms, and independent software vendors. These are companies that we all rely on every day, and they put enormous effort into protecting their customers’ data and finances. As I work to support their efforts, I think about it in terms of my wife, kids, parents, siblings—really, my entire extended family. I’m working to protect us, to ensure that the online world we live in is a safer one.

In your opinion, what’s the biggest cloud security challenge facing the Financial Services industry right now?

How to transform the way they do security. It’s not only a technical challenge—it’s a human challenge. For FinServ customers to get the most value out of the cloud, a lot of people need to be willing to change their minds.

Highly regulated customers like financial services firms tend to have sophisticated security organizations already in place. They’ve been doing things effectively in a particular way for quite a while. It takes a lot of evidence to convince them to change their processes—and to convince them that those changes can drive increased value and performance while reducing risk. Security leaders tend to be a skeptical lot, and that has its place, but I think that we should strive to always be the most optimistic people in the room. The cloud lets people experiment with big ideas that may lead to big innovation, and security needs to enable that. If the security leader in the room is always saying no, then who’s going to say yes? That’s the essence of security transformation – developing capabilities that enable your organization to say yes.

What’s a trend you see currently happening in the Financial Services space that you’re excited about?

AWS has been working hard alongside some of our financial services customers for several years. Moving to the cloud is a big transition, and there’s been some FUD—some fear, uncertainty, and doubt—to work through, so not everyone has been able to adopt the cloud as quickly as they might’ve liked. But I feel we’re approaching an inflection point. I’m seeing increasing comfort, increasing awareness, and an increasingly trained workforce among my customers.

These changes, in conjunction with executive recognition that “the cloud” is not only worthwhile, but strategically significant to the business, may signal that we’re close to a breakthrough. These are firms that have the resources to make things happen when they’re ready. I’m optimistic that even the more conservative of our financial services customers will soon be taking advantage of AWS in a big way.

Five years from now, what changes do you think we’ll see across the Financial Services/Cloud Security landscape?

I think cloud adoption will continue to accelerate on the business side. I also expect to see the security orgs within these firms leverage the cloud more for their own workloads – in particular, to integrate AI and machine learning into security operations, and further left in the systems development lifecycle. Security teams still do a lot of manual work to analyze code, policies, logs, and so on. This is critical stuff, but it’s also very time consuming and much of it is ripe for automation. Skilled security practitioners are in high demand. They should be focused on high-value tasks that enable the business. Amazon GuardDuty is just one example of how security teams can use the cloud toward that end.

What’s one thing that people outside of Financial Services can learn from what’s happening in this industry?

As more and more Financial Services customers adopt AWS, I think that it becomes increasingly hard for leaders in other sectors to suggest that the cloud isn’t secure, reliable, or capable enough for any given use case. I love the quote from Capital One’s CIO about why they chose AWS.

You’re leading a re:Inforce session that focuses on “IAM strategy for financial services.” What are some of the unique considerations that the financial services industry faces when it comes to IAM?

Financial services firms and other highly regulated customers tend to invest much more into tools and processes to enforce least privilege and separation of duties, due to regulatory and compliance requirements. Traditional, centralized approaches to implementing those two principles don’t always work well in the cloud, where resources can be ephemeral. If your goal is to enable builders to experiment and fail fast, then it shouldn’t take weeks to get the approvals and access required for a proof-of-concept that can be built in two days.

AWS Identity and Access Management (IAM) capabilities have changed significantly in the past year. Those changes make it easier and safer than ever to do things like delegate administrative access to developers. But they aren’t the sort of high-profile announcement that you’d hear a keynote speaker talk about at re:Invent. So I think a lot of customers aren’t fully aware of them, or of what you can accomplish by combining them with automation and CI/CD techniques.

My talk will offer a strategy and examples for using those capabilities to provide the same level of security—if not a better level of security—without so many of the human reviews and approvals that often become bottlenecks.

What are you hoping that your audience will do differently as a result of attending your session?

I’d like them to investigate and holistically implement the handful of IAM capabilities that we’ll discuss during the session. I also hope that they’ll start working to delegate IAM responsibilities to developers and automate low-value human reviews of policy code. Finally, I think it’s critical to have CI/CD or other capabilities that enable rapid, reliable delivery of updates to IAM policies across many AWS accounts.

Can you talk about some of the recent enhancements to IAM that you’re excited about?

Permissions boundaries and IAM resource tagging are two features that are really powerful and that I don’t see widely used today. In some cases, customers may not even be aware of them. Another powerful and even more recent development is the introduction of conditional support to the service control policy mechanism provided by AWS Organizations.
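
To make the permissions boundary idea concrete, here is a hedged boto3 sketch. The policy contents and names are purely illustrative, not a recommendation; the point is that a role created with a boundary can never exceed it, no matter what identity policies are attached later:

    import json
    import boto3

    iam = boto3.client('iam')

    # A hypothetical boundary policy that caps what delegated developers can grant
    boundary = iam.create_policy(
        PolicyName='developer-permissions-boundary',
        PolicyDocument=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Action": ["s3:*", "dynamodb:*", "lambda:*", "logs:*"],
                "Resource": "*"
            }]
        }),
    )

    # A role a developer might create themselves; the boundary caps its effective permissions
    iam.create_role(
        RoleName='dev-sandbox-role',
        AssumeRolePolicyDocument=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Effect": "Allow",
                "Principal": {"Service": "lambda.amazonaws.com"},
                "Action": "sts:AssumeRole"
            }]
        }),
        PermissionsBoundary=boundary['Policy']['Arn'],
    )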

You’re an avid photographer: What’s appealing to you about photography? What’s your favorite photo you’ve ever taken?

I’ve always struggled to express myself artistically. I take a very technical, analytical approach to life. I started programming computers when I was six. That’s how I think. Photography is sufficiently technical for me to wrap my brain around, which is how I got started. It took me a long time to begin to get comfortable with the creative aspects. But it fits well with my personality, while enabling expression that I’d never be able to find, say, as a painter.

I won’t claim to be an amazing photographer, but I’ve managed a few really good shots. The photo that comes to mind is one I captured in Bora Bora. There was a guy swimming through a picturesque, sheltered part of the ocean, where a reef stopped the big waves from coming in. This swimmer was towing a surfboard with his dog standing on it, and the sun was going down in the background. The colors were so vibrant it felt like a Disneyland attraction, and from a distance, you could just see a dog on a surfboard. Everything about that moment – where I was, how I was feeling, how surreal it all was, and the fact that I was on a honeymoon with my wife – made for a poignant photo.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Fritz Kunstler

Fritz is a Principal Consultant in AWS Professional Services, specializing in security. His first computer was a Commodore 64, which he learned to program in BASIC from the back of a magazine. Fritz has spent more than 20 years working in tech and has been an AWS customer since 2008. He is an avid photographer and is always one batch away from baking the perfect chocolate chip cookie.

from AWS Security Blog

How to securely provide database credentials to Lambda functions by using AWS Secrets Manager


As a solutions architect at AWS, I often assist customers in architecting and deploying business applications using APIs and microservices that rely on serverless services such as AWS Lambda and database services such as Amazon Relational Database Service (Amazon RDS). Customers can take advantage of these fully managed AWS services to unburden their teams from infrastructure operations and other undifferentiated heavy lifting, such as patching, software maintenance, and capacity planning.

In this blog post, I’ll show you how to use AWS Secrets Manager to secure your database credentials and send them to Lambda functions that will use them to connect and query the backend database service Amazon RDS—without hardcoding the secrets in code or passing them through environment variables. This approach will help you secure last-mile secrets and protect your backend databases. Long-lived credentials need to be managed and regularly rotated to keep access to critical systems secure, so it’s a security best practice to periodically reset your passwords. Manually changing the passwords would be cumbersome, but AWS Secrets Manager helps by managing and rotating the RDS database passwords.

Solution overview

This is sample code: you’ll use an AWS CloudFormation template to deploy the following components to test the API endpoint from your browser:

  • An RDS MySQL database instance on a db.t2.micro instance
  • Two Lambda functions with necessary IAM roles and IAM policies, including access to AWS Secrets Manager:
    • LambdaRDSCFNInit: This Lambda function will execute immediately after the CloudFormation stack creation. It will create an “Employees” table in the database, where it will insert three sample records.
    • LambdaRDSTest: This function will query the Employees table and return the record count in an HTML string format
  • RESTful API with “GET” method on AWS API Gateway

Here’s the high level setup of the AWS services that will be created from the CloudFormation stack deployment:
 

Figure 1: Architecture diagram

  1. Clients call the RESTful API hosted on AWS API Gateway
  2. The API Gateway executes the Lambda function
  3. The Lambda function retrieves the database secrets using the Secrets Manager API
  4. The Lambda function connects to the RDS database using database secrets from Secrets Manager and returns the query results

You can access the source code for the sample used in this post here: https://github.com/awslabs/automating-governance-sample/tree/master/AWS-SecretsManager-Lambda-RDS-blog.

Deploying the sample solution

Set up the sample deployment by selecting the Launch Stack button below. If you haven’t logged into your AWS account, follow the prompts to log in.

By default, the stack will be deployed in the us-east-1 region. If you want to deploy this stack in any other region, download the code from the above GitHub link, place the Lambda code zip file in a region-specific S3 bucket and make the necessary changes in the CloudFormation template to point to the right S3 bucket. (Please refer to the AWS CloudFormation User Guide for additional details on how to create stacks using the AWS CloudFormation console.)
 

Next, follow these steps to execute the stack:

  1. Leave the default location for the template and select Next.
     
    Figure 2: Keep the default location for the template

  2. On the Specify Details page, you’ll see the parameters pre-populated. These parameters include the name of the database and the database user name. Select Next on this screen.
     
    Figure 3: Parameters on the “Specify Details” page

  3. On the Options screen, select the Next button.
  4. On the Review screen, select both check boxes, then select the Create Change Set button:
     
    Figure 4: Select the check boxes and “Create Change Set”

  5. After the change set creation is completed, choose the Execute button to launch the stack.
  6. Stack creation will take between 10 and 15 minutes. After the stack is created successfully, select the Outputs tab of the stack, then select the link.
     
    Figure 5: Select the link on the “Outputs” tab

    This action will trigger the code in the Lambda function, which will query the “Employees” table in the MySQL database and will return the results count back to the API. You’ll see the following screen as output from the RESTful API endpoint:
     

    Figure 6: Output from the RESTful API endpoint

At this point, you’ve successfully deployed and tested the API endpoint with a backend Lambda function and RDS resources. The Lambda function is able to successfully query the MySQL RDS database and is able to return the results through the API endpoint.

What’s happening in the background?

The CloudFormation stack deployed a MySQL RDS database with a randomly generated password using a secret resource. Now that the secret resource with randomly generated password has been created, the CloudFormation stack will use dynamic reference to resolve the value of the password from Secrets Manager in order to create the RDS instance resource. Dynamic references provide a compact, powerful way for you to specify external values that are stored and managed in other AWS services, such as Secrets Manager. The dynamic reference guarantees that CloudFormation will not log or persist the resolved value, keeping the database password safe. The CloudFormation template also creates a Lambda function to do automatic rotation of the password for the MySQL RDS database every 30 days. Native credential rotation can improve security posture, as it eliminates the need to manually handle database passwords through the lifecycle process.

Below is the CloudFormation code that covers these details:


#This is a Secret resource with a randomly generated password in its SecretString JSON.
MyRDSInstanceRotationSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
        Description: 'This is my rds instance secret'
        GenerateSecretString:
            SecretStringTemplate: !Sub '{"username": "${RDSUserName}"}'
            GenerateStringKey: 'password'
            PasswordLength: 16
            ExcludeCharacters: '"@/\'
        Tags:
        -
            Key: AppName
            Value: MyApp

#This is a RDS instance resource. Its master username and password use dynamic references to resolve values from
#SecretsManager. The dynamic reference guarantees that CloudFormation will not log or persist the resolved value
#We use a ref to the Secret resource logical id in order to construct the dynamic reference, since the Secret name is being
#generated by CloudFormation
MyDBInstance2:
    Type: AWS::RDS::DBInstance
    Properties:
        AllocatedStorage: 20
        DBInstanceClass: db.t2.micro
        DBName: !Ref RDSDBName
        Engine: mysql
        MasterUsername: !Ref RDSUserName
        MasterUserPassword: !Join ['', ['{{resolve:secretsmanager:', !Ref MyRDSInstanceRotationSecret, ':SecretString:password}}']]
        MultiAZ: False
        PubliclyAccessible: False
        StorageType: gp2
        DBSubnetGroupName: !Ref myDBSubnetGroup
        VPCSecurityGroups:
        - !Ref RDSSecurityGroup
        BackupRetentionPeriod: 0
        DBInstanceIdentifier: 'rotation-instance'

#This is a SecretTargetAttachment resource which updates the referenced Secret resource with properties about
#the referenced RDS instance
SecretRDSInstanceAttachment:
    Type: AWS::SecretsManager::SecretTargetAttachment
    Properties:
        SecretId: !Ref MyRDSInstanceRotationSecret
        TargetId: !Ref MyDBInstance2
        TargetType: AWS::RDS::DBInstance
#This is a RotationSchedule resource. It configures rotation of password for the referenced secret using a rotation lambda
#The first rotation happens at resource creation time, with subsequent rotations scheduled according to the rotation rules
#We explicitly depend on the SecretTargetAttachment resource being created to ensure that the secret contains all the
#information necessary for rotation to succeed
MySecretRotationSchedule:
    Type: AWS::SecretsManager::RotationSchedule
    DependsOn: SecretRDSInstanceAttachment
    Properties:
        SecretId: !Ref MyRDSInstanceRotationSecret
        RotationLambdaARN: !GetAtt MyRotationLambda.Arn
        RotationRules:
            AutomaticallyAfterDays: 30

#This is a lambda Function resource. We will use this lambda to rotate secrets
#For details about rotation lambdas, see https://docs.aws.amazon.com/secretsmanager/latest/userguide/rotating-secrets.html
#The below example assumes that the lambda code has been uploaded to a S3 bucket, and that it will rotate a mysql database password
MyRotationLambda:
    Type: AWS::Serverless::Function
    Properties:
        Runtime: python2.7
        Role: !GetAtt MyLambdaExecutionRole.Arn
        Handler: mysql_secret_rotation.lambda_handler
        Description: 'This is a lambda to rotate MySql user passwd'
        FunctionName: 'cfn-rotation-lambda'
        CodeUri: 's3://devsecopsblog/code.zip'
        Environment:
            Variables:
                SECRETS_MANAGER_ENDPOINT: !Sub 'https://secretsmanager.${AWS::Region}.amazonaws.com'

Verifying the solution

To be certain that everything is set up properly, you can look at the Lambda code that’s querying the database table by following the steps below:

  1. Go to the AWS Lambda service page
  2. From the list of Lambda functions, click on the function with the name scm2-LambdaRDSTest-…
  3. You can see the environment variables at the bottom of the Lambda Configuration details screen. Notice that there should be no database password supplied as part of these environment variables:
     
    Figure 7: Environment variables

    
        import sys
        import pymysql
        import boto3
        import botocore
        import json
        import random
        import time
        import os
        import base64
        from botocore.exceptions import ClientError
        
        # rds settings
        rds_host = os.environ['RDS_HOST']
        name = os.environ['RDS_USERNAME']
        db_name = os.environ['RDS_DB_NAME']
        helperFunctionARN = os.environ['HELPER_FUNCTION_ARN']
        
        secret_name = os.environ['SECRET_NAME']
        my_session = boto3.session.Session()
        region_name = my_session.region_name
        conn = None
        
        # Get the service resource.
        lambdaClient = boto3.client('lambda')
        
        
        def invokeConnCountManager(incrementCounter):
            # return True
            response = lambdaClient.invoke(
                FunctionName=helperFunctionARN,
                InvocationType='RequestResponse',
                Payload='{"incrementCounter":' + str.lower(str(incrementCounter)) + ',"RDBMSName": "Prod_MySQL"}'
            )
            retVal = response['Payload']
            retVal1 = retVal.read()
            return retVal1
        
        
        def openConnection():
            print("In Open connection")
            global conn
            password = "None"
            # Create a Secrets Manager client
            session = boto3.session.Session()
            client = session.client(
                service_name='secretsmanager',
                region_name=region_name
            )
            
            # In this sample we only handle the specific exceptions for the 'GetSecretValue' API.
            # See https://docs.aws.amazon.com/secretsmanager/latest/apireference/API_GetSecretValue.html
            # We rethrow the exception by default.
            
            try:
                get_secret_value_response = client.get_secret_value(
                    SecretId=secret_name
                )
                print(get_secret_value_response)
            except ClientError as e:
                print(e)
                if e.response['Error']['Code'] == 'DecryptionFailureException':
                    # Secrets Manager can't decrypt the protected secret text using the provided KMS key.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
                elif e.response['Error']['Code'] == 'InternalServiceErrorException':
                    # An error occurred on the server side.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
                elif e.response['Error']['Code'] == 'InvalidParameterException':
                    # You provided an invalid value for a parameter.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
                elif e.response['Error']['Code'] == 'InvalidRequestException':
                    # You provided a parameter value that is not valid for the current state of the resource.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
                elif e.response['Error']['Code'] == 'ResourceNotFoundException':
                    # We can't find the resource that you asked for.
                    # Deal with the exception here, and/or rethrow at your discretion.
                    raise e
            else:
                # Decrypts secret using the associated KMS CMK.
                # Depending on whether the secret is a string or binary, one of these fields will be populated.
                if 'SecretString' in get_secret_value_response:
                    secret = get_secret_value_response['SecretString']
                    j = json.loads(secret)
                    password = j['password']
                else:
                    # SecretBinary is base64-encoded; decode it and parse the JSON payload
                    decoded_binary_secret = base64.b64decode(get_secret_value_response['SecretBinary'])
                    password = json.loads(decoded_binary_secret)['password']
            
            try:
                if(conn is None):
                    conn = pymysql.connect(
                        rds_host, user=name, passwd=password, db=db_name, connect_timeout=5)
                elif (not conn.open):
                    # print(conn.open)
                    conn = pymysql.connect(
                        rds_host, user=name, passwd=password, db=db_name, connect_timeout=5)
        
            except Exception as e:
                print (e)
                print("ERROR: Unexpected error: Could not connect to MySql instance.")
                raise e
        
        
        def lambda_handler(event, context):
            if invokeConnCountManager(True) == "false":
                print ("Not enough Connections available.")
                return False
        
            item_count = 0
            try:
                openConnection()
                # Introducing artificial random delay to mimic actual DB query time. Remove this code for actual use.
                time.sleep(random.randint(1, 3))
                with conn.cursor() as cur:
                    cur.execute("select * from Employees")
                    for row in cur:
                        item_count += 1
                        print(row)
                        # print(row)
            except Exception as e:
                # Error while opening connection or processing
                print(e)
            finally:
                print("Closing Connection")
                if(conn is not None and conn.open):
                    conn.close()
                invokeConnCountManager(False)
        
            content =  "Selected %d items from RDS MySQL table" % (item_count)
            response = {
                "statusCode": 200,
                "body": content,
                "headers": {
                    'Content-Type': 'text/html',
                }
            }
            return response        
        

In the AWS Secrets Manager console, you can also look at the new secret that was created by the CloudFormation execution by following the steps below:

  1. Go to the AWS Secrets Manager console with appropriate IAM permissions
  2. From the list of secrets, click on the latest secret with the name MyRDSInstanceRotationSecret-…
  3. You will see the secret details and rotation information on the screen, as shown in the following screenshot:
     
    Figure 8: Secret details and rotation information

Conclusion

In this post, I showed you how to manage database secrets using AWS Secrets Manager and how to leverage Secrets Manager’s API to retrieve the secrets into a Lambda execution environment to improve database security and protect sensitive data. Secrets Manager helps you protect access to your applications, services, and IT resources without the upfront investment and ongoing maintenance costs of operating your own secrets management infrastructure. To get started, visit the Secrets Manager console. To learn more, visit Secrets Manager documentation.

If you have feedback about this post, add it to the Comments section below. If you have questions about implementing the example used in this post, open a thread on the Secrets Manager Forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Ramesh Adabala

Ramesh is a Solution Architect on the Southeast Enterprise Solution Architecture team at AWS.

from AWS Security Blog

AWS Security Profiles: Matthew Campagna, Sr. Principal Security Engineer, Cryptography


In the weeks leading up to re:Inforce, we’ll share conversations we’ve had with people at AWS who will be presenting at the event so you can learn more about them and some of the interesting work that they’re doing.


How long have you been at AWS, and what do you do in your current role?

I’ve been with AWS for almost 6 years. I joined as a Principal Security Engineer, but my focus has always been cryptography. I’m a cryptographer. At the start of my Amazon career, I worked on designing our AWS Key Management Service (KMS). Since then, I’ve gotten involved in other projects—working alongside a group of volunteers in the AWS Cryptography Bar Raisers group.

Today, the Crypto Bar Raisers are a dedicated portion of my team that work with any AWS team who’s designed a novel application of cryptography. The underlying cryptographic mechanisms aren’t novel, but the engineer has figured out how to apply them in a non-standard way, often to solve a specific problem for a customer. We provide these AWS employees with a deep analysis of their applications to ensure that the applications meet our high cryptographic security bar.

How do you explain your job to non-tech friends?

I usually tell people that I’m a mathematician. Sometimes I’ll explain that I’m a cryptographer. If anyone wants detail beyond that, I say I design security protocols or application uses of cryptography.

What’s the most challenging part of your job?

I’m convinced the most challenging part of any job is managing email.

Apart from that, within AWS there’s lots of demand for making sure we’re doing security right. The people who want us to review their projects come to us via many channels. They might already be aware of the Crypto Bar Raisers, and they want our advice. Or, one of our internal AWS teams—often, one of the teams who perform security reviews of our services—will alert the project owner that they’ve deviated from the normal crypto engineering path, and the team will wind up working with us. Our requests tend to come from smart, enthusiastic engineers who are trying to deliver customer value as fast as possible. Our ability to attract smart, enthusiastic engineers has served us quite well as a company. Our engineering strength lies in our ability to rapidly design, develop, and deploy features for our customers.

The challenge of this approach is that it’s not the fastest way to achieve a secure system. That is, you might end up designing things before you can demonstrate that they’re secure. Cryptographers design in the opposite way: We consider “ability to demonstrate security” in advance, as a design consideration. This approach can seem unusual to a team that has already designed something—they’re eager to build the thing and get it out the door. There’s a healthy tension between the need to deliver the right level of security and the need to deliver solutions as quickly as possible. It can make our day-to-day work challenging, but the end result tends to be better for customers.

Amazon’s s2n implementation of the Transport Layer Security protocol was a pretty big deal when it was announced in 2015. Can you summarize why it was a big deal, and how you were involved?

It was a big deal, and it was a big decision for AWS to take ownership of the TLS libraries that we use. The decision was predicated on the belief we could do a better job than other open source TLS packages by providing a smaller, simpler—and inherently more secure—version of TLS that would raise the security bar for us and for our customers.

To do this, the Automated Reasoning Group demonstrated the formal correctness of the code to meet the TLS specification. For the most part, my involvement in the initial release was limited to scenarios where the Amazon contributors did their own cryptographic implementations within TLS (that is, within the existing s2n library), which was essentially like any other Crypto Bar Raiser review for me.

Currently, my team and I are working on additional developments to s2n—we’re deploying something called “quantum-safe cryptography” into it.

You’re leading a session at re:Inforce that provides “an introduction to post-quantum cryptography.” How do you explain post-quantum cryptography to a beginner?

Post-quantum cryptography, or quantum-safe cryptography, refers to cryptographic techniques that remain secure even against the power of a large-scale quantum computer.

A quantum computer would be fundamentally different from the computers we use today. Today, we build cryptographic systems based on certain mathematical assumptions—that certain cryptographic ciphers cannot be cracked without an immense, almost impossible amount of computing power. In particular, a basic assumption that cryptographers build upon today is that the discrete logarithm problem, or integer factorization, is hard. We take it for granted that this type of problem is fundamentally difficult to solve. It’s not a task that can be completed quickly or easily.

Well, it turns out that if you had the computing power of a large-scale quantum computer, those assumptions would be incorrect. If you could figure out how to build a quantum computer, it could unravel the security aspects of the TLS sessions we create today, which are built upon those assumptions.

The reason that we take this “if” so seriously is that, as a company, we have data that we know we want to keep secure. The probability of such a quantum computer coming into existence continues to rise. Eventually, the probability that a quantum computer exists during the lifetime of the sensitivity of the data we are protecting will rise above the risk threshold that we’re willing to accept.

It can take 10 to 15 years for the cryptographic community to study new algorithms well enough to have faith in the core assumptions about how they work. Additionally, it takes time to establish new standards and build high quality and certified implementations of these algorithms, so we’re investing now.

I research post-quantum cryptographic techniques, which means that I’m basically looking for quantum-safe techniques that can be designed to run on the classical computers that we use now. Identifying these techniques lets us implement quantum-safe security well in advance of a quantum computer. We’ll remain secure even if someone figures out how to create one.

We aren’t doing this alone. We’re working within the larger cryptographic community and participating in the NIST Post-Quantum Cryptography Standardization process.

What do you hope that people will do differently as a result of attending your re:Inforce session?

First, I hope people download and use s2n in any form. s2n is a nice, simple Transport Layer Security (TLS) implementation that reduces overall risk for people who are currently using TLS.

In addition, I’d encourage engineers to try the post-quantum version of s2n and see how their applications work with it. Post-quantum cryptographic schemes are different. They have a slightly different “shape,” or usage. They either take up more bandwidth, which will change your application’s latency and bandwidth use, or they require more computational power, which will affect battery life and latency.

It’s good to understand how this increase in bandwidth, latency, and power consumption will impact your application and your user experience. This lets you make proactive choices, like reducing the frequency of full TLS handshakes that your application has to complete, or whatever the equivalent would be for the security protocol that you’re currently using.

What implications do post-quantum s2n developments have for the field of cloud security as a whole?

My team is working in the public domain as much as possible. We want to raise the cryptography bar not just for AWS, but for everyone. In addition to the post-quantum extension to s2n that we’re writing, we’re writing specifications. This means that any interested party can inspect and analyze precisely how we’re doing things. If they want to understand nuances of TLS 1.2 or 1.3, they can look at those specifications, and see how these post-quantum extensions apply to those standards.

We hope that documenting our work in the public space, where others can build interoperable systems, will raise the bar for all cloud providers, so that everyone is building upon a more secure foundation.

What resources would you recommend to someone interested in learning more about s2n or post-quantum cryptography?

For s2n, we do a lot of our communication through Security Blog posts. There’s also the AWS GitHub repository, which houses our source code. It’s available to anyone who wants to look at it, use it, or become a contributor. Any issues that arise are captured in issue pages there.

For quantum-safe crypto, a fairly influential paper was released in 2015. It’s the European Telecommunications Standards Institute’s Quantum-Safe Whitepaper (PDF file). It provides a gentle introduction to quantum computing and the impact it has on information systems that we’re using today to secure our information. It sets forth all of the reasons we need to invest now. It helped spur a shift in thinking about post-quantum encryption, from “research project” to “business need.”

There are certainly resources that allow you to go a lot deeper. There’s a highly technical conference called PQ Crypto that’s geared toward cryptographers and focuses on post-quantum crypto. For resources ranging from executive to developer level, there’s a quantum-safe cryptography workshop organized every year by the Institute for Quantum Computing at the University of Waterloo (IQC) and the European Telecommunications Standards Institute (ETSI). AWS is partnering with ETSI/IQC to host the 2019 workshop in Seattle.

What’s one fact about cryptography that you think everyone—even laypeople—should be aware of?

People sometimes speak about cryptography like it’s a fact or a mathematical science. And it’s not, precisely. Cryptography doesn’t guarantee outcomes. It deals with probabilities based upon core assumptions. Cryptographic engineering requires you to understand what those assumptions are and closely monitor any challenges to them.

In the business world, if you want to keep something secret or confidential, you need to be able to express the probability that the cryptographic method fails to provide the desired security property. Understanding this probability is how businesses evaluate risk when they’re building out a new capability. Cryptography can enable new capabilities that might otherwise represent too high a risk. For instance, public-key cryptography and certificate authorities enabled the development of the Secure Socket Layer (SSL) protocol, and this unlocked e-Commerce, making it possible for companies to authenticate to end users, and for end users to engage in a confidential session to conduct business transactions with very little risk. So at the end of the day, I think of cryptography as essentially a tool to reduce the risk of creating new capabilities, especially for business.

Anything else?

Don’t think of cryptography as a guarantee. Think about it as a probability that’s tied to how often you use the cryptographic method.

You have confidentiality if you use the system based on an assumption that you can understand, like “this cryptographic primitive (or block cipher) is a pseudo-random permutation.” Then, if you encrypt 2^32 messages, the probability that the security of your data (confidentiality or authenticity) fails is, let’s say, 2^-72. Those numbers are where people’s eyes may start to gloss over when they hear them, but most engineers can process that information if it’s written down. And people should be expecting that from their solutions.

Once you express it like that, I think it’s clear why we want to move to quantum-safe crypto. The probabilities we tolerate for cryptographic security are very small, typically smaller than 2^-32, around the order of one in four billion. We’re not willing to take much risk, and we don’t typically have to from our cryptographic constructions.

That’s especially true for a company like Amazon. We process billions of objects a day. Even if there’s a one in 2^32 chance that some information is going to spill over, we can’t tolerate such a probability.

Most of cryptography wasn’t built with the cloud in mind. We’re seeing that type of cryptography develop now—for example, cryptographic computing models where you encrypt the data before you store it in the cloud, and you maintain the ability to do some computation on its encrypted form, and the plaintext never exists within the cloud provider’s systems. We’re also seeing core crypto primitives, like the Advanced Encryption Standard, which wasn’t designed for the cloud, begin to show some age. The massive use cases and sheer volume of things that we’re encrypting require us to develop new techniques, like the derived-key mode of AES-GCM that we use in AWS KMS.

What does cloud security mean to you, personally?

I’ll give you a roundabout answer. Before I joined Amazon, I’d been working on quantum-safe cryptography, and I’d been thinking about how to securely distribute an alternative cryptographic solution to the community. I was focused on whether this could be done by tying distribution into a user’s identity provider.

Now, we all have a trust relationship with some entity. For example, you have a trust relationship between yourself and your mobile phone company that creates a private, encrypted tunnel between the phone and your local carrier. You have a similar relationship with your cable or internet provider—a private connection between the modem and the internet provider.

When I looked around and asked myself who’d make a good identity provider, I found a lot of entities with conflicting interests. I saw few companies positioned to really deliver on the promise of next-generation cryptographic solutions, but Amazon was one of them, and that’s why I came to Amazon.

I don’t think I will provide the ultimate identity provider to the world. Instead, I’ve stayed to focus on providing Amazon customers the security they need, and I’m thrilled to be here because of the sheer volume of great cryptographic engineering problems that I get to see on a regular basis. More and more people have their data in a cloud. I have data in the cloud. I’m very motivated to continue my work in an environment where the security and privacy of customer data is taken so seriously.

You live in the Seattle area: When friends from out of town visit, what hidden gem do you take them to?

When friends visit, I bring them to the Amazon Spheres, which are really neat, and the MoPOP museum. For younger people, children, I take them on the Seattle Underground Tour. It has a little bit of a Harry Potter-like feel. Otherwise, the great outdoors! We spend a lot of time outside, hiking or biking.

The AWS Security team is hiring! Want to find out more? Check out our career page.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Matthew Campagna

Matthew is a Sr. Principal Engineer for Amazon Web Services’s Cryptography Group. He manages the design and review of cryptographic solutions across AWS. He is an affiliate of the Institute for Quantum Computing at the University of Waterloo, a member of the ETSI Security Algorithms Group Experts (SAGE), and ETSI TC CYBER’s Quantum Safe Cryptography group. Previously, Matthew led the Certicom Research group at BlackBerry managing cryptographic research, standards, and IP, and participated in various standards organizations, including ANSI, ZigBee, SECG, ETSI’s SAGE, and the 3GPP-SA3 working group. He holds a Ph.D. in mathematics from Wesleyan University in group theory, and a bachelor’s degree in mathematics from Fordham University.

from AWS Security Blog

How to use AWS Secrets Manager client-side caching in .NET


AWS Secrets Manager now has a client-side caching library for .NET that makes it easier to access secrets from .NET applications. This is in addition to client-side caching libraries for Java, JDBC, Python, and Go. These libraries help you improve availability, reduce latency, and reduce the cost of retrieving your secrets. The Secrets Manager cache library does this by serving secrets out of a local cache and eliminating frequent Secrets Manager API calls.

AWS Secrets Manager enables you to automatically rotate, manage, and retrieve secrets throughout their lifecycle. Users and applications can access secrets through a call to Secrets Manager APIs, eliminating the need to hardcode sensitive information in plain text. It offers secret rotation with built-in integration for AWS services such as Amazon Relational Database Service (Amazon RDS) and Amazon Redshift, and it’s also extensible to other types of secrets. Secrets Manager enables you to control access to secrets using fine-grained permissions, and all actions on secrets, including retrievals, are traceable and auditable through AWS CloudTrail.

AWS Secrets Manager client-side caching for .NET extends the benefits of AWS Secrets Manager to a wider range of use cases in .NET applications. These extra benefits are now available without having to spend precious time and effort developing your own caching solution.

In this post, I’ll discuss the following topics:

  • The benefits offered by Secrets Manager client-side caching library for .NET
  • How Secrets Manager client-side caching library for .NET works
  • How to use the Secrets Manager client-side caching library in .NET applications
  • How to extend the Secrets Manager client-side caching library with your own custom logic

The benefits offered by Secrets Manager client-side caching library for .NET

Client-side caching is beneficial in the following ways:

  • Availability: Network links sometimes suffer slowdowns or intermittent breaks. Client-side caching can significantly improve availability by eliminating a large number of API calls.
  • Latency: Retrieving secrets through API calls includes the network latency. Retrieving secrets from the local cache eliminates that latency and, therefore, improves performance.
  • Cost: Each API call to a Secrets Manager endpoint encounters a small charge. Using a local cache saves costs associated with API calls.

Using a client-side cache is a best practice; however, in the same way that you don’t want to reinvent the wheel every time you need one, crafting your own client-side caching solution is suboptimal. The Secrets Manager client-side caching library relieves you from writing your own caching solution while still giving you its benefits. Furthermore, it includes best practices such as:

  • Automatically refreshing cached secrets: The library periodically updates secrets to ensure that your application gets the most recent version of a secret. You can control and change refresh intervals using configuration properties.
  • Integration with your applications: To use this library, just add the dependency to your .NET project and provide the identifier to the secret you want to access in your code.

How Secrets Manager client-side caching library for .NET works

The library is implemented in .NET Standard. This means you can reuse the same library in projects of all flavors of .NET, including .NET Framework, .NET Core, and Xamarin.

Note: Because the AWS Secrets Manager client-side caching library depends on Microsoft.Extensions.Caching.Memory, make sure you add it to your project dependencies.

As an extension to the Secrets Manager .NET SDK, the cache library provides you with an alternative to directly invoking Secrets Manager API methods. You invoke cache library methods and, if the value doesn’t exist in the cache, the cache library invokes the corresponding Secrets Manager methods on your behalf.

The default refresh interval for the “current” version of a secret (the latest value stored in Secrets Manager for that secret) is 1 hour, because the latest version may change from time to time. The library allows you to configure this interval to be higher or lower, per your specific application requirements.

If you request a specific version of a secret by specifying both the secret ID and the secret version parameters, the library sets the refresh interval to 48 hours by default. Because each version of a secret is immutable, there’s no need to refresh it frequently.

You can also enable “last known good value” caching to provide some protection against transient network issues or service outages. When this is enabled, the cache keeps track of the last known good secret value and, if an error occurs while refreshing a secret value from the service, returns that value instead. This feature is disabled by default; you can enable it by setting the EnableLastKnownGoodValueCaching property of the SecretsManagerCacheOptions class to true and passing your SecretsManagerCacheOptions instance to the SecretsManagerCache constructor.
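Here’s a minimal sketch of enabling that option, using the class and property names described in this post (SecretsManagerCacheOptions and EnableLastKnownGoodValueCaching). The exact constructor overload and namespace can vary between library versions, so treat this as an illustration rather than a drop-in snippet.

var options = new SecretsManagerCacheOptions
{
    // Keep returning the last successfully retrieved value if a refresh fails.
    EnableLastKnownGoodValueCaching = true
};

// Assumed overload that takes the client and the options; the custom-extension
// example near the end of this post shows a four-argument overload instead.
var secretsManager = new AmazonSecretsManagerClient();
var cache = new SecretsManagerCache(secretsManager, options);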

The cache library provides a thread-safe implementation for both cache checks and cache entry population. Simultaneous requests for a secret that isn’t yet in the cache therefore result in a single API request to Secrets Manager.

How to use Secrets Manager client-side caching library in .NET applications

You can add the Secrets Manager client-side caching library to your projects either directly or through dependency injection. The package is available through NuGet. In this example, I use NuGet to add the library to my project: open the NuGet Package Manager console, browse for AWSSDK.SecretsManager.Caching, and then select the library and install it.
 

Figure 1: Select the AWSSDK.SecretsManager.Caching library

Before using the cache, you need to have at least one secret stored in your account using AWS Secrets Manager. To create a test secret:

  1. Go to the AWS Console, and then select AWS Secrets Manager.
  2. Select Store a new secret.
  3. For secret type, select Other type of secret, and then add three key/value pairs as shown here:
    
        {
          "Domain": "<yourDomainName>",
          "UserName": "<yourUserName>",
          "Password": "<yourPassword>"
        }  
        

  4. Next, create a cache object, and then invoke its methods with the appropriate parameters. Below is a code snippet that uses the AWS Secrets Manager client-side cache library to access the secret created above. Note that this snippet assumes you’ve added the Newtonsoft.Json library to your project:
    
        public class MyClass : IDisposable
        {
            private readonly IAmazonSecretsManager secretsManager;
            private readonly ISecretsManagerCache cache;

            public MyClass()
            {
                this.secretsManager = new AmazonSecretsManagerClient();
                this.cache = new SecretsManagerCache(this.secretsManager);
            }

            public void Dispose()
            {
                this.secretsManager.Dispose();
                this.cache.Dispose();
            }

            public async Task<NetworkCredential> GetNetworkCredential(string secretId)
            {
                // The cache only calls Secrets Manager when the secret isn't already cached.
                var sec = await this.cache.GetSecretAsync(secretId);

                // The secret was stored as JSON, so parse it and map its keys to a NetworkCredential.
                var jo = Newtonsoft.Json.Linq.JObject.Parse(sec.SecretString);
                return new NetworkCredential(
                    domain: jo["Domain"].ToObject<string>(),
                    userName: jo["UserName"].ToObject<string>(),
                    password: jo["Password"].ToObject<string>());
            }
        }
        

For ASP.NET projects, you can use the library with dependency injection. To do this, first register Secrets Manager caching with the dependency injection service collection in the Startup class of your ASP.NET project:


public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddSecretsManagerCaching();
    }
}

Then, you’ll be able to consume the cache using constructor injection in your classes.


public class MyClass
{
    private readonly ISecretsManagerCache cache;

    public MyClass(ISecretsManagerCache cache)
    {
        // The cache instance is created and owned by the dependency injection container.
        this.cache = cache;
    }

    public async Task<NetworkCredential> GetNetworkCredential(string secretId)
    {
        var sec = await this.cache.GetSecretAsync(secretId);
        var jo = Newtonsoft.Json.Linq.JObject.Parse(sec.SecretString);
        return new NetworkCredential(
            domain: jo["Domain"].ToObject<string>(),
            userName: jo["UserName"].ToObject<string>(),
            password: jo["Password"].ToObject<string>());
    }
}

How to add in-memory encryption and other custom extensions

The Secrets Manager caching library is designed to be extensible with your own custom logic. One possibility is to extend it to encrypt cached secrets in memory, adding another layer of protection for your retrieved secrets. To do this, you implement two of the interfaces included in the library. The library includes the SecretCacheEntry class, which implements the ISecretCacheEntry interface; this is the object that stores secrets in memory. You can create another class that implements the same ISecretCacheEntry interface to add in-memory encryption and decryption.


public class EncryptedSecretCacheEntry : ISecretCacheEntry
{
    public EncryptedSecretCacheEntry(GetSecretValueResponse response, TimeSpan expiry)
    {
        this.VersionId = response.VersionId;
        this.LastRetreived = DateTime.UtcNow;
        this.Name = response.Name;
        this.Expires = this.LastRetreived.Add(expiry);

        if (response.SecretBinary != null && response.SecretBinary.Length > 0)
        {
            using (var ms = response.SecretBinary)
            {
                this.SecretBinary = ms.ToArray();
            }
        }
        else
        {
            this.SecretString = response.SecretString;
        }
    }

    // The secret value is held in memory only in encrypted form and is
    // decrypted on demand when the property is read.
    private byte[] _EncryptedSecretString;
    public string SecretString
    {
        get { return MyCustomCipherService.DecryptString(_EncryptedSecretString); }
        set { _EncryptedSecretString = MyCustomCipherService.EncryptString(value); }
    }

    private byte[] _EncryptedSecretBinary;
    public byte[] SecretBinary
    {
        get { return MyCustomCipherService.Decrypt(_EncryptedSecretBinary); }
        set { _EncryptedSecretBinary = MyCustomCipherService.Encrypt(value); }
    }

    public string VersionId { get; private set; }

    public string Name { get; private set; }

    public string LocalId => $"{this.Name}:{this.VersionId}";

    public DateTime LastRetreived { get; private set; }

    public DateTime Expires { get; private set; }
}

The second step is to implement the ISecretCacheEntryFactory interface:


public class EncryptedSecretCacheEntryFactory : ISecretCacheEntryFactory
{
    public ISecretCacheEntry CreateEntry(GetSecretValueResponse response, TimeSpan expiry)
    {
        return new EncryptedSecretCacheEntry(response, expiry);
    }
}

With these two classes in place, I can now modify the constructor of my SecretsUserClass to add my custom encryption logic to the Secrets Manager cache library:


public SecretsUserClass()
{
    this.secretsManager = new AmazonSecretsManagerClient();
    this.cache = new SecretsManagerCache(
        this.secretsManager, new EncryptedSecretCacheEntryFactory(), new SecretsManagerCacheOptions(), null);
}

You could go even further and fully customize the cache, either by implementing ISecretsManagerCache yourself or by writing a child class that inherits the functionality of SecretsManagerCache and adds new methods to it.
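As a purely hypothetical sketch of the second approach (assuming SecretsManagerCache isn’t sealed), a child class could wrap GetSecretAsync with a convenience method for JSON-formatted secrets like the one created earlier in this post:

public class JsonSecretsManagerCache : SecretsManagerCache
{
    public JsonSecretsManagerCache(IAmazonSecretsManager secretsManager)
        : base(secretsManager)
    {
    }

    // Returns a single field (for example, "Password") from a JSON-formatted secret.
    public async Task<string> GetSecretFieldAsync(string secretId, string fieldName)
    {
        var sec = await this.GetSecretAsync(secretId);
        var jo = Newtonsoft.Json.Linq.JObject.Parse(sec.SecretString);
        return jo[fieldName]?.ToObject<string>();
    }
}

Whether subclassing or reimplementing ISecretsManagerCache is the better fit depends on how much of the default caching behavior you want to keep.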

Conclusion

It’s critical for enterprises to protect secrets from unauthorized access and to adhere to various industry and legislative compliance requirements. Mitigating the risk of compromise often involves complex techniques, significant effort, and cost: applying encryption, managing vaults and HSMs, rotating secrets, auditing access, and so on. Because the level of effort is high, many developers fall back on the much simpler, but substantially riskier, alternatives of hard-coding secrets in application code or storing secrets in plain text. These practices are problematic from a security and compliance point of view, but they need to be understood as symptoms of a more fundamental problem: the complexity of the systems enterprises have built. To address weak security and compliance practices, you have to address that complexity. Complex systems can be simplified and made more secure when they are reusable, accessible, and automated, needing no human interaction.

In this post, I’ve shown how you can improve availability, reduce latency, and reduce the cost of using your secrets by using the Secrets Manager client-side caching library for .NET. I also showed how to extend it with your own custom logic for more advanced use cases, such as in-memory encryption of secrets.

To get started managing secrets, open the Secrets Manager console. To learn more, read How to Store, Distribute, and Rotate Credentials Securely with Secrets Manager or refer to the Secrets Manager documentation. See the AWS Region Table for the list of AWS Regions where Secrets Manager is available.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread in the Secrets Manager forum.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Sepehr Samiei

Sepehr is currently a Senior Solutions Architect at AWS. He started his professional career as a .Net developer, which continued for more than 10 years. Early on, he quickly became a fan of cloud computing and loves to help customers utilise the power of Microsoft tech on AWS. His wife and daughter are the most precious parts of his life, and he and his wife expect to have a son soon!

from AWS Security Blog

New whitepaper available: Architecting for PCI DSS Segmentation and Scoping on AWS

New whitepaper available: Architecting for PCI DSS Segmentation and Scoping on AWS

AWS has published a whitepaper, Architecting for PCI DSS Scoping and Segmentation on AWS, to provide guidance on how to properly define the scope of your Payment Card Industry (PCI) Data Security Standard (DSS) workloads running on the AWS Cloud. The whitepaper looks at how to define segmentation boundaries between your in-scope and out-of-scope resources using cloud native AWS services.

The whitepaper is intended for engineers and solution builders, but it also serves as a guide for Qualified Security Assessors (QSAs) and internal security assessors (ISAs) to better understand the different segmentation controls available within AWS products and services, along with associated scoping considerations.

Compared to on-premises environments, software-defined networking on AWS transforms the scoping process for applications by providing additional segmentation controls beyond network segmentation. Thoughtful design of your applications and selection of security-impacting services for implementing your required controls can reduce the number of systems and services in your cardholder data environment (CDE).

The whitepaper is based on the PCI Council’s Information Supplement: Guidance for PCI DSS Scoping and Network Segmentation.

If you have questions or want to learn more, contact your account executive, or leave a comment below.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Avik Mukherjee

Avik is a Security Architect with more than a decade of experience in IT governance, security, risk, and compliance. He’s a Qualified Security Assessor (QSA) for Payment Card Industry (PCI) Data Security Standard (DSS) and Point-to-Point-Encryption (P2PE) and has deep knowledge of security advisory and assessment work in various industries, including retail, financial, and technology. He’s part of the AWS professional services teams that work with clients to assist them in their journey to transform the security posture of their resources running on AWS. He loves spending time with his family and working on his culinary skills.

from AWS Security Blog

Simplify DNS management in a multi-account environment with Route 53 Resolver

Simplify DNS management in a multi-account environment with Route 53 Resolver

In a previous post, I showed you a solution to implement central DNS in a multi-account environment that simplified DNS management by reducing the number of servers and forwarders you needed when implementing cross-account and AWS-to-on-premises domain resolution. With the release of the Amazon Route 53 Resolver service, you now have access to a native conditional forwarder that will simplify hybrid DNS resolution even more.

In this post, I’ll show you a modernized solution to centralize DNS management in a multi-account environment by using Route 53 Resolver. This solution allows you to resolve domains across multiple accounts and between workloads running on AWS and on-premises without the need to run a domain controller in AWS.

Solution overview

My solution will show you how to solve three primary use cases for domain resolution:

  • Resolving on-premises domains from workloads running in your VPCs.
  • Resolving private domains in your AWS environment from workloads running on-premises.
  • Resolving private domains between workloads running in different AWS accounts.

The following diagram shows the high-level architecture.
 

Figure 1: Solution architecture diagram

In this architecture:

  1. This is the Amazon-provided default DNS server for the central DNS VPC, which we’ll refer to as the DNS-VPC. This is the second IP address in the VPC CIDR range (as illustrated, this is 172.27.0.2). This default DNS server will be the primary domain resolver for all workloads running in participating AWS accounts.
  2. This shows the Route 53 Resolver endpoints. The inbound endpoint will receive queries forwarded from on-premises DNS servers and from workloads running in participating AWS accounts. The outbound endpoint will be used to forward domain queries from AWS to on-premises DNS.
  3. This shows the conditional forwarding rules. For this architecture, we need two rules: one to forward domain queries for the onprem.private zone to the on-premises DNS server through the outbound endpoint, and a second rule to forward domain queries for awscloud.private to the Resolver inbound endpoint in DNS-VPC.
  4. This indicates that these two forwarding rules are shared with all other AWS accounts through AWS Resource Access Manager and are associated with all VPCs in these accounts.
  5. This shows the private hosted zone created in each account with a unique subdomain of awscloud.private.
  6. This shows the on-premises DNS server with conditional forwarders configured to forward queries for the awscloud.private zone to the IP addresses of the Resolver inbound endpoint.

Note: This solution doesn’t require VPC-peering or connectivity between the source/destination VPCs and the DNS-VPC.

How it works

Now, I’m going to show how the domain resolution flow of this architecture works for the three use cases I’m focusing on.

First use case

 

Figure 2: Use case for resolving on-premises domains from workloads running in AWS

First, I’ll look at resolving on-premises domains from workloads running in AWS. If the server with private domain host1.acc1.awscloud.private attempts to resolve the address host1.onprem.private, here’s what happens:

  1. The DNS query will route to the default DNS server of the VPC that hosts host1.acc1.awscloud.private.
  2. Because the VPC is associated with the forwarding rules shared from the central DNS account, these rules will be evaluated by the default Amazon-provided DNS in the VPC.
  3. In this example, one of the rules indicates that queries for onprem.private should be forwarded to an on-premises DNS server.
  4. The forwarding rule is associated with the Resolver outbound endpoint, so the query will be forwarded through this endpoint to an on-premises DNS server.

In this flow, the DNS query that was initiated in one of the participating accounts has been forwarded to the centralized DNS server which, in turn, forwarded this to the on-premises DNS.

Second use case

Next, here’s how on-premises workloads will be able to resolve private domains in your AWS environment:
 

Figure 3: Use case for how on-premises workloads will be able to resolve private domains in your AWS environment

In this case, the query for host1.acc1.awscloud.private is initiated from an on-premises host. Here’s what happens next:

  1. The domain query is forwarded to the on-premises DNS server.
  2. The query is then forwarded to the Resolver inbound endpoint via a conditional forwarder rule on the on-premises DNS server.
  3. The query reaches the default DNS server for DNS-VPC.
  4. Because DNS-VPC is associated with the private hosted zone acc1.awscloud.private, the default DNS server will be able to resolve this domain.

In this case, the DNS query has been initiated on-premises and forwarded to centralized DNS on the AWS side through the inbound endpoint.

Third use case

Finally, you might need to resolve domains across multiple AWS accounts. Here’s how you could achieve this:
 

Figure 4: Use case for how to resolve domains across multiple AWS accounts

Let’s say that host1.acc1.awscloud.private attempts to resolve the domain host2.acc2.awscloud.private. Here’s what happens:

  1. The domain query is sent to the default DNS server of the VPC hosting the source machine (host1).
  2. Because the VPC is associated with the shared forwarding rules, these rules will be evaluated.
  3. A rule indicates that queries for the awscloud.private zone should be forwarded to the Resolver inbound endpoint IP addresses in DNS-VPC, where the Amazon-provided default DNS is used to resolve the query.
  4. Because DNS-VPC is associated with the acc2.awscloud.private hosted zone, the default DNS will use auto-defined rules to resolve this domain.

This use case explains the AWS-to-AWS case where the DNS query has been initiated on one participating account and forwarded to central DNS for resolution of domains in another AWS account. Now, I’ll look at what it takes to build this solution in your environment.

How to deploy the solution

I’ll show you how to configure this solution in four steps:

  1. Set up a centralized DNS account.
  2. Set up each participating account.
  3. Create private hosted zones and Route 53 associations.
  4. Configure on-premises DNS forwarders.

Step 1: Set up a centralized DNS account

In this step, you’ll set up resources in the centralized DNS account. Primarily, this includes the DNS-VPC, Resolver endpoints, and forwarding rules.

  1. Create a VPC to act as DNS-VPC according to your business scenario, either using the web console or from an AWS Quick Start. You can review common scenarios in the Amazon VPC user guide; one very common scenario is a VPC with public and private subnets.
  2. Create resolver endpoints. You need to create an outbound endpoint to forward DNS queries to on-premises DNS and an inbound endpoint to receive DNS queries forwarded from on-premises workloads and other AWS accounts.
  3. Create two forwarding rules. The first rule forwards DNS queries for the onprem.private zone to your on-premises DNS server IP addresses, and the second rule forwards DNS queries for the awscloud.private zone to the IP addresses of the Resolver inbound endpoint.
  4. After creating the rules, associate them with the DNS-VPC that was created in step 1. This allows the Route 53 Resolver to start forwarding domain queries accordingly.
  5. Finally, share the two forwarding rules with all participating accounts. To do that, use AWS Resource Access Manager; you can share the rules with your entire AWS Organization or with specific accounts. (A CLI sketch of steps 3 through 5 follows the note below.)

Note: To be able to forward domain queries to your on-premises DNS server, you need connectivity between your data center and DNS-VPC, which could be established either using site-to-site VPN or AWS Direct Connect.
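Here’s a rough AWS CLI sketch of steps 3 through 5. All IDs, IP addresses, and ARNs below are placeholders, and the awscloud.private rule would be created the same way, pointing at the inbound endpoint IP addresses instead of the on-premises servers.

# Create the rule that forwards onprem.private queries to the on-premises DNS servers
aws route53resolver create-resolver-rule \
    --creator-request-id onprem-private-rule-001 \
    --name forward-onprem-private \
    --rule-type FORWARD \
    --domain-name onprem.private \
    --target-ips Ip=10.10.0.10,Port=53 Ip=10.10.0.11,Port=53 \
    --resolver-endpoint-id rslvr-out-EXAMPLE11111

# Associate each rule with DNS-VPC so Resolver starts forwarding queries from there
aws route53resolver associate-resolver-rule \
    --resolver-rule-id rslvr-rr-EXAMPLE22222 \
    --vpc-id vpc-EXAMPLE33333

# Share both rules with the organization (or with specific accounts) through AWS Resource Access Manager
aws ram create-resource-share \
    --name central-dns-forwarding-rules \
    --resource-arns arn:aws:route53resolver:us-east-1:111122223333:resolver-rule/rslvr-rr-EXAMPLE22222 \
    --principals arn:aws:organizations::111122223333:organization/o-EXAMPLEORGID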

Step 2: Set up participating accounts

For each participating account, you need to configure your VPCs to use the shared forwarding rules, and you need to create a private hosted zone for each account.

  • Accept the shared rules from AWS Resource Access Manager. This step is not required if the rules were shared with your AWS Organization. Then, associate the forwarding rules with the VPCs that host your workloads in each account, as shown in the sketch below. Once associated, the Resolver will start forwarding DNS queries according to the rules.
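The acceptance and association might look like the following sketch (placeholder ARNs and IDs; run the association once per shared rule and per VPC):

# Accept the invitation from AWS Resource Access Manager
# (not needed when the rules were shared with your entire organization)
aws ram accept-resource-share-invitation \
    --resource-share-invitation-arn arn:aws:ram:us-east-1:111122223333:resource-share-invitation/EXAMPLE

# Associate a shared forwarding rule with a workload VPC in this account
aws route53resolver associate-resolver-rule \
    --resolver-rule-id rslvr-rr-EXAMPLE22222 \
    --vpc-id vpc-EXAMPLE44444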

At this point, you should be able to resolve on-premises domains from workloads running in any VPC associated with the shared forwarding rules. To create private domains in AWS, you need to create Private Hosted Zones.

Step 3: Create private hosted zones

In this step, you need to create a private hosted zone in each account with a subdomain of awscloud.private. Use unique names for each private hosted zone to avoid domain conflicts in your environment (for example, acc1.awscloud.private or dev.awscloud.private).

  1. Create a private hosted zone in each participating account with a subdomain of awscloud.private, and associate it with the VPCs running in that account (see the CLI sketch after this list).
  2. Associate the private hosted zone with DNS-VPC. This allows the centralized DNS-VPC to resolve domains in the private hosted zone and act as a DNS resolver between AWS accounts.
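For example, creating the private hosted zone in a participating account might look like this (placeholder Region, VPC ID, and caller reference):

# Create a private hosted zone for this account's subdomain and associate it with a local VPC
aws route53 create-hosted-zone \
    --name acc1.awscloud.private \
    --caller-reference acc1-awscloud-private-001 \
    --vpc VPCRegion=us-east-1,VPCId=vpc-EXAMPLE55555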

Because the private hosted zone and DNS-VPC are in different accounts, you need to associate the private hosted zone with DNS-VPC. To do that, create an authorization from the account that owns the private hosted zone, and then accept this authorization from the account that owns DNS-VPC. You can do that using the AWS CLI:

  1. In each participating account, create the authorization using the private hosted zone ID, the region, and the VPC ID that you want to associate (DNS-VPC).
    
        aws route53 create-vpc-association-authorization --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=<region>,VPCId=<vpc-id>
    

  2. In the centralized DNS account, associate the DNS-VPC with the hosted zone in each participating account.
    
        aws route53 associate-vpc-with-hosted-zone --hosted-zone-id <hosted-zone-id> --vpc VPCRegion=<region>,VPCId=<vpc-id>    
    

Step 4: Configure on-premises DNS forwarders

To be able to resolve subdomains within the awscloud.private domain from workloads running on-premises, you need to configure conditional forwarding rules that forward domain queries to the two IP addresses of the Resolver inbound endpoint that was created in the central DNS account. Note that this requires connectivity between your data center and DNS-VPC, which could be established using either site-to-site VPN or AWS Direct Connect.

Additional considerations and limitations

Thanks to the flexibility of Route 53 Resolver and conditional forwarding rules, you can control which queries to send to central DNS and which ones to resolve locally in the same account. This is particularly important when you plan to use certain AWS services, such as AWS PrivateLink or Amazon Elastic File System (EFS), because domain names associated with these services need to be resolved locally in the account that owns them. In this section, I describe two use cases that require additional consideration.

  1. Interface VPC Endpoints (AWS PrivateLink)

    When you create an AWS PrivateLink interface endpoint, AWS generates endpoint-specific DNS hostnames that you can use to communicate with the service. For AWS services and AWS Marketplace partner services, you can optionally enable private DNS for the endpoint. This option associates a private hosted zone with your VPC. The hosted zone contains a record set for the default DNS name for the service (for example, ec2.us-east-1.amazonaws.com) that resolves to the private IP addresses of the endpoint network interfaces in your VPC. This enables you to make requests to the service using its default DNS hostname instead of the endpoint-specific DNS hostnames.

    If you use private DNS for your endpoint, you have to resolve DNS queries to the endpoint locally in the account and use the default DNS provided by AWS. So, in this case, I recommend that you resolve domain queries for amazonaws.com locally and not forward these queries to central DNS.

  2. Mounting EFS with a DNS name

    You can mount an Amazon EFS file system on an Amazon EC2 instance using DNS names. The file system DNS name automatically resolves to the mount target’s IP address in the Availability Zone of the connecting Amazon EC2 instance. To be able to do that, the VPC must use the default DNS provided by Amazon to resolve EFS DNS names.

    If you plan to use EFS in your environment, I recommend that you resolve EFS DNS names locally and avoid sending these queries to central DNS, because clients would otherwise not receive answers optimized for their Availability Zone, which might result in higher operation latencies and less durability.

Summary

In this post, I introduced a simplified solution to implement central DNS resolution in a multi-account and hybrid environment. This solution uses Amazon Route 53 Resolver, AWS Resource Access Manager, and native Route 53 capabilities, and it reduces complexity and operational effort by removing the need for custom DNS servers or forwarders in your AWS environment.

If you have feedback about this blog post, submit comments in the Comments section below. If you have questions about this blog post, start a new thread in the AWS forums.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

Author

Mahmoud Matouk

Mahmoud is part of our worldwide public sector Solutions Architect team, helping higher education customers build innovative, secure, and highly available solutions using various AWS services.

from AWS Security Blog

AWS and the CLOUD Act

AWS and the CLOUD Act

While news of Brexit dominates headlines in the United Kingdom, another important event took place recently in London. U.S. Deputy Assistant Attorney General Richard W. Downing addressed the myths and realities of the Clarifying Lawful Overseas Use of Data Act (“CLOUD Act”) in a speech at the Academy of European Law Conference. Following the speech, the U.S. Department of Justice (DOJ) published a whitepaper and FAQ clarifying the purpose and scope of the CLOUD Act and addressing many of the misunderstandings of this law. I strongly encourage people to read the speech, the DOJ’s whitepaper, and the FAQ to understand what the CLOUD Act actually does and does not do. Simply put, the CLOUD Act provides minor updates to a decades-old law that is strictly limited to helping law enforcement agencies fight and deter international criminal and terrorist activity. It does not, as some have suggested, give U.S. law enforcement agencies free access to data stored in the cloud.

We see the DOJ’s speech and guidance as a step in the right direction, but more needs to be done by governments around the world to educate cloud computing customers about important issues regarding access to data. This is why I want to take some time today to highlight a few of the key misunderstandings about the CLOUD Act in order to help customers understand that this law should not change how they use cloud services.

Law enforcement access to data over the last 30 years

In 1986, Congress enacted the Stored Communications Act (“SCA”), which addressed law enforcement access to electronic communications. Although the SCA was considered forward-looking at the time, courts have struggled over the years to apply it to technologies like internet applications and cloud computing that did not exist when the SCA was passed. One area of debate related to whether U.S. law enforcement agencies could obtain data located outside the United States. The CLOUD Act resolved this debate. It made clear that providers subject to U.S. law, such as an entity doing business in the United States (including foreign-based entities with U.S. subsidiaries), can be served with a warrant and court order under the SCA to provide data under their control, regardless of where it is stored.

To be clear, despite suggestions to the contrary, the CLOUD Act does not introduce a new concept. Governments across the globe have long had the ability to obtain evidence of crimes located outside of their jurisdiction. As the DOJ noted in its whitepaper, most countries require disclosure of data wherever it is stored, consistent with the Budapest Convention, which was the first international treaty aimed at improving cooperation and investigations in cyber and computer crimes. Indeed, French courts have long allowed police to obtain data outside of France so long as it is accessible from a computer in France. Most recently, in February 2019, the United Kingdom passed the Crime (Overseas Production Orders) Act, which allows U.K. law enforcement agencies to obtain stored electronic data from a company or person based outside of the United Kingdom.

This practice is consistent with a centuries-old principle of international cooperation. Countries use a number of tools, ranging from domestic laws to international treaties, to seek potential evidence located beyond their borders and to establish a tradition of cross-border cooperation. This serves as the foundation for what trusted and respected organizations like Europol do, and the CLOUD Act simply reflects what these other law enforcement agencies and countries have been doing for many years.

Understanding the CLOUD Act

One of the most common misunderstandings about the CLOUD Act is that it is applicable to only U.S. companies. This is not true. The CLOUD Act applies to all electronic communication service or remote computing service providers that are subject to U.S. jurisdiction, including email providers, telecom companies, social media sites, and cloud providers, whether they are established in the United States or in another country. This means any foreign company with an office or subsidiary in the United States is subject to the CLOUD Act. As Mr. Downing said in his speech, U.S. courts have ruled that even non-U.S. websites that have been used by customers based in the United States have been subject to U.S. jurisdiction and therefore could be subject to the CLOUD Act.

Another common misunderstanding about the CLOUD Act is that it somehow provides the U.S. government with unfettered access to data held by cloud providers. This is simply false. The CLOUD Act does not grant law enforcement agencies free access to data stored in the cloud. Law enforcement can compel service providers to provide data only by meeting the rigorous legal standards for a warrant issued by a U.S. court. U.S. law sets a high bar for obtaining a warrant, requiring that an independent judge conclude that law enforcement has reasonable grounds to request the information, the information requested directly relates to a crime, and that the request is made clearly, accurately, and proportionally. This is the opposite of unfettered access.

When AWS receives a request for data located outside the United States, we have tools to challenge it and a long track record of doing so. In fact, our challenges typically begin well before we go to a court. Each request from law enforcement agencies is reviewed by a team of legal professionals. As part of that review, we assess whether the request would violate the laws of the United States or of the foreign country in which the data is located, or would violate the customer’s rights under the relevant laws. We rigorously enforce applicable legal standards to limit – or reject outright – any law enforcement request for data coming from any country, including the United States. We actively push back on law enforcement agencies to address concerns, which frequently results in them withdrawing their request.

In the event we cannot resolve a dispute, we do not hesitate to go to court. Amazon has a history of formally challenging government requests for customer information that we believe are overbroad or otherwise inappropriate. We will continue to resist requests, including those that conflict with local law such as GDPR in the European Union, to do everything we can to protect customer data. We will also continue to notify customers before disclosing content, and we provide advanced encryption and key management services that customers can use to protect their content further. We have industry leading encryption services that give our customers a range of options to encrypt data in-transit and at rest, and to manage encryption/decryption keys – because encrypted content is rendered useless without the applicable decryption keys.

The CLOUD Act did not change cloud providers’ ability to protect their customers

AWS is vigilant about its customers’ privacy and security. We are committed to providing all customers, including governmental agencies who trust us with their most sensitive content, with the most extensive set of security services and features to help ensure complete control of their data. The CLOUD Act did not alter or weaken this commitment. On the contrary, the CLOUD Act recognizes the right of cloud providers to challenge requests that conflict with another country’s laws or national interests and requires that governments respect local rules of law. Additionally, foreign governments concerned about the risk of government data disclosure may be entitled to sovereign immunity. The United States recognizes that under the principle of sovereign immunity foreign governments have effective legal means under U.S. law to prevent disclosure of their data.

Customers around the world can continue to use AWS in compliance with local laws

At AWS, we are constantly helping our customers and partners understand their position in relation to new compliance standards and laws. It is the only way we believe organizations can ensure that they are able to protect their end users. After you have read Mr. Downing’s speech and the documents from DOJ, you should visit our webpage dedicated to the CLOUD Act, which has FAQs, whitepapers, and other resources for customers and APN partners. On that webpage, you can learn the facts about the limited impact of the CLOUD Act and understand its application to AWS.

The reality is that cloud computing is positively impacting lives around the world in all kinds of ways. With AWS technologies, our customers are creating forward-thinking technologies that shape the ways we live and learn, whether through photo sharing and video streaming, increased access to financial services and e-commerce/trade, processing geospatial data for new discoveries, creating or promoting greater opportunities for education and skills development, or helping industries evolve with accessible AI/ML services. Our customers are also leveraging the cloud for good: working to prevent human trafficking, prevent violent crime, improve citizen services in cities, and to make medical breakthroughs. What would be incredibly disappointing would be for all of this to be slowed due to fundamental misunderstandings about the CLOUD Act. The information recently provided by DOJ regarding the CLOUD Act is a helpful step toward greater understanding of the facts, but we hope this post and related resources will bring insight and clarity to this debate.

Michael Punke is the Vice President of Global Public Policy at AWS.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.

from AWS Security Blog

Join us at AWS re:Inforce for the Builders Fair!

Join us at AWS re:Inforce for the Builders Fair!

AWS is launching its first conference dedicated to cloud security, AWS re:Inforce, which will take place June 25-26, 2019 at the Boston Convention and Exhibition Center.

At AWS, we encourage everyone to be a builder, to learn and be curious, and to use AWS products and services to explore the Art of the Possible. At re:Inforce, you’ll have an opportunity to see our “culture of building” at the AWS re:Inforce Builders Fair. The Builders Fair is a set of “science fair” projects from AWS employees that highlight different aspects of security built upon the AWS cloud. You’ll see how AWS services can be used to solve real-world security problems and you’ll get ideas that you can use in your own organization.

The Builders Fair features eight teams of AWS employees from Brazil, Chile, China, and the United States who were chosen out of nearly 100 submissions to our call for presenters. Every project was reviewed by a team of five judges and evaluated against a number of criteria, including the subject and the services used. We looked for submissions that were relevant to both the current and the future state of cloud security. The selected projects cover a number of areas, including data anonymization, chaos testing, detecting social engineering, voice services, and application protection. We’re really excited to share them with you.

Check out the re:Inforce website to get your conference tickets (which include access to the Builders Fair), or view Builders Fair sessions. Please stop by the Builders Fair, meet our team members, and consider the Art of the Possible when it comes to security backed by the power of the AWS cloud.

Want more AWS Security news? Follow us on Twitter.

Author photo: Ram Ramani

Ram Ramani

Ram is a Solutions Architect on the Security and Compliance team at AWS.

Author Photo: Jeff Levine

Jeff Levine

Jeff is a Senior Solutions Architect on the Security and Compliance team at AWS.

from AWS Security Blog