
Closed Loop Security and Compliance Helps You Safely Migrate to and Expand AWS Usage


By Bashyam Anant, Vice President, Product Management at Cavirin Systems
By Suresh Kasinathan, Cloud Security Architect at Cavirin Systems
By Naveen Ramachandrappa, Sr. Machine Learning Engineer at Cavirin Systems


DevOps staff in many organizations are one misconfiguration away from exposing their Amazon Web Services (AWS) resources to attackers as they migrate to the cloud and grow their adoption of existing and new AWS services.

In this post, we propose “Closed Loop Security” based on unifying proactive and reactive risk signals as a key strategy for DevOps staff to protect their AWS infrastructure from misconfigurations and vulnerabilities.

Cavirin Systems is an AWS Partner Network (APN) Advanced Technology Partner with the AWS Security Competency. Their solution helps organizations leverage the cost savings and agility of the cloud without increasing operational risk or reducing their security posture.

If you want to be successful in today’s complex IT environment, and remain that way tomorrow and into the future, teaming up with an AWS Competency Partner like Cavirin is The Next Smart.

Closed Loop Security

Closed Loop Security is a refinement of the NIST Cybersecurity Framework (NIST CSF). It unifies proactive assessment of configuration and vulnerability checks with reactive risk signals from monitoring systems like AWS CloudTrail and Amazon CloudWatch, as well as threat detection systems like Amazon GuardDuty.

Put together, we believe Closed Loop Security helps organizations protect their AWS resources despite the volume, variety, and velocity of AWS IaaS and PaaS services adoption.

Like NIST CSF, Closed Loop Security has five steps, visualized in Figure 1 and described in detail below.


Figure 1: Closed Loop Security.

Step 1: Identify

Cavirin discovers and identifies cloud resources using AWS Command Line Interface (CLI) and SSH (Linux) or WinRM (Windows) sessions for Amazon Elastic Compute Cloud (Amazon EC2) instances.

With this information, we build Cavirin’s inventory of more than 30 AWS resource types at the compute, storage, networking, container, and PaaS layers. Each resource has a criticality (0.8 = Low, 5 = High) based on an assessment of confidential data, system integrity, and availability requirements.

Without this step, DevOps users have no visibility into what they are consuming in AWS, and as a result cannot protect their AWS resources.


Figure 2 – Single pane inventory of more than 30 AWS resource types.

Step 2: Protect

In the next step, we assess AWS resources by evaluating 100,000+ configuration and vulnerability controls sourced from AWS best practices, Cavirin thought leadership, vulnerability feeds, threat detection feeds such as GuardDuty, the Center for Internet Security (CIS), Defense Information Systems Agency (DISA), and more.

Examples of configuration controls include:

  • Are any Amazon Simple Storage Service (Amazon S3) buckets open to the public?
  • Do any Security Groups or NACLs allow ingress on all ports?
  • Do any Amazon EC2 instances have critical vulnerabilities?
  • Do any Amazon EC2 instances have GuardDuty findings (cryptomining, suspicious inbound/outbound connections) flagged against them?
  • Are weak passwords in use?
  • Have AWS Identity and Access Management (IAM) and SSH certificates been rotated per best practices?
  • Are admin permissions associated with Amazon S3 buckets restricted to users with admin roles?
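To make the flavor of these checks concrete, here is a minimal sketch of two of them in Python. The data shapes loosely mirror the output of the AWS APIs (`describe-security-groups`, `get-bucket-acl`), but the rule logic is illustrative, not Cavirin’s actual control definitions:

```python
def allows_all_ingress(security_group):
    """Flag a security group whose ingress is open to the world on all ports."""
    for rule in security_group.get("IpPermissions", []):
        open_to_world = any(
            r.get("CidrIp") == "0.0.0.0/0" for r in rule.get("IpRanges", [])
        )
        all_ports = rule.get("IpProtocol") == "-1" or (
            rule.get("FromPort") == 0 and rule.get("ToPort") == 65535
        )
        if open_to_world and all_ports:
            return True
    return False


def bucket_is_public(bucket_acl):
    """Flag an S3 bucket ACL that grants access to all users."""
    all_users = "http://acs.amazonaws.com/groups/global/AllUsers"
    return any(
        grant.get("Grantee", {}).get("URI") == all_users
        for grant in bucket_acl.get("Grants", [])
    )


# Hypothetical sample data in the shape of the EC2 API response.
risky_sg = {"IpPermissions": [{"IpProtocol": "-1",
                               "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}]}
safe_sg = {"IpPermissions": [{"IpProtocol": "tcp", "FromPort": 443,
                              "ToPort": 443,
                              "IpRanges": [{"CidrIp": "10.0.0.0/8"}]}]}
```

A production assessment would evaluate thousands of such checks against live API responses; the point here is only that each control reduces to a mechanical predicate over resource configuration.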

Of course, no DevOps user would knowingly leave resources in a risky state. However, with more than 30 AWS services in Cavirin’s catalog potentially in use and multiple users making configuration changes, it’s very easy to make mistakes.

One of the most common scenarios for misconfiguration among Cavirin customers is when a DevOps user temporarily opens Security Group or NACL ports in a pre-production AWS account for the sake of troubleshooting, and inadvertently pushes these configurations to their production AWS deployment.

Some customers use Cavirin to assess a baseline AWS environment created using an organization’s AWS CloudFormation template. Cavirin’s solution may identify tweaks to the template that result in a more secure baseline infrastructure.

Step 3: Predict

Once Cavirin assesses configuration and vulnerability controls for your AWS infrastructure, it knows which controls are passing versus failing on each AWS resource in Step 2.

Next, Cavirin applies a proprietary scoring algorithm which results in a score (0 = High Risk, 100 = Low Risk) for each resource in your AWS account.

The scoring is based on the following tradeoffs:

  • Weight (0-10) associated with the technical control assigned by a proprietary machine-learning classifier; 0 is used for informational findings, while 10 represents critical findings.
  • Resource criticality.
  • Number of resources failing a given configuration or vulnerability check.

In simple terms, high-weighted technical controls that are failing on lots of critical resources will result in a low score (implying high risk) in our scoring algorithm.
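Cavirin’s scoring algorithm is proprietary, but the stated tradeoffs can be illustrated with a toy formula. In the sketch below (a hypothetical formula, not the real one), heavier control weights, higher resource criticality, and more failing controls all push the 0–100 score down:

```python
def resource_score(failed_control_weights, criticality, total_controls):
    """Illustrative 0-100 score (100 = low risk, 0 = high risk).

    `failed_control_weights` lists the 0-10 weights of controls failing on
    the resource; `criticality` ranges from 0.8 (low) to 5 (high). This is
    a sketch of the stated tradeoffs, not Cavirin's proprietary algorithm.
    """
    if total_controls == 0:
        return 100.0
    max_weight = 10.0
    max_criticality = 5.0
    # Fraction of the achievable penalty actually incurred...
    penalty = sum(failed_control_weights) / (max_weight * total_controls)
    # ...scaled by how critical the resource is.
    penalty *= criticality / max_criticality
    return round(100.0 * (1.0 - penalty), 1)


# A clean critical resource versus one failing two critical-weight controls.
low_risk = resource_score([], criticality=5, total_controls=4)
high_risk = resource_score([10, 10], criticality=5, total_controls=4)
```

Note how the same failures on a low-criticality resource (criticality 0.8) would leave the score much higher, which is exactly the prioritization behavior described above.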

For a DevOps user, scoring serves several purposes:

  • Create change management plans by prioritizing findings that offer the greatest security posture improvement.
  • Slice and dice scores from the company level to a resource group to a resource type to an individual resource to figure out which resource types need immediate attention versus those that can wait.
  • Understand the efficacy of change management by assessing trendlines of scores. Scores with an upward trend imply your actions are having the intended effect.
  • Understand gaps in CloudFormation templates that may be used to create secure baselines.

Without scoring, DevOps users can be overwhelmed by the sheer number of configuration and vulnerability issues in their environment, leading to inaction and greater risk.

Step 4: Respond

With the help of scoring, DevOps users have a prioritized list of findings they can remediate, using one of the following options:

  • Enact changes via the AWS Management Console, using Cavirin reports that detail remediation steps.
  • Modify CloudFormation templates, using remediation steps in Cavirin reports.
  • Publish security findings to ticketing systems (JIRA, ServiceNow), notification systems (Slack, PagerDuty) and SIEMs (Splunk) for follow-up through those systems.
  • Serverless remediation of cloud resources using AWS Lambda functions.
  • Ansible playbooks remediation for operating system (OS) resources.

Once remediation steps are implemented, an organization should be able to achieve a security baseline.


Figure 3 – Create remediation request as a JIRA issue.

Step 5: Monitor

Achieving a security baseline is a great start. However, infrastructure changes happen continuously. Incorporating signals from monitoring systems like CloudTrail and CloudWatch, and threat detection platforms like GuardDuty, is an essential aspect of Closed Loop Security.

Cavirin closes the loop by enacting the following:

  • As AWS resources log events to CloudTrail, Cavirin evaluates them against its technical controls. Cavirin also tracks new and deleted resources through CloudTrail.
  • Cavirin accumulates a change log in the form of resources failing technical controls, new resources and deleted resources.
  • If the extent of these changes exceeds a customer-configurable threshold, Cavirin reassesses the customer’s AWS account using the protect workflow of Step 2, and the cycle continues through the predict and respond steps.
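The change-accumulation logic above can be sketched in a few lines. The class and field names are illustrative; only the idea (accumulate changes, reassess past a customer-configurable threshold) comes from the description above:

```python
class ChangeLog:
    """Accumulates changes and signals when a full reassessment is due."""

    def __init__(self, threshold):
        self.threshold = threshold   # customer-configurable
        self.changes = []

    def record(self, change):
        """Record a failed control, or a new or deleted resource."""
        self.changes.append(change)

    def needs_reassessment(self):
        return len(self.changes) >= self.threshold

    def reset(self):
        """Called after the protect workflow reruns."""
        self.changes = []


log = ChangeLog(threshold=3)
log.record({"type": "control_failed", "resource": "sg-123"})
log.record({"type": "resource_created", "resource": "i-abc"})
triggered_early = log.needs_reassessment()   # only 2 changes so far
log.record({"type": "resource_deleted", "resource": "vol-9"})
triggered = log.needs_reassessment()         # threshold reached
```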

This approach of connecting reactive risk signals to proactive technical controls assessment is at the heart of Closed Loop Security. For DevOps users, it’s a breakthrough because it:

  • Reduces the time and cost to react to alerts by linking them to technical controls, risk scoring, and remediation.
  • Minimizes operator alert fatigue by dispositioning alerts automatically.
  • Applies risk-based scoring to alerts.

Closed Loop Security Architecture

Cavirin’s Closed Loop Security has two components that can be used together or separately: a monitoring framework and a remediation framework.

Cavirin is available on AWS Marketplace, and advanced users can create custom deployments using installation scripts. In either case, Cavirin is deployed in your AWS account on an Amazon EC2 instance whose associated IAM role has read-only permissions to discover and assess your AWS account.

Closed Loop Monitoring Framework

Once Cavirin is deployed in your account, the setup of monitoring is handled transparently by our solution.

The monitoring setup scripts invoke the AWS CLI and create the artifacts outlined in Figure 4 below, including an Amazon Simple Queue Service (SQS) queue, CloudWatch Alarms, and a Cavirin-authored Lambda function with least-privilege IAM permissions.

Note that the monitoring framework uses different IAM permissions than the Cavirin AMI to ensure role separation between the protect and monitor activities.

As Figure 4 demonstrates, Cavirin’s monitoring framework automates the following steps, though these are largely hidden from the Cavirin user:

  1. An AWS service logs events (for example, a Security Group change) via CloudTrail.
  2. CloudWatch Alarms filter events based on what Cavirin can analyze. These alarms are configured as part of Cavirin’s setup of its monitoring framework.
  3. CloudWatch acts on alerts by triggering a Cavirin-authored Lambda function, which is deployed automatically by Cavirin and stays in place until the customer uninstalls Cavirin.
  4. The Lambda function processes the alerts and attempts to match them to policies available in Cavirin. Either way, it publishes alerts in a Cavirin format to an SQS queue.
  5. Cavirin periodically polls the SQS queue for new alerts.
  6. The SecOps users view alerts within the Cavirin dashboard.


Figure 4 – Closed Loop Monitoring Framework.
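Steps 4 and 5 of the monitoring flow can be sketched as a small handler. The policy catalog, the alert fields, and the Python list standing in for the SQS queue are all hypothetical; this only illustrates matching an event to a policy and publishing the alert either way:

```python
import json


def normalize_alert(cloudtrail_event, policy_catalog):
    """Match a CloudTrail event to a known policy; emit an alert either way.

    The catalog keys and alert schema are hypothetical, not Cavirin's format.
    """
    event_name = cloudtrail_event.get("eventName")
    return {
        "source_event": event_name,
        "matched_policy": policy_catalog.get(event_name),  # None if no match
        "resource": cloudtrail_event.get("requestParameters", {}).get("groupId"),
    }


def lambda_handler(event, queue):
    """Process alert records and publish them to the queue (a list stands in
    for the SQS queue here)."""
    catalog = {"AuthorizeSecurityGroupIngress": "SG-INGRESS-REVIEW"}
    for record in event["Records"]:
        alert = normalize_alert(json.loads(record["body"]), catalog)
        queue.append(json.dumps(alert))
    return len(queue)


# A hypothetical ingress-rule change arriving as a CloudTrail event.
queue = []
event = {"Records": [{"body": json.dumps({
    "eventName": "AuthorizeSecurityGroupIngress",
    "requestParameters": {"groupId": "sg-123"},
})}]}
published = lambda_handler(event, queue)
```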

Closed Loop Remediation Framework

Once Cavirin is deployed in your account, the setup of remediation via AWS Lambda is handled transparently by Cavirin. The setup scripts create the artifacts outlined in Figure 5 below, including an Amazon Simple Notification Service (SNS) topic, SQS queue, CloudWatch Alarms, and a Cavirin-authored Lambda function with the least IAM privileges.

We chose SQS because we require FIFO (first-in, first-out) behavior for remediation confirmations, as will become apparent below.

As Figure 5 demonstrates, Cavirin’s remediation framework automates the following steps, though these are largely hidden from the Cavirin user:

  1. SecOps picks one or more policies to remediate from the Prioritized Issues list within the Cavirin dashboard. The Prioritized Issues list is a result of the predict step of Closed Loop Security.
  2. Cavirin publishes a remediation request to an SNS topic, which is created automatically by Cavirin as part of the remediation setup.
  3. A Cavirin-authored Lambda function subscribes to remediation requests. This function is deployed automatically by Cavirin as part of the remediation setup.
  4. The Lambda function performs remediation and posts remediation confirmations to an SQS queue, which is set up automatically by Cavirin as part of the remediation setup.
  5. Cavirin periodically polls the message queue for remediation confirmations.
  6. SecOps users view alerts related to completed remediations. Completed remediations contribute to Cavirin’s change log for the customer’s AWS account.
  7. If the extent of remediation confirmations, policy failures, deleted resources, and new resources exceeds a customer-configurable threshold, Cavirin reassesses the customer’s AWS account using the protect workflow of Step 2, and the cycle continues through the predict and respond steps.


Figure 5 – Closed Loop Remediation Framework.
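The request/confirmation round trip (steps 2 through 5) can be sketched with an in-memory FIFO queue standing in for SQS. All names are illustrative; the point is that confirmations drain in the order remediations complete:

```python
from collections import deque

# Stand-in for the SQS FIFO queue of remediation confirmations.
confirmation_queue = deque()


def remediate(request):
    """Stand-in for the Lambda remediation of step 4."""
    # ... perform the actual change via AWS APIs (omitted in this sketch) ...
    confirmation_queue.append({"policy": request["policy"], "status": "done"})


def poll_confirmations():
    """Stand-in for step 5: drain confirmations in FIFO order."""
    confirmed = []
    while confirmation_queue:
        confirmed.append(confirmation_queue.popleft())
    return confirmed


# Two hypothetical remediation requests published from the dashboard.
remediate({"policy": "close-port-22"})
remediate({"policy": "rotate-iam-keys"})
confirmations = poll_confirmations()
```

FIFO ordering matters here because each confirmation feeds the change log, and out-of-order delivery could misstate which remediations have actually landed.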

Closed Loop Security in Action

We demonstrate Closed Loop Security through two scenarios.

Scenario 1: Security Group Ingress Rules

The scenario starts with an AWS account that has a Security Group with several ports open for inbound traffic, which is a risky configuration.

Figure 6 shows the current CyberPosture score for Security Groups based on the identify, protect, and predict steps of Closed Loop Security. The prioritized issues shown on the right side of Figure 6 list Security Group inbound ports with their improvement potential.


Figure 6 – CyberPosture Score with Open Security Groups

Remediating the issue (closing an open port) is one-click away as shown in Figure 7.


Figure 7 – One-click remediation.

After remediation, the change is acknowledged via a CloudTrail alert shown to the user in Figure 8.


Figure 8 – Remediation confirmation viewed through CloudTrail alerts.

The next assessment shows the improved score.


Figure 9 – CyberPosture score after remediation.

Scenario 2: Amazon GuardDuty Findings

Given an AWS service account with the appropriate permissions, Cavirin can ingest GuardDuty findings that are accessible via one or more GuardDuty detectors.

Cavirin ingests all documented Amazon GuardDuty findings, including cryptominers and suspicious API, user, and machine activity. These findings are mapped to Amazon EC2 instances or AWS accounts discovered by Cavirin, and reduce their scores based on our algorithm.
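As an illustration of how findings might depress a score, the sketch below maps each finding’s numeric GuardDuty severity to a penalty. The penalty formula is hypothetical, not Cavirin’s algorithm, and the finding types shown are real GuardDuty finding type names used only as sample data:

```python
def apply_findings(base_score, findings):
    """Reduce a resource's 0-100 score by the summed finding severities.

    GuardDuty assigns each finding a numeric severity; treating that value
    directly as a score penalty is an illustrative assumption.
    """
    penalty = sum(f["severity"] for f in findings)
    return max(0.0, round(base_score - penalty, 1))


findings = [
    {"type": "CryptoCurrency:EC2/BitcoinTool.B", "severity": 8.0},
    {"type": "Recon:EC2/PortProbeUnprotectedPort", "severity": 2.0},
]
new_score = apply_findings(90.0, findings)
```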

The scenario reflects the following steps:

  • An Amazon EC2 instance, IAM entity (user, group, role), or API exhibits suspicious activity. This is reported as a finding in GuardDuty and viewed as an alert in Cavirin.


Figure 10 – Amazon GuardDuty findings list.

  • Cavirin scores the affected Amazon EC2 instances or AWS accounts and reflects this on the dashboard and in the prioritized issues.


Figure 11 – CyberPosture score due to Amazon GuardDuty findings.

The user can act on these alerts in the AWS Management Console by quarantining Amazon EC2 instances or disabling IAM users, and then mark the findings as read within Cavirin.

In subsequent assessments, the CyberPosture score increases.

Closed Loop Compliance

Cavirin’s approach to compliance mirrors Closed Loop Security with a few significant differences.

For compliance frameworks like HIPAA, PCI, GDPR, and AICPA SOC2, Cavirin has developed a machine learning Recommender System that accomplishes the following:

  • Identifies the best-fit technical control from Cavirin’s technical controls repository.
  • Assigns a similarity score to each technical control, which can be used to create differential weights for the technical controls that evaluate a compliance control.
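As a toy illustration of the recommender idea (the real system uses machine learning; plain token overlap stands in for it here), a compliance requirement can be matched against a repository of technical controls, with the similarity score doubling as a weight. All control texts below are made up:

```python
def jaccard(a, b):
    """Token-overlap similarity; a stand-in for the actual ML similarity model."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)


def recommend(compliance_control, technical_controls, top_n=2):
    """Rank technical controls by similarity to a compliance requirement.

    Returns (control, similarity) pairs; the similarity can serve as the
    differential weight when evaluating the compliance control.
    """
    scored = [(tc, jaccard(compliance_control, tc)) for tc in technical_controls]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_n]


controls = [
    "ensure s3 bucket access logging is enabled",
    "ensure iam password policy requires minimum length",
    "ensure cloudtrail log file validation is enabled",
]
best = recommend("audit logging must be enabled for storage", controls)
```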

Compliance policy packs contribute to a compliance posture score, which can be sliced and diced by environment, resource group, policy pack, and resource type to create prioritized action plans.

Next Steps

We believe that Closed Loop Security and compliance as described in this post can help organizations safely migrate to and expand their AWS usage.

You can try Cavirin through AWS Marketplace >>

The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.



Cavirin Systems – APN Partner Spotlight

Cavirin is an AWS Security Competency Partner. Their solution helps organizations leverage the cost savings and agility of the cloud without increasing operational risk or reducing their security posture.

Contact Cavirin | Solution Overview | Buy on Marketplace

*Already worked with Cavirin? Rate this Partner

*To review an APN Partner, you must be an AWS customer that has worked with them directly on a project.

from AWS Partner Network (APN) Blog

How to Benchmark and Prioritize Security Threats in Amazon GuardDuty Using Sumo Logic


By Bruno Kurtic, Founding VP, Product & Strategy at Sumo Logic
By Sridhar Karnam, Sr. Director of Product Marketing at Sumo Logic


Users looking for enhanced security operations within their Amazon Web Services (AWS) environment can utilize Sumo Logic Global Intelligent Service (GIS) for Amazon GuardDuty.

This solution allows organizations to separate the signal from the noise within their security alerts, helping them more accurately focus investigations and resources.

With Sumo Logic, an AWS Partner Network (APN) Advanced Technology Partner with AWS Competencies in Security, Data & Analytics, and DevOps, customers can compare their environment security data to the aggregate AWS global landscape to understand if a potential attack is targeting them specifically.

This invaluable comparison data allows companies to bolster their security efforts by proactively identifying and remediating threats.

In this post, we will go over the baselining capability Sumo Logic provides for Amazon GuardDuty security threats, and how the security threat benchmark gives you insight into what’s normal and expected behavior.

We’ll explore how to prioritize rare threats and events that analysts often miss, and we’ll look at optimizing the security configurations to follow the industry’s best practices.

Lastly, we’ll see how Sumo Logic customer ThoughtWorks has used the benchmarking to detect rare security threats.


Sumo Logic Global Intelligent Service (GIS) for Amazon GuardDuty provides customers with a baseline of what’s normal, what’s expected, and ways to dig deeper into the long tail of rare security events that analysts would typically miss.

Sumo Logic GIS for GuardDuty provides real-time insight and actionable intelligence about technology adoption trends, deployment, and architecture. It generates continuous machine learning and statistical baselines for key performance indicators (KPIs) and key risk indicators (KRIs) from Amazon GuardDuty’s threat detection service.

Those baselines are used by Sumo Logic customers to benchmark, prioritize, and optimize security configuration and detection for their AWS accounts.

The Sumo Logic GIS for GuardDuty benchmark application is available in the Sumo Logic App Catalog. All Sumo Logic Enterprise license customers can install the app from the App Catalog within the product.


The Benchmark app analyzes baselines to benchmark, prioritize, and optimize security configurations and threat detection analysis for customers’ AWS environments. The rare one-off security events are then prioritized by comparing each of them to the baseline using statistical analysis and machine learning.

Prioritize and take action only on the threats that matter. This helps you improve and optimize your security monitoring, configuration, and compliance, and implement best practices across AWS accounts.
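The statistical idea behind this prioritization can be sketched simply: flag threat types whose local counts sit far above the global baseline. Sumo Logic’s actual baselining uses its machine learning models; the two-standard-deviation rule, the finding-type names, and the sample data below are all illustrative:

```python
from statistics import mean, stdev


def prioritize(local_counts, global_baselines):
    """Flag threat types more than 2 standard deviations above the global
    baseline. The threshold and baseline shape are illustrative assumptions."""
    flagged = []
    for threat, count in local_counts.items():
        samples = global_baselines[threat]
        mu, sigma = mean(samples), stdev(samples)
        if sigma and (count - mu) / sigma > 2:
            flagged.append(threat)
    return flagged


# Hypothetical global per-account daily counts for two threat types.
baselines = {
    "Backdoor:EC2/C&CActivity.B": [2, 3, 2, 4, 3],
    "Recon:EC2/PortProbe": [40, 38, 45, 42, 39],
}
# Local backdoor activity is far above baseline; port probes are normal.
local = {"Backdoor:EC2/C&CActivity.B": 12, "Recon:EC2/PortProbe": 41}
hot = prioritize(local, baselines)
```

In this sketch only the backdoor activity is flagged, mirroring the point made below: elevated activity relative to the global benchmark, not raw volume, is what warrants attention.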


Figure 1 – Benchmark app architecture using the Sumo Logic Global Intelligent Service.

AWS has done an amazing job with the monitoring framework in Amazon GuardDuty. The logs and events from Amazon CloudTrail, VPC Flow Logs, and DNS Logs are fed into GuardDuty, which analyzes these events and creates findings. These findings can be sent to CloudWatch and then be processed through AWS Lambda functions, or sent into Sumo Logic for further analysis.

The insight of merging global security intelligence and your local data allows you to benchmark local security threats against global threats, prioritize rare security events you may otherwise miss, and optimize your security configurations for industry best practices.

Benchmarking Security Threats on AWS

As more and more enterprises migrate their workloads to AWS, they are faced with learning new methodologies and tools to help them secure their cloud environments.

Amazon GuardDuty works hard on your behalf to detect threats, but how do you prioritize where to focus your resolution efforts? How do you figure out what’s normal or expected behavior?

Sumo Logic GIS for GuardDuty finds the signal in the noise by offering a view into what’s happening from a threat perspective in the broader AWS environment. The Global Threat Activity dashboard gives you a view into active threats, their targets, severity from a GuardDuty perspective, and spotlights unusual activity to take a proactive security approach.


Figure 2 – Global threat benchmark dashboard for Amazon GuardDuty.

From there, you can leverage the dashboard view that overlays global activity with your own activity to help you see how these differ.

For instance, if you’re seeing more backdoor activity than the global benchmark indicates, it could mean you’re currently targeted and may want to take a proactive defensive posture against those specific types of threats. This helps you prioritize future resource allocation and understand what’s normal.


Figure 3 – Threat posture view of baseline against local security data.

Another interesting benchmark is the rare active threat type benchmark. It’s relatively straightforward to figure out the top 10 attacks on your account. However, that classic single-tenant view entirely misses whether what’s happening to you is unique or a normal part of the global landscape.

There are many security issues you can only analyze if you have visibility into global threat activity, which helps you look for those rare threats in your own environment. Sumo Logic GIS for GuardDuty continuously monitors for threats that become active in your account and notifies you via dashboard or alerts.

Prioritizing Rare Events to Investigate

If you determine that threat activity in your account differs in a significant way from the baseline, you can use the threat detail view to isolate those threats to investigate and determine what your exposure is.

The benchmark app compares all active threats in your account to determine significant contributors to the gap. Some of these threats would have gone unnoticed using a classic security information and event management (SIEM) rules engine, or even single-tenant analytics because they’re missing the global context necessary to compare your threats with the global user-base.

Without this lens, many important threats may remain buried under mountains of common findings.


Figure 4 – Threat details of rare security event details.

Another important lens on your security is understanding rare threats that occur in your environment but rarely occur anywhere else.

A narrow view often misses whether something unique is occurring in your environment. For instance, an event such as a “TOR IP caller domain generation algorithm request” occurring in your environment that rarely occurs anywhere else probably warrants an investigation.
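That rarity test reduces to comparing local observations against global frequencies. The cutoff and the frequencies below are made up for illustration; the finding-type names are real GuardDuty finding types used only as sample data:

```python
def locally_unique_threats(local_threats, global_frequency, rarity_cutoff=0.01):
    """Return threats seen in this account that appear in fewer than
    `rarity_cutoff` of all accounts globally (illustrative threshold)."""
    return sorted(
        t for t in local_threats
        if global_frequency.get(t, 0.0) < rarity_cutoff
    )


# Hypothetical fraction of all accounts globally seeing each threat type.
global_freq = {
    "Recon:EC2/PortProbe": 0.62,             # common everywhere
    "Trojan:EC2/DGADomainRequest.B": 0.002,  # rare globally
}
local = {"Recon:EC2/PortProbe", "Trojan:EC2/DGADomainRequest.B"}
investigate = locally_unique_threats(local, global_freq)
```

Only the globally rare DGA-request finding survives the filter, which is the kind of event the paragraph above singles out for investigation.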

Addressing Security Gaps and Misconfigurations

Exposure to specific types of threats may indicate that some aspects of your configuration or security process require improvements.

Using Sumo Logic GIS for GuardDuty helps identify those gaps and enables you to perform root cause analysis to close potential gaps or set up compensating controls. The solution gives you access to all underlying data and benchmarks from GuardDuty, and also enables you to correlate that data with other sources such as AWS CloudTrail, Amazon VPC Flow Logs, and other data such as OS logs, containers, or databases.

Sumo Logic also enables you to integrate findings with third-party tools such as ticketing systems, collaboration tools, remediation tools, and more.

You can create alerts that trigger your standard security incident response processes, create incidents inside workflow tools for further investigation, or simply start collaboration with your team to address potential incidents.

Customer Success: ThoughtWorks

ThoughtWorks is a modern application company that has contributed significantly to the open source community around the continuous integration and continuous delivery (CI/CD) process of the software development life cycle.

Sumo Logic works with ThoughtWorks on the security of their stack, and they have been a preview customer of Sumo Logic’s benchmarking app. The baseline on GuardDuty is used by ThoughtWorks to drill down to the rare events and optimize security configurations to meet or exceed industry best practices.

“As a global consultancy, there are hundreds if not thousands of potential security threats and events that pass through our organization on any given day, making it a challenge to not only track, but prioritize how to handle these events,” says Philip Duldig, senior security analyst at ThoughtWorks.

“As an early adopter of Sumo Logic’s Global Intelligence Service for Amazon GuardDuty, the biggest value we’ve experienced is the ability to get actionable insights to prioritize and benchmark rare or non-frequent security events from our AWS workloads, so we can optimize our security posture. I also love that I can compare global benchmarking data with my local data, to see where we are stacking up.”


Sumo Logic Global Intelligent Service (GIS) for Amazon GuardDuty strengthens your security posture and helps you benchmark threat detection against the global AWS landscape.

With Sumo Logic GIS for GuardDuty, organizations can use machine learning and automatic baselines to create a proactive security operations hub.

If you’re already an Amazon GuardDuty and Sumo Logic customer, simply go to the Sumo Logic App Catalog and install the app Amazon GuardDuty Benchmark.

New customers can sign up for a free trial, send their GuardDuty findings to Sumo Logic, and deploy the Benchmark app from the management console.

For more security and DevSecOps-focused reads, check out the Sumo Logic Blog.

The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.



Sumo Logic – APN Partner Spotlight

Sumo Logic is an AWS Competency Partner. Its cloud-based machine data analytics service helps customers gain instant insights into their growing and complex pool of machine data.

Contact Sumo Logic | Solution Overview | Buy on Marketplace

*Already worked with Sumo Logic? Rate this Partner

*To review an APN Partner, you must be an AWS customer that has worked with them directly on a project.

from AWS Partner Network (APN) Blog

Why Your Company Should Become Security Experts on AWS


By Raveesh Chugh, Global Alliance Manager, Security Partners at AWS

Cloud security at Amazon Web Services (AWS) is the highest priority, and members of the AWS Partner Network (APN) offer hundreds of industry-leading products that are equivalent or identical to, or integrate with, existing controls in on-premises environments.

Security Partner Solutions are the top category in AWS Marketplace and help customers ensure their workloads are highly secure.

These validated APN Partner products complement existing AWS services and enable customers to deploy a comprehensive security architecture, in addition to providing a more seamless experience across cloud and on-premises environments.

APN Partners with security expertise help customers identify asset vulnerabilities and develop an organizational understanding to manage security risks in AWS customer systems, assets, and data.

If your company wants to help AWS customers with security, the new APN Navigate Security track provides APN Partners with a prescriptive journey to help you build expertise in cloud security solutions.

APN Navigate materials will help your team prepare for better engagement with key AWS experts, and set organizations on the path to achieve the AWS Security Competency.

Learn more about the APN Navigate Security track >>


Getting Started

If your company is interested in deepening its expertise in AWS Cloud security, two individuals must complete the APN Navigate technical track, and two individuals must complete the business track. This unlocks advanced content and trainings to help your team build a strong practice or solution based on AWS security best practices.

Technical Professionals: APN Security Navigate Track

In this course, you will learn about the AWS approach to cloud security. What does this mean when developing security practices on AWS? Dive deep into common cloud security solutions and core AWS services related to security. See how to take advantage of APN Security Navigate resources and best practices.

In this course, you will learn how to:

  • Articulate the AWS approach to cloud security.
  • Identify core AWS services for security solutions.
  • Understand common security architectures.
  • Familiarize your organization with APN Security Navigate next steps and resources.
  • Get started now (login required) >> 

Business Professionals: APN Security Navigate Track

In this course, you will learn about the demand for AWS security services and talent, in addition to common use cases. You will also review suggested best practices for developing a security capability that helps your organization differentiate through the AWS Security Competency.

In this course, you will learn how to:

  • Articulate the AWS approach to cloud security.
  • Identify common security use cases.
  • Build a security practice on AWS.
  • Position security on AWS to potential customers.
  • Prepare your organization for the AWS Security Competency.
  • Get started now (login required) >> 

Once your organization has taken these specialized trainings, you’ll receive a designated toolkit of resources recommended by AWS security subject matter experts to further deepen your knowledge of cloud security on AWS, along with key checkpoints to test your knowledge.

To learn more about the APN Navigate program, see the five phases of APN Navigate.

About APN Navigate

APN Navigate provides APN Partners with the knowledge and tools to become specialists in a solution, industry, workload, or service area on AWS.

Each Navigate track includes foundational and specialized e-learnings, advanced tools and resources, clear calls to action for both business and technical tracks, and “Apply Your Knowledge” checkpoints to help APN Partners measure their progress.

After you have leveraged all of Navigate’s tools and built a strong practice on AWS, you can set your business apart through the AWS Competency, AWS Managed Service Provider (MSP), or AWS Service Delivery programs. Each of these designations helps AWS customers identify and choose top APN Partners.

Join the AWS Partner Network (APN)

The APN is the global partner program for Amazon Web Services and is focused on helping APN Partners build successful AWS-based businesses and solutions. As an APN Partner, you will receive business, technical, sales, and marketing resources to help you grow your business and support your customers.

See all the benefits of being an APN Partner >>

Team Up with an APN Partner

APN Partners are focused on your success, helping customers take full advantage of the benefits AWS has to offer. With their deep expertise on Amazon Web Services, APN Partners are uniquely positioned to help your company at any stage of your Cloud Adoption Journey, and to help you achieve your business objectives.

Find an APN Partner that meets your needs >>

from AWS Partner Network (APN) Blog

WireWheel Leverages AWS SaaS Factory to Help Companies Solve Data Privacy Management


By Craig Wicks, Sr. Manager, AWS SaaS Factory


Do you know what personal data your organization collects from customers? Do you know what third parties are doing with that data? Do customers understand where their data goes?

For Software-as-a-Service (SaaS) companies responsible for data collection and processing on behalf of multiple organizations, privacy is a core requirement and a competitive differentiator. Knowing the answers to these questions is critical to meeting data management, security, and privacy requirements.

WireWheel is a group of privacy experts, data scientists, and business leaders that has set out to help companies meet this challenge. Its platform is available on AWS Marketplace.

An AWS Partner Network (APN) Advanced Technology Partner with the AWS Security Competency, WireWheel helps you reduce privacy-related risk, accelerate compliance, and foster trust with customers.

If you want to be successful in today’s complex IT environment, and remain that way tomorrow and into the future, teaming up with an AWS Competency Partner like WireWheel is The Next Smart.

Get Started with WireWheel

As new privacy laws toughen requirements for personal data collection and processing, WireWheel automates the most important tasks required to comply.

This includes the General Data Protection Regulation (GDPR), the California Consumer Privacy Act (CCPA), and many other new or evolving privacy regulations around the world.

Working with AWS SaaS Factory, WireWheel navigated technical and business decisions for launch and beyond.

AWS SaaS Factory provides APN Partners with resources that help accelerate and guide their adoption of a SaaS delivery model. SaaS Factory includes reference architectures for building SaaS solutions on AWS; Quick Starts that automate deployments for key workloads on AWS; and exclusive training opportunities for building a SaaS business on AWS.

The AWS SaaS Factory team sat down with Ed Peters, WireWheel’s Chief Technology Officer, to learn how they streamline data privacy management. We also asked what advice they have for other APN Partners building SaaS solutions on AWS.

Check out WireWheel on AWS Marketplace >>

Q&A with WireWheel

SaaS Factory: Can you tell us about your background and personal experience with cloud computing?

Ed Peters: I’ve been building enterprise software for about 20 years, with most of it being SaaS, so I’m deeply familiar with cloud computing. About seven years ago, I got involved in taking applications built for proprietary data centers and retargeting them for public cloud environments. I also have experience building for public clouds like AWS.

SaaS Factory: What products and solutions has WireWheel built on AWS?

Ed Peters: In 2018, we launched the WireWheel Platform for data privacy management. It helps IT, security, and privacy teams understand the personal data they have, where that data is stored and processed, and which third parties they’d contact if downstream deletion were required. AWS Solution Architects supported our platform development efforts with the Well-Architected Framework.

SaaS Factory: How does the WireWheel platform work?

Ed Peters: At WireWheel’s core is a sophisticated tasking engine that lets users collaborate in the collection of information. There’s a component of it which is oriented around humans working with one another, asking questions and capturing the answers for processing.

WireWheel also contains hooks so that answers can add new entities into the system database, be pre-filled with known content from the platform, and send and receive messages to participate in automated workflows. This helps customers make the most of their time and stop answering the same questions over and over again.

WireWheel leverages public cloud APIs for asset discovery, as well as proprietary technology for automated schema analysis. This helps speed up privacy analysis by giving direct access to information about where everything is running and kick-starting the classification of potentially sensitive data sets. This reduces the amount of time spent chasing down information during a data inventory project.
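WireWheel’s schema analysis is proprietary, but the idea of kick-starting classification of potentially sensitive data sets can be sketched in a few lines. The patterns and function below are purely illustrative assumptions, not WireWheel’s actual code:

```python
import re

# Illustrative-only patterns; WireWheel's real classifiers are proprietary.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def classify_column(sample_values):
    """Return the PII categories whose pattern matches any sampled value."""
    found = set()
    for value in sample_values:
        for category, pattern in PII_PATTERNS.items():
            if pattern.search(str(value)):
                found.add(category)
    return found
```

Tying classifications like these to infrastructure metadata (for example, which bucket or table a column lives in) is what enables insights such as flagging unencrypted storage of sensitive personal data.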

With these elements in place, our customers can get insights about where the risk lies in their systems. For instance, by tying data classification with infrastructure information, customers can identify unencrypted storage of sensitive personal data, which is a red flag for regulators and can trigger large fines. By relating a detailed inventory of a cloud environment directly to required privacy analysis, customers can identify areas where they may have “unknown unknowns,” which refers to significant systems that have not been subject to the proper scrutiny.

As a proof point, a team of just three people at Under Armour used WireWheel to involve hundreds of employees and dozens of vendors in a privacy program that was recognized by the International Association of Privacy Professionals.

SaaS Factory: What’s the opportunity for customers?

Ed Peters: The majority of customers we talk to say they are overwhelmed and don’t know where to begin when it comes to establishing a solid privacy program. Even those who’ve done the required analysis are flummoxed by recent changes in legislation and unsure how to demonstrate to customers and regulators that they’re being responsible data stewards.

We think there’s a common-sense backbone to all privacy requirements that will make sure you’re doing the right thing with personal data, regardless of regulatory evolution.

You need to know: (1) what kind of personal data are you collecting and hosting; (2) where is it being stored and processed; (3) what third parties are you giving it to; and (4) why is all this a legitimate use of that data? Companies that have lost track of the answers to one or more of those questions are at risk of compromising the personal data they’re responsible for.

One industry trend is the desire of companies to find a fully automated privacy solution. Currently, no solution can deliver on that demand or promise. A critical element to privacy compliance and turning privacy into a business generator is human analysis and decision-making around core privacy issues. WireWheel’s technology automates processes that surface the most important information to inform those key decisions.

Customers operating in public cloud environments are in a better position than ever to answer these questions. The rich amount of metadata available about their infrastructure makes it easy to perform due diligence to establish a baseline of compliance, and then establish solid governance programs to keep from drifting off course.

SaaS Factory: How does WireWheel’s solution address this for customers?

Ed Peters: The WireWheel platform helps simplify, structure, and automate data protection and privacy compliance, turning a compliance effort into a competitive and business advantage. We simplify all privacy regulations down to five key actions that companies must take to build trust with their customers and meet regulators’ requirements:

  • Data flow mapping.
  • Data discovery and classification.
  • Vendor management, scoring and approval for privacy.
  • Documentation (privacy threshold, privacy impact, and data protection impact assessments, and more).
  • Subject access requests.

By focusing on these actions through different workflows, we allow customers to:

  • Demonstrate ethically sourced data in data inventories.
  • Provide customer-facing privacy portals.
  • Create a self-service internal privacy management tool.
  • Demonstrate transparency and build trust through shared reports/documentation.
  • Create easily manageable and automated data governance processes.

SaaS Factory: Can you walk us through the architecture? What AWS services are key?

Ed Peters: Building off the workflows we just talked about, we strive to keep our architecture simple. Our core application is a set of microservices packaged as Docker containers.

We originally self-hosted on Amazon Elastic Compute Cloud (Amazon EC2) instances, but we became early adopters of Amazon Elastic Container Service for Kubernetes (Amazon EKS) and that has provided a ton of cost savings and manageability benefits.

Our two primary data stores are Amazon S3 for unstructured data and MongoDB for documents. S3 is a well-known workhorse, and we were able to take advantage of hosting services from MongoDB, an AWS Solution Provider. The combination of these two technologies has the scale and manageability we need to keep our operations light.

We’ve also been dipping our toes in the serverless water. We have a couple of document transformation and scanning features we’ve deployed as AWS Lambda functions listening to our S3 buckets. That provided great benefits from an encapsulation perspective, and also huge scalability when we needed it during large-scale migrations.
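The pattern Ed describes is a standard S3 event-driven Lambda. As an illustrative sketch (the `scan_document` step is hypothetical; WireWheel’s actual functions are not public), such a handler might look like:

```python
import urllib.parse

def handler(event, context):
    """Entry point for an S3-triggered Lambda: pull bucket/key out of each
    event record and hand them to a (hypothetical) document scanner."""
    items = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 event keys are URL-encoded, so decode before use.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        # scan_document(bucket, key) would fetch and scan the object here.
        items.append({"bucket": bucket, "key": key, "status": "queued"})
    return {"processed": len(items), "items": items}
```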

Of course, we also use core technologies like AWS CloudFormation for auto scaling, Amazon CloudWatch for alerting, and AWS CloudTrail for logging.

Working with AWS, we have also focused on using AWS Security’s Shared Responsibility Model to demonstrate “Of the Cloud” and “In the Cloud” responsibility for a Privacy Shared Responsibility Model. We’ve identified 12 additional services besides the AWS compute, storage, and database services that WireWheel can leverage for customers. WireWheel also supports a customer’s use of Amazon Macie and AWS Glue.


SaaS Factory: What technical challenges did you face building this SaaS solution on AWS?

Ed Peters: We faced multiple challenges, but AWS SaaS Factory worked with us hand-in-hand to solve them and turn them into opportunities.

On the business side, we faced the challenge of understanding how to engage cloud customers and bring the tool to them as an early-stage startup. We also faced the challenge of determining how to price our product for cloud customers. On the technical side, we chose to invest early in our relationship with AWS, so wanted to ensure we were architected to AWS’s standards for our testing and production environments.

While we are capable of supporting on-premises or hybrid cloud customers, the true power to comply with data protection and privacy requirements comes from being deployed in the cloud. When a customer is using AWS for their IaaS or PaaS, WireWheel dynamically maps their environment, conducts metadata analysis of data stores and compute, and creates privacy-related insights and alerts.

SaaS Factory: What support did AWS SaaS Factory provide your team?

Ed Peters: Our experience with AWS SaaS Factory has been incredible, from multiple angles. AWS has been a fully committed partner and has top quality talent to work with. Your focus on the customer matches with our values.

On the technology front, the SaaS Factory team supported our Well-Architected Review and helped us connect with AWS ProServ to get advice about transitioning our original architecture to a Kubernetes environment. On the business side, SaaS Factory helped us work with AWS Marketplace to make our product available to all AWS customers and get into a pilot that combined contract and consumption pricing models.

AWS has an extensive partner program to help APN Partners build, market, and sell their products or services. From the WireWheel perspective, we have clearly seen AWS is willing to invest in partner relationships in terms of time, money, and other resources to help us succeed.

AWS SaaS Factory was a critical element to helping us build our product and to position us to go-to-market with AWS and sell our product on AWS Marketplace.

SaaS Factory: What would you tell others planning to build a SaaS solution on AWS?

Ed Peters: The time for a SaaS company to engage with AWS is from the start. AWS knows its technology better than anyone and will provide expert guidance and even cost-saving advice. Especially for a SaaS product builder, AWS has become a hub for SaaS offerings.

AWS also helped us get ready to go to market, providing advice on how best to architect our development, test, and production environments on AWS. They helped us prepare for a general offering in AWS Marketplace and conducted joint marketing efforts together, including webinars and promotion of a whitepaper and e-book.

As a partner for a growing business that is focused on the transition to the cloud, AWS has been fantastic. The AWS offerings give our customers a chance to really do privacy right, from the start.

SaaS Factory: What are your future plans with AWS? 

Ed Peters: We are continuing to build and improve our product on AWS and deepening our partnership with AWS. We have achieved the AWS Security Competency and are focused on expanding our sales through AWS Marketplace.

We see AWS Marketplace’s effort to expand the Enterprise Contract Program (ECP) as a great service for customers to help reduce sales procurement frictions. We also look forward to special announcements with the AWS Activate team in helping support startups. We have benefited from the AWS Startup team’s efforts and want to pay it forward to other startups who are building their products on AWS like we did.

Learn More About AWS SaaS Factory

APN Partners are encouraged to reach out to their account representative to inquire about additional engagement models and to work with the AWS SaaS Factory team.

Additional technical and business best practices can be accessed via the AWS SaaS Factory website >>

ISVs that are not APN Partners are encouraged to subscribe to the SaaS on AWS email list to receive updates about upcoming events, content launches, and program offerings.



WireWheel – APN Partner Spotlight

WireWheel is an AWS Security Competency Partner. Its platform automatically discovers and maps AWS services for privacy compliance, including GDPR and more.

Contact Wirewheel | Solution Overview | AWS Marketplace

*Already worked with WireWheel? Rate this Partner

*To review an APN Partner, you must be an AWS customer that has worked with them directly on a project.

from AWS Partner Network (APN) Blog

Driving Continuous Security and Configuration Checks for Amazon EKS with Alcide Advisor

By Yaniv Peleg Ysabari, Head of Product Management at Alcide
By Paavan Mistry, Specialist Security Solutions Architect at AWS


As Kubernetes gradually becomes the leading open-source platform for automating deployment, scaling, and management of containerized applications, more and more Amazon Web Services (AWS) customers are using Kubernetes as their main container orchestration platform.

On AWS, you can easily adopt Kubernetes with a managed service like Amazon Elastic Container Service for Kubernetes (Amazon EKS), or alternatively provision and manage custom Kubernetes clusters with tools such as Kubernetes Operations (Kops).

In this post, we will go over some of the Kubernetes controls that we believe can greatly improve your application security: secret access, detecting Kubernetes vulnerabilities, and running checks specific to Amazon EKS clusters.

We also explain how Alcide Advisor helps ensure your Amazon EKS cluster, nodes, and pods configuration are tuned to run according to security best practices and internal guidelines.

Alcide is an AWS Partner Network (APN) Advanced Technology Partner. As a network security leader, Alcide empowers DevSecOps with code-to-production continuous security, helping customers discover, manage, and enforce security policies for workloads running in Kubernetes.


Traditional application-level security aims to find issues before they expand, and involves practices such as code reviews, scans, and penetration tests. While these practices are still relevant, they are no longer sufficient in a Kubernetes environment.

Kubernetes provides the freedom to rapidly ship applications by minimizing deployment and service update cycles from weeks to days, and sometimes even hours. The velocity of application updates and deployment, however, requires a continuous security approach that involves integrating tools as early as possible in the deployment pipeline.

Alcide Advisor is an agentless service for Kubernetes audit and compliance that’s built to ensure a frictionless and secure DevSecOps flow by hardening the development stage before moving to production.

Alcide Advisor serves as a one-stop-shop to continuously discover, mitigate, and validate Amazon EKS cluster risks.

With Alcide Advisor, you can cover the following security checks:

  • Kubernetes vulnerability scanning.
  • Hunting misplaced secrets, or excessive secret access.
  • Workload hardening from Pod Security to network policies.
  • Istio security configuration and best practices.
  • Ingress controllers for security best practices.
  • Kubernetes API server access privileges.
  • Kubernetes operators security best practices.

Flexible Deployment and Operation

Alcide Advisor can be easily deployed on your Amazon EKS clusters, or any other managed Kubernetes clusters. The deployment process is frictionless, takes less than five minutes, and does not require an agent.

Alcide Advisor can run in two modes:

  • Monitor: Continuously scan your cluster to monitor for existing and new security risks.
  • CI/CD: Natively integrated with your existing CI/CD automation workflows, such as AWS CodePipeline and Jenkins, to ensure new deployments are secured and don’t introduce any new risks.

Discovering and Mitigating Risks

Alcide Advisor scan results display the latest scan on every monitored cluster. By default, a scan runs on every monitored cluster every 24 hours and can be configured as required.

Each result includes a check type with a short description of the detected issue, the category the check belongs to, the cluster, namespace, and resource the issue was detected on, as well as its severity.

In addition, each result provides additional information such as a detailed description, recommended steps you can take to fix the issue based on security best practices, and links to external references with additional details.

On the Alcide dashboard, you can see the time that an issue was first detected through the First Seen table column, to better understand if the issue was already found in previous scans and was not yet fixed.


Figure 1 – Alcide Advisor continuous security summary dashboard.

You can sort and filter any of the table columns, search for specific results to prioritize mitigation actions, and group results by type, resource, or namespace.

Ongoing Checks Provided by Alcide Advisor

Let’s dive into three examples of the ongoing checks the Alcide Advisor provides.

Example 1: Hunting Misplaced Secrets, or Excessive Secret Access

The Kubernetes secret object is designed to store and manage sensitive information, such as passwords, OAuth tokens, and SSH keys. Placing this information in plain text or in the open (such as in config maps) leaves it exposed to unauthorized users and poses a greater risk for your Kubernetes and cloud provider environments.

Alcide Advisor scans for any secrets, API keys, and passwords that may have been misplaced in pod environment variables or config maps. In addition, it verifies the RBAC permissions that define who can read secret objects.
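To make this check concrete, here is a minimal sketch of hunting for AWS access key IDs (which follow the well-known `AKIA...` format) in literal pod environment variables. This is an illustration only, not Alcide’s implementation:

```python
import re

# AWS access key IDs follow the well-known AKIA-prefixed format.
ACCESS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_exposed_keys(pod_spec):
    """Flag container env vars whose literal values look like AWS access keys.

    `pod_spec` is a dict shaped like the `spec` of a Kubernetes Pod; env vars
    populated via valueFrom (e.g. secretKeyRef) carry no `value` and are skipped.
    """
    findings = []
    for container in pod_spec.get("containers", []):
        for env in container.get("env", []):
            value = env.get("value", "")
            if value and ACCESS_KEY_RE.search(value):
                findings.append((container["name"], env["name"]))
    return findings
```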


Figure 2 – Secrets found in Pod environment variables.

The information displayed in Figure 2 shows a critical alert stating that an AWS access key was found in plain text in a Pod environment variable. To remediate this risk, DevOps should store the key in a secret object to ensure it is properly secured.

Example 2: Kubernetes Vulnerabilities Scan

While Kubernetes drastically simplifies the orchestration of your most sensitive containerized environments, it’s not bulletproof to critical security vulnerabilities that require quick detection and response.

An example of a serious vulnerability found recently is the privilege escalation flaw tracked as CVE-2018-1002105. This vulnerability allows users, through a specially crafted request, to establish a connection through the Kubernetes API server to a backend server and then send arbitrary requests over that same connection directly to the backend, authenticated with the Kubernetes API server’s TLS credentials that were used to establish the backend connection.

Alcide Advisor scans your cluster for known vulnerabilities on the master API server and worker node components, including container runtime. This has great benefit for teams using Kops on AWS, as it signals when an upgrade is required.
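For CVE-2018-1002105 specifically, the fix shipped in Kubernetes 1.10.11, 1.11.5, and 1.12.3, so a version comparison of this kind is the essence of such a scan. The following is a simplified sketch, not Alcide’s actual logic:

```python
# CVE-2018-1002105 was patched in Kubernetes 1.10.11, 1.11.5, and 1.12.3.
PATCHED = {(1, 10): 11, (1, 11): 5, (1, 12): 3}

def vulnerable(version):
    """Return True if a vX.Y.Z cluster version predates the patched releases."""
    major, minor, patch = (int(p) for p in version.lstrip("v").split("."))
    if (major, minor) in PATCHED:
        return patch < PATCHED[(major, minor)]
    if major == 1 and minor < 10:
        return True   # older release lines did not receive a backported fix
    return False      # 1.13 and later shipped with the fix
```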


Figure 3 – Continuous security alert on Kubernetes master API server vulnerability.

The alert you see in Figure 3 states that your current Kubernetes API server version is exposed to a critical known CVE. Alongside a reference for further reading, the recommendation is to upgrade the Kubernetes cluster to the newest available version.

Example 3: Amazon EKS Cluster Checks

Alcide Advisor provides a set of checks related to Amazon EKS clusters. For example, kiam bridges Kubernetes Pods with AWS Identity and Access Management (IAM) and makes it easy to assign short-lived AWS security credentials to your application.

Kiam runs as an agent on each node in your Kubernetes cluster to allow cluster users to associate IAM roles to Pods. Alcide Advisor scans Amazon EKS clusters and checks for kiam deployment, as you can see in Figure 4 below.
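A minimal sketch of such a check might simply look for the `iam.amazonaws.com/role` annotation that kiam uses to associate an IAM role with a pod. This is an illustration, not Alcide’s implementation:

```python
KIAM_ROLE_ANNOTATION = "iam.amazonaws.com/role"

def pods_missing_iam_role(pods):
    """Return the names of pods lacking the kiam role annotation.

    Each pod is a dict with Kubernetes-style `metadata` (name, annotations).
    """
    missing = []
    for pod in pods:
        annotations = pod.get("metadata", {}).get("annotations") or {}
        if KIAM_ROLE_ANNOTATION not in annotations:
            missing.append(pod["metadata"]["name"])
    return missing
```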


Figure 4 – Configuring kiam to assign IAM roles to pods running in a Kubernetes cluster.


The App-formation feature allows you to create a baseline profile on a specific cluster, and get scan results only on issues that deviate from the baseline. This helps DevOps focus on relevant issues and assets that require attention.

App-formation supports the following use cases:

  • Integration with CI/CD pipeline.
  • Scalable review of many clusters against a blueprint profile.
  • Re-runs against the same cluster, alerting on changes compared to the baseline profile.

VIDEO: Alcide Advisor App Formation (1:10)

Here’s how to create a baseline profile for your Amazon EKS cluster, and run a scan against that baseline.

Getting Started with Alcide Advisor

To install the Alcide Advisor scanner component, follow the steps below:

  • Sign up for a Free Trial.
  • Log in to Alcide using the credentials you’ve received.
  • Run through the onboarding wizard and configure the cluster you wish to run Advisor scan on.
  • In Steps 1-5, you’ll name the Amazon EKS cluster you want to add, and then connect it to a logical entity representing the AWS account it’s running on.
  • In Step 6, start by downloading the Alcide CLI tool and run commands 1-3, 5.

Once installed, Alcide Advisor automatically starts scanning your Amazon EKS cluster and provides an immediate scan report. This outlines your cluster’s configuration issues, with actionable recommendations for every scan result.

From this point on, unless configured differently, Alcide Advisor will automatically scan that Amazon EKS cluster every 24 hours.

Alcide Advisor provides many other checks to help you create a secured Amazon EKS deployment and meet security best practices. This includes Cluster Ingress Controllers and Kubernetes Operators Security best practices, Istio security configuration, workload labeling and annotations scheme conformance, Kubernetes dashboard permissions, and more.


Figure 5 – Report of all risks and misconfiguration issues found.

The end result is that you get a report with a summary of the Amazon EKS cluster’s compliance and security status, as well as a detailed list of identified compliance and security issues, with description and recommendation for a quick remediation.


The inherent complexities of running cloud-native applications on platforms such as Kubernetes, especially in a multi-cluster environment, are growing. Alcide Advisor creates a snapshot of your cluster’s security and compliance posture with actionable recommendations, so that security drift is not discovered only at runtime.

In this post, we demonstrated how Alcide Advisor provides Kubernetes security and compliance checks at all levels by answering questions like “Are all my clusters conformant?” and “Am I pulling software from authorized image registries?”

Alcide Advisor also allows DevOps and security teams to discover misplaced secrets or excessive secret access, identify Kubernetes vulnerabilities, and perform Amazon EKS cluster checks. These teams can then benefit from continuous, always-on, dynamic analysis of their deployments.

Get Alcide Advisor for free by signing up for a 30-day trial.

You can also download the Solution Brief, and watch the on-demand webinar: Providing Continuous Kubernetes Security through Your CI/CD Pipeline >>



Alcide – APN Partner Spotlight

Alcide is an APN Advanced Technology Partner. They empower DevSecOps with code-to-production continuous security, to discover, manage, and enforce security policies for workloads running in Kubernetes.

Contact Alcide | Solution Overview

*Already worked with Alcide? Rate this Partner

*To review an APN Partner, you must be an AWS customer that has worked with them directly on a project.

from AWS Partner Network (APN) Blog

Say Hello to 79 New AWS Competency, MSP, and Service Delivery Partners Added in May

The AWS Partner Network (APN) is the global partner program for Amazon Web Services (AWS). We enable APN Partners to build successful AWS-based businesses, and we help customers identify top APN Partners that can deliver on core business objectives.

To receive APN program designations such as AWS Competency, AWS Managed Services Provider (MSP), and AWS Service Delivery, organizations must undergo rigorous technical validation and assessment of their AWS solutions and practices.

These designations help customers identify and choose specialized APN Partners that can provide value-added services and solutions. Guidance from these skilled professionals leads to better business and bigger results.

Team Up with AWS Competency Partners

If you want to be successful in today’s complex IT environment, and remain that way tomorrow and into the future, teaming up with an AWS Competency Partner is The Next Smart.

The AWS Competency Program verifies, validates, and vets top APN Partners that have demonstrated customer success and deep specialization in specific solution areas or segments.


These APN Partners were recently awarded AWS Competency designations:

AWS DevOps Competency

AWS Digital Customer Experience Competency

AWS Education Competency

AWS Financial Services Competency

AWS Government Competency

AWS Healthcare Competency

AWS Migration Competency

AWS Oracle Competency

AWS SAP Competency

AWS Security Competency

Team Up with AWS Managed Service Providers

The AWS Managed Service Provider (MSP) Partner Program recognizes leading APN Consulting Partners that are highly skilled at providing full lifecycle solutions to customers.

Next-generation AWS MSPs can help enterprises invent tomorrow, solve business problems, and support initiatives by driving key outcomes. AWS MSPs provide the expertise, guidance, and services to help you through each stage of the Cloud Adoption Journey: Plan & Design > Build & Migrate > Run & Operate > Optimize.

Explore 7 reasons why AWS MSPs are fundamental to your cloud journey >>

Meet our newest AWS Managed Service Provider (MSP):

Team Up with AWS Service Delivery Partners

The AWS Service Delivery Program identifies and endorses top APN Partners with a deep understanding of specific AWS services, such as AWS CloudFormation and Amazon Kinesis.

AWS Service Delivery Partners have proven success delivering AWS services to end customers. To receive this designation, APN Partners must undergo service-specific technical validation by AWS Partner Solutions Architects, and complete a customer case business review.

Introducing our newest AWS Service Delivery Partners:

Amazon Connect Partners

AWS Direct Connect Partners

Amazon DynamoDB Partners

Amazon EC2 for Microsoft Windows Server Partners

Amazon QuickSight Partners

Want to Differentiate Your Partner Business? APN Navigate Can Help.

If you’re already an APN Partner, carve your Cloud Adoption Journey by leveraging APN Navigate for a prescriptive path to building a specialized practice on AWS.

APN Navigate tracks offer APN Partners the guidance to become AWS experts and deploy innovative solutions on behalf of end customers. Each track includes foundational and specialized e-learnings, advanced tools and resources, and clear calls to action for both business and technical tracks.

Learn how APN Navigate is a partner’s path to specialization >>

Learn More About the AWS Partner Network (APN)

APN Partners receive business, technical, sales, and marketing resources to help you grow your business and better support your customers.

See all the benefits of being an APN Partner >>

Find an APN Partner to Team Up With

APN Partners are focused on your success, helping customers take full advantage of the business benefits AWS has to offer. With their deep expertise on AWS, APN Partners are uniquely positioned to help your company.

Find an APN Partner that meets your needs >>

from AWS Partner Network (APN) Blog

Integrating Next-Gen Firewalls with VMware Cloud on AWS

By Aarthi Raju, Principal Solutions Architect at AWS
By Nicolas Vilbert, Lead Systems Engineer at VMware

As customers start to build their hybrid network architectures, they often ask us how they can leverage next-generation firewalls to protect their data in VMware Cloud on AWS, similar to what they do in their native Amazon Web Services (AWS) environment or on-premises.

Some of these customers already leverage AWS Partner Network (APN) solutions like Checkpoint, Palo Alto Networks, or other firewall vendors and want to leverage the same partner solutions in their VMware Cloud on AWS environments.

This post covers the design required to leverage a next-generation firewall with VMware Cloud on AWS. A next-gen firewall performs deep packet inspection, moving beyond port/protocol inspection and blocking to add application-level inspection.

Network Architecture for VMware Cloud on AWS

VMware Cloud on AWS, the hybrid cloud solution jointly developed by AWS and VMware, already includes two edge firewalls—the Management Gateway, and the Compute Gateway.

The Management Domain is protected by a Management Gateway (MGW), which is an NSX Edge Security gateway that provides north-south network connectivity for the vCenter Server and NSX Manager running in the Software-Defined Data Center (SDDC).

The Compute Domain, which includes compute workloads created by the customer, is protected by a Compute Gateway (CGW). This provides north-south network connectivity for virtual machines (VMs) running in the SDDC.

By north-south, we mean traffic coming to and from the internet to the VMware Cloud on AWS SDDC. This post will not cover east-west firewalling, which refers to traffic within the SDDC.


Figure 1 – Management and Compute Gateways.

Both MGW and CGW provide firewall capabilities. Today, however, they offer only Layer 4 (L4) firewalling, meaning they inspect traffic only up to Layer 4 of the OSI model. They can inspect only IP addresses (source and destination) and TCP/UDP ports, and filter the traffic based on these criteria.

AWS Security Groups are similar to L4 virtual firewalls and behave the same way.

For internet-facing applications or internet-bound traffic, you might want to leverage an L7 firewall. That’s a firewall capable of inspecting the packet payload and URL, and dropping traffic whose content or URL destination does not adhere to the company’s security policy.
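To make the distinction concrete, here is a toy sketch: an L4 rule can see only addresses and ports, while an L7 rule can also evaluate the URL and payload. The blocked ports and URL terms are hypothetical policy values, not from any real firewall product:

```python
# Hypothetical policy values for illustration.
BLOCKED_PORTS = {23, 3389}                      # L4 rules: ports/protocols only
BLOCKED_URL_TERMS = ("/admin", "filesharing")   # L7 rules: content and URLs

def l4_allows(src_ip, dst_ip, dst_port):
    """An L4 firewall sees only addresses and ports."""
    return dst_port not in BLOCKED_PORTS

def l7_allows(dst_port, url, payload):
    """An L7 firewall can additionally inspect the URL and packet payload."""
    if dst_port in BLOCKED_PORTS:
        return False
    return not any(term in url or term in payload for term in BLOCKED_URL_TERMS)
```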


Figure 2 – Differences between L4 and L7 firewalls.

L7 firewalls are sometimes referred to as IPS/IDS, context-aware firewalls, next-gen firewalls, or application firewalls. Popular L7 firewall vendors include Palo Alto Networks, Check Point, and Cisco.

Integrating a Next-Gen Firewall with VMware Cloud on AWS

Let’s walk through our potential options of how to integrate a next-gen firewall with VMware Cloud on AWS.

Option 1: Inspect VMware Cloud on AWS traffic via the on-premises next-gen firewall

If you use VMware Cloud on AWS as an extension of your data center and maintain an on-premises presence, you may want the traffic to be inspected by an on-premises web proxy and internet L7 firewall.

In that case, it's straightforward: advertise the default route over the virtual private network (VPN) or AWS Direct Connect, and all internet-bound traffic from the VMware Cloud on AWS VMs will go via the on-premises L7 appliance.


Figure 3 – Outbound internet traffic inspected by on-premises L7 firewall.

If you want to expose web-facing applications on VMware Cloud on AWS, you can advertise the public IPs of these VMs from your internet-facing router and NAT these public IPs to the private IPs of the VMware Cloud on AWS VMs (VMC-VMs).

Inbound traffic from an external user goes through the on-premises internet firewall, where the destination IP is NAT'ed to the private IP of the VMC-VM and carried across Direct Connect or the VPN to the VMC-VM.

VMware Cloud on AWS Firewalls-4

Figure 4 – Inbound internet traffic inspected by on-premises L7 firewall.

Option 2: Next-gen firewall deployed within a transit VPC in native AWS

Alternatively, we can leverage the concept of a transit VPC, a common strategy for connecting multiple, geographically dispersed VPCs and remote networks in order to create a global network transit center.

Transit VPC simplifies network management and minimizes the number of connections required to connect multiple VPCs and remote networks.


Figure 5 – Transit VPC on AWS.

The transit VPC is a "hub VPC" that connects to "spoke VPCs" via VPN. A next-gen firewall is then deployed within the transit VPC as an Amazon Elastic Compute Cloud (Amazon EC2) instance. All of the traffic leaving the spoke VPCs travels to the hub/transit VPC and is inspected by the next-gen firewall.

So how does it work with VMware Cloud on AWS? The VMware Cloud on AWS SDDC would simply be another "spoke VPC."

Remember, the “ENI-Connected VPC” is the one we connected to via the Elastic Network Interface (ENI) when we deployed the SDDC. This is typically used for services such as Active Directory, Amazon FSx, or back-ups using Amazon S3. The ENI-Connected VPC would not be connected to the transit VPC; instead, it remains reachable to the VMware Cloud on AWS SDDC via ENI.


Figure 6 – Transit VPC with VMware Cloud on AWS.

For our testing purposes, we used a Palo Alto Networks appliance in our transit VPC. This is the ideal option for customers already using a transit VPC, as VMware Cloud on AWS would just be another spoke.

All of the traffic from VMware Cloud on AWS to either spoke VPCs or the internet would transit through the secure transit VPC.

VMware Cloud on AWS Firewalls-7

Figure 7 – Transit VPC, next-gen firewall, and VMware Cloud on AWS architecture.

Option 3: Leverage NSX Service Insertion to insert a next-gen firewall

This third model is not available yet, but it's on our roadmap, which you can review here. We are actively working on a feature to insert a virtual next-gen firewall through our NSX-T Partner Service Insertion platform. This is nothing new if you've followed NSX: it's been available for years on NSX-V and for a few months on NSX-T.

When completed, this model provides the following benefits:

  • L7 inspection of both outbound and inbound traffic.
  • Inspection of traffic to both compute VMs and management VMs.
  • Faster performance and reduced latency.


Customers who are looking to leverage APN Partner solutions within VMware Cloud on AWS can utilize one of the above-mentioned options to achieve this architecture.

This enables customers to perform deep packet inspection for applications running both with VMware Cloud on AWS and for native AWS services.

Additional Resources



VMware – APN Partner Spotlight

VMware is an APN Advanced Technology Partner. Its software spans compute, cloud, networking, security, and digital workspace, streamlining the journey for organizations to become digital businesses.

Contact VMware | Solution Overview

*Already worked with VMware? Rate this Partner

*To review an APN Partner, you must be an AWS customer that has worked with them directly on a project.

from AWS Partner Network (APN) Blog

Migrating Applications Seamlessly to AWS with Micro Focus PlateSpin Migrate


By Scott Kellish and Jim Huang, Partner Solutions Architects at AWS
By Jo De Baer, Senior Product Manager at Micro Focus


Along a customer’s digital transformation journey to the Amazon Web Services (AWS) Cloud, server migration is one of the most common application migration cases.

As an example, imagine a web shop that runs on two web servers, two application servers, and one database server. It’s clear that when you migrate the web shop application to AWS, you need to migrate these five servers.

A key to a customer’s migration success is mobility tooling for migration automation and process consistency, which is particularly critical when a customer has a large number of applications running on hundreds or thousands of servers.

In this post, we will describe common characteristics of server mobility technology and look at PlateSpin Migrate, a server portability solution built by Micro Focus, an AWS Partner Network (APN) Advanced Technology Partner.

We will also provide a case study to show how server mobility tools like PlateSpin Migrate can help customers accelerate mass workload migration to AWS.

Together with AWS, Micro Focus created the PlateSpin Migrate on AWS Quick Start to allow fast and easy provisioning of PlateSpin Migrate on AWS.


PlateSpin Migrate is a powerful server portability solution that automates the process of migrating physical server machines or virtual host servers over the network to enterprise cloud platforms like AWS—all from a single point of control.

When migrating such servers, PlateSpin Migrate refers to these servers as "workloads." A workload in this context is the aggregation of the software stack installed on the server: the operating system, applications, and middleware, plus any data that resides on the server volumes.

PlateSpin Migrate provides you with a mature and proven solution for migrating, testing, and rebalancing workloads across infrastructure boundaries.

Server Migration Technology

Server mobility technology enables workload migration with a number of technical characteristics.

Here, we start by listing several common characteristics to help readers better understand server migration tools, and help guide in selecting the right migration tool according to specific application migration requirements. In addition, we call out how PlateSpin Migrate implements the functionality.

Tool Type

Migration tools are of two different types with respect to where they orchestrate workload migration. Software-as-a-service (SaaS) tools are typically hosted in the cloud, while a server-based migration service runs either in the source or target migration environment. The former is a multi-tenant service managed by the tool vendor, whereas the latter is managed by tool users. PlateSpin Migrate is server-based.

Tool Deployment Method

Server-based migration tools require users to deploy and configure them in the migration environment. Deployment automation is essential for efficiency and quality of server-based migration tools. PlateSpin Migrate can be deployed and configured manually on-premises, or deployed and configured automatically in the AWS target environment via the PlateSpin Migrate on AWS Quick Start.

Source Environment Support

There can be physical as well as virtual servers for migration from the source environment. Server mobility tools are expected to support both types of servers. PlateSpin Migrate supports both physical and virtual servers for migrations.

Workload Operating System

Mobility tools are expected to migrate servers running Linux and Windows operating systems. Some tools support any OS platform, as long as it’s an x86 processor architecture. Some are capable of migrating legacy systems such as Solaris and Windows 2003. PlateSpin Migrate supports migration of servers running Linux or Windows operating systems.

OS License Handling

As part of server migration, customers may choose to “bring your own license” (BYOL) to AWS, or may want to leverage AWS-managed licenses. Thus, server mobility technology may provide some form of OS license migration schemes. PlateSpin Migrate supports both BYOL and AWS-managed licenses.

Target Landing Configuration

Prior to server migration, users may plan and set up a landing environment with an Amazon Virtual Private Cloud (Amazon VPC), subnets, network ACLs, and security groups for target virtual machines (VMs). Server mobility technology should let users browse and select a target landing environment; PlateSpin Migrate provides this capability.

Data Protection

Migration data in transit and in the target environment should be protected. Mobility tools protect data in transit through native data encryption/decryption functions, or via secured transport such as a virtual private network (VPN). PlateSpin Migrate provides native encryption with 128-bit Advanced Encryption Standard (AES).

For migrated server data on Amazon EBS volumes, mobility tools should let users configure Amazon EBS encryption options before the migration starts; PlateSpin Migrate provides exactly this option.

Application Dependency Support

From the customer’s perspective, a unit of migration is an application that is implemented by a set of servers. Mobility tools help users to group and migrate servers that implement one application. PlateSpin Migrate provides workload tagging functionality for application grouping.

Mode of Operation: Agent-Based or Agent-Less

Agent-based migration technology utilizes a software component installed in each of the source servers to perform the migration, while agent-less migration tools don’t require this. PlateSpin Migrate utilizes a software component, the PlateSpin Migrate agent, which is installed in each of the source servers. The agent is automatically removed after the migration is completed.

Data Replication at Block or File Level

During server migration, data is replicated from the source server to the target server at storage block level or application file level. Customers should select a server mobility tool that supports either or both of the replication methods based on their migration needs. PlateSpin Migrate supports both. Block-based replications are faster, but file-based replications allow additional configuration options (e.g. volume resizing).
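The block-based approach can be sketched in a few lines. The following is an illustrative toy model (not PlateSpin code): hash fixed-size blocks on both sides and re-send only the blocks whose hashes differ, which is the essence of block-level incremental replication.

```python
import hashlib

BLOCK = 4  # toy block size for demonstration; real tools use much larger blocks

def changed_blocks(source: bytes, target: bytes, size: int = BLOCK):
    """Return indices of fixed-size blocks whose contents differ between
    source and target -- only these blocks would need to be replicated."""
    n = max(len(source), len(target))
    return [i // size
            for i in range(0, n, size)
            if hashlib.sha256(source[i:i + size]).digest()
               != hashlib.sha256(target[i:i + size]).digest()]

print(changed_blocks(b"aaaabbbbcccc", b"aaaaXbbbcccc"))  # -> [1]
```

File-based replication works analogously but at file granularity, which is why it can accommodate reconfiguration such as volume resizing.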

Service Impact

Service downtime is an important metric often considered for migration cutover. The cutover operation is typically comprised of a final incremental data replication, followed by (re)configuration of the target VM, and boot up of the target VM. Migrations performed with PlateSpin Migrate have zero service downtime during replication phases, and near-zero service downtime during final cutover.

Migration Testing

Prior to migration cutover, customers must validate the applications running on the migrated server instances or staging servers with migrated server content. The cutover takes place only after the validation test passes. It's essential that mobility tools support migration validation testing and are able to roll back an attempted migration if the validations are unsuccessful.

PlateSpin Migrate provides flexible and sandboxed testing capabilities. When using PlateSpin Migrate, there is no limit on the amount of testing or on how long a test can run, prior to cutover.

Cost Optimization

Migration tools should be cost-conscious with respect to their resource consumption. PlateSpin Migrate only boots the target workload in the cloud during replication and test cutover phases. This provides cost optimization with respect to resource consumption, compared to other solutions that require the target workload to be running during the whole migration process.

PlateSpin Migrate Operational Architecture

PlateSpin Migrate uses the following components to instrument a server workload migration:

  • PlateSpin Migrate Server: The server component works as an orchestrator telling the other components what to do. The replication traffic does not flow over the PlateSpin Migrate server, but is sent directly from the source workload to the target workload.
  • PlateSpin Migrate Agent: The agent is installed on the source workload and is responsible for registering that source workload with the PlateSpin Migrate server, and later for sending the replication traffic from the source workload to the target workload.
  • PlateSpin Replication Environment (PRE): This is a staging VM that is launched during the replication phases that has one or more Amazon EBS Volumes attached corresponding to each volume in the target workload. The PRE VM receives the replication traffic directly from the Migrate agent on the source workload and places the received blocks or files on the target disks that are attached to it.
    The PRE does not require any manual management by the end user to make it available in AWS.
  • Target workload: This is created automatically by PlateSpin Migrate, and is automatically booted from the PRE at every replication phase. During test cutover or final cutover, the PRE is automatically removed, so that the target workload runs from its own disks, rather than being booted from the PRE.

Available as a separate product, PlateSpin Transformation Manager can be used in tandem with PlateSpin Migrate to streamline the execution of very large migration projects.


Figure 1 – PlateSpin Migrate operational architecture.

PlateSpin Migrate Communication Port Requirements

Migration of workloads using PlateSpin Migrate involves two distinct phases:

  • Discovery phase where PlateSpin Migrate discovers and inventories supported source workloads.
  • Replication phase where you execute and monitor the migration of a discovered source workload by performing a series of replications.

During the discovery phase, PlateSpin Migrate has the following port requirements:

  • Without installation of the PlateSpin Migrate agent, the PlateSpin Migrate server needs to be able to connect to the source workload on ports 137, 139 and 445 (TCP).
  • When these ports are not accessible, the PlateSpin Migrate agent must be manually installed on the source workload and connect to the PlateSpin Migrate server on port HTTPS (TCP/443) to register the workload. For cloud migrations without a VPN, source workload registration using the PlateSpin Migrate agent is the only viable option.

For the replication phase, the following port requirements must be fulfilled:

  • PlateSpin Migrate server must be able to communicate with the AWS API endpoint on port HTTPS (TCP/443).
  • During replications:
    • Both the source workload and target workload must be able to connect to the PlateSpin Migrate server on port HTTPS (TCP/443).
    • PlateSpin Migrate server must be able to connect to the target workload on port 22 (TCP), while this workload is booted from the PRE helper VM.
    • The source workload must be able to connect to the target workload on port 3725 (TCP), while this target workload is booted from the PRE helper VM.
  • During testing cycles:
    • The target workload must be able to connect to the PlateSpin Migrate server on port HTTPS (TCP/443).
  • At cutover:
    • Both the source workload and target workload must be able to connect to the PlateSpin Migrate server on port HTTPS (TCP/443).
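Because a blocked port typically surfaces only once a replication stalls, it can be worth verifying reachability up front. The following is a hypothetical pre-flight helper (host names are placeholders you supply; port numbers are taken from the requirements above):

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def preflight(hosts):
    """hosts: mapping of host -> list of required TCP ports to verify."""
    return {(h, p): port_open(h, p) for h, ports in hosts.items() for p in ports}

# Example with placeholder addresses: Migrate server on 443,
# target workload (booted from the PRE) on 22 and 3725.
# print(preflight({"migrate.example.com": [443],
#                  "target.example.com": [22, 3725]}))
```

Any (host, port) pair that comes back False points at a security group or firewall rule to fix before starting the replication phase.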


Figure 2 – PlateSpin Migrate port requirements during replication.

PlateSpin Migrate on AWS Quick Start

To enable users to quickly and easily get started with migrating workloads to the AWS Cloud, Micro Focus together with AWS created the PlateSpin Migrate Quick Start.

AWS Quick Starts are built by AWS solutions architects and APN Partners to help customers deploy popular technologies on AWS, based on AWS best practices for security and high availability.

These accelerators reduce hundreds of manual procedures into just a few steps, so you can build your production environment quickly and start using it immediately.

Each Quick Start includes AWS CloudFormation templates that automate the deployment and a guide that discusses the architecture and provides step-by-step deployment instructions.

You can find the PlateSpin Migrate Quick Start under the Migration use case on the AWS Quick Start home page. You can deploy it into a new VPC or existing VPC, and the latter option assumes you have an existing (pre-created) VPC with existing subnets. We’ll use this option to illustrate this post.

Figure 3 below depicts the infrastructure deployed once the Quick Start has completed successfully. The example shown is for the use case where there is no site-to-site VPN connection, with the PlateSpin Migrate server deployed into a public-facing subnet and assigned an AWS Elastic IP address.

Initially, only the PlateSpin Migrate server is deployed as it is responsible for managing the target workload instances.


Figure 3 – Micro Focus PlateSpin Migrate on AWS Quick Start.

The PlateSpin Migrate Quick Start includes a comprehensive reference guide which covers the deployed architecture as well as step-by-step deployment instructions and best practices for using PlateSpin Migrate on AWS.

When configuring the Quick Start, you’ll have to answer a couple of easy questions: What VPC do you want to deploy the server into? What subnet? What key pair do you want to link to your server? What user name and password do you want to use for the PlateSpin Migrate server admin account?

While we’ve provided sensible default values where appropriate, some of the other configuration options are more advanced and deserve some more explanation:

  • Replication access CIDR: The IP address range defined by this CIDR determines which source workloads are allowed to be replicated by this PlateSpin Migrate server. Set this to 0.0.0.0/0 to allow the server to migrate any source workload.
  • Management access CIDR: The IP address range defined by this CIDR determines which systems are allowed to administer the PlateSpin Migrate server and migrated workloads in production and test. Set this to 0.0.0.0/0 to allow administration from any system.
  • Target workloads interconnect CIDR: Sometimes migrated workloads need to interact with other servers so the application they host will run properly. The IP address range defined by this CIDR determines which servers are allowed to communicate with migrated workloads in test and production, on any port. Set this to 0.0.0.0/0 to allow communication with any server.
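Before launching the stack, you can sanity-check the CIDR values you plan to supply. This is a small illustrative helper (the parameter names shown are our own shorthand, not the Quick Start's exact parameter keys):

```python
import ipaddress

def validate_cidrs(params):
    """Return the subset of params whose values are not valid IPv4 CIDRs."""
    bad = {}
    for name, value in params.items():
        try:
            ipaddress.IPv4Network(value)  # raises ValueError on malformed CIDRs
        except ValueError:
            bad[name] = value
    return bad

planned = {
    "ReplicationAccessCIDR": "0.0.0.0/0",
    "ManagementAccessCIDR": "203.0.113.0/24",
    "TargetInterconnectCIDR": "10.0.0/16",  # malformed on purpose
}
print(validate_cidrs(planned))  # -> {'TargetInterconnectCIDR': '10.0.0/16'}
```

Catching a malformed CIDR here is cheaper than waiting for a CloudFormation rollback.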


Figure 4 – After about 15 minutes you have a fully provisioned PlateSpin Migrate server.

With just a couple more clicks you can kick off the Quick Start, and after about 15 minutes you’ll have a fully provisioned PlateSpin Migrate server that is ready for use.

You can retrieve its public IP address via the instances overview of the region where it’s deployed. Once you know the public IP address, simply open a browser connection to your new server like this:


Note that while the port requirements are the same, PlateSpin supports migrations with and without a VPN connection, and that provisioning the PlateSpin Migrate server with the Quick Start automatically configures Security Groups with the correct ports opened in both scenarios.

When you plan to have a site-to-site VPN connection, as depicted in Figure 5, either over the internet or with AWS Direct Connect, setting the Quick Start parameter "Assign persistent public IP address" to false will result in the PlateSpin Migrate server being deployed in a private subnet with no public-facing Elastic IP address assigned.

You would use this same configuration without a VPN if you were migrating workloads from one Amazon VPC to another.


Figure 5 – Automated migration to AWS over a VPN.

When you don't have a site-to-site VPN connection, PlateSpin supports connectivity directly over the internet, as depicted in Figure 6.

When you set the Quick Start parameter “Assign persistent public IP address” to true, the PlateSpin Migrate server will be deployed into a public subnet and individual Elastic IPs will be assigned to the PlateSpin Migrate server and the target VMs it creates when executing migration jobs.


Figure 6 – Automated migration to AWS over the internet.

PlateSpin Migrate Operations

After provisioning your PlateSpin Migrate server with the AWS Quick Start, the next step is to license your server with an activation code. You can generate a trial activation code using the Free Trial button on the PlateSpin Migrate product website, which you then apply to your server in combination with the email address of your MicroFocus.com account.

A trial activation code will allow you to run five replications, which you can use to migrate as many as three workloads over 30 days.


Figure 7 – License your PlateSpin Migrate server.

Once your server is licensed, running a migration is easy. Figure 8 depicts the overall migration workflow.


Figure 8 – PlateSpin Migrate migration operation workflow.

After adding one or more migration targets, multiple source workloads can be added (discovered) and then configured for migration. Once a source workload migration is fully configured, the migration needs to be prepared. Preparation makes sure all components are correctly installed and functional. Once prepared, the migration process can be started at any point in time.

The first replication will be a full replication, copying all blocks or files from the source workload to the target workload. After the first full replication is done, the workload will be in a “Replicated” state.

At this point, if automated incremental replications were enabled during the migration configuration (recommended), PlateSpin will automatically run incremental replications to keep the target workload in sync with the source workload. By default, these run nightly.

During the working day, a test cutover can be started at any point in time. With PlateSpin Migrate, there is no limit to how often or how long you can test. When the test is ended, PlateSpin automatically cleans up any testing resources that were provisioned, and will resume the nightly incremental replications until the day when the final cutover is initiated by the user.
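The workflow above can be summarized as a simple state machine. The state and action names below are our own labels for illustration, not PlateSpin's internal terminology:

```python
# Hypothetical state machine summarizing the migration workflow:
# prepare -> full replication -> nightly syncs and optional tests -> cutover.
TRANSITIONS = {
    ("configured", "prepare"): "prepared",
    ("prepared", "run_migration"): "replicated",   # first full replication
    ("replicated", "incremental"): "replicated",   # nightly delta sync
    ("replicated", "test_cutover"): "testing",
    ("testing", "end_test"): "replicated",         # test resources cleaned up
    ("replicated", "run_cutover"): "cut_over",
}

def step(state, action):
    """Advance the workflow, rejecting actions invalid in the current state."""
    try:
        return TRANSITIONS[(state, action)]
    except KeyError:
        raise ValueError(f"{action!r} not allowed in state {state!r}") from None
```

Note that testing always returns the workload to the replicated state, which is why there is no limit on how often you can test before the final cutover.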

Let's take a look at what this process looks like in practice. As mentioned, after adding one or more target environments, the first step is to register the source workloads to be migrated. For migrations to the AWS Cloud, this is done by downloading the PlateSpin Migrate agent from the PlateSpin Migrate server web interface and installing it on each source workload.

Once source workloads are registered and at least one target AWS Region is added, you can start your migration configurations. Select a first source workload to be configured, and click Configure Migration.


Figure 9 – Select a source workload and click Configure Migration.

Configure your migration as desired using the various configuration options that PlateSpin Migrate offers (enabling automated nightly incremental replications is recommended), and then click Save & Prepare at the bottom of the migration configuration screen.

On the next screen, click Execute to kick off the migration preparation process. When the preparation is done, click Run Migration and then Execute to start the first full replication.


Figure 10 – When you have configured your migration, click Save & Prepare.

When the first full replication is completed, the workload will be in the “Replicated” state, and the buttons at the bottom of the screen will indicate the rest of the workflow.

Note that when automated nightly incremental replications are enabled, PlateSpin Migrate will keep the source workload and the target workload in sync every night by replicating the delta changes that happened since the last replication. Alternatively, an incremental replication can be launched manually.

By clicking the button Test Cutover, a test pass can be performed. PlateSpin Migrate will spin up the target workload in the cloud so that business and application teams can verify the application. When all testing is satisfactory, the workload can be cut over in production via the button Run Cutover.


Figure 11 – The buttons at the bottom of the screen indicate the workflow.

Customer Case Study

Atos is an APN Advanced Consulting Partner focused on digital transformation. Leveraging PlateSpin, Atos seamlessly migrated a multi-national electronics manufacturer to AWS.

Like many enterprises, their client was interested in a cloud environment to reduce operational costs, create a higher availability platform, improve security, and centralize IT management.

After a thorough analysis of the existing environment and the available options, Atos recommended an AWS cloud approach.

To migrate the data center servers to AWS, Atos chose PlateSpin from Micro Focus. Key results included:

  • Reduction of operational costs, in-line with client expectations.
  • Improved key performance indicators (KPIs) related to uptime, ease of maintenance, and system administration simplification.
  • Migrations executed 50 percent faster than when using alternative migration solutions.
  • Total elimination of manual effort while executing the individual server migrations.
  • Downtime consistently kept under one hour per application for 95 percent of the migrations.


Built by Micro Focus, PlateSpin Migrate is a proven and powerful server portability solution that automates the process of migrating physical server machines or virtual host servers over the network to enterprise cloud platforms like AWS.

The new PlateSpin Migrate on AWS Quick Start allows you to provision a cloud-based PlateSpin Migrate server with just a couple of clicks and automatically deploy all the resources that are required for your AWS migrations to run successfully.

The PlateSpin “lift-and-shift” approach migrates applications by migrating the servers that applications run on. When using the AWS Quick Start, you can start migrating servers to AWS within minutes.



Micro Focus – APN Partner Spotlight

Micro Focus is an APN Advanced Technology Partner. They enable customers to utilize new technology solutions while maximizing the value of their investments in critical IT infrastructure and business applications.

Contact Micro Focus | Solution Overview

*Already worked with Micro Focus? Rate this Partner

*To review an APN Partner, you must be an AWS customer that has worked with them directly on a project.


Enhancing AWS Marketplace Reporting with Tackle.io and Commerce Analytics Service


By Dillon Woods, Founder & CTO at Tackle.io
By David Cowden, Partner Solutions Architect at AWS

Many independent software vendors (ISVs) reach out to Tackle.io, an AWS Partner Network (APN) Advanced Technology Partner, when they need help getting solutions listed quickly on AWS Marketplace.

Kirk Punches, Sr. Director of Strategic Cloud Alliances at PagerDuty, has first-hand experience partnering with Tackle to implement a holistic marketplace strategy. “AWS Marketplace has simplified how ISVs can generate visibility and help transact,” he says. “However, there is still a lot for ISVs to contemplate around technical integration, internal systems integration, go-to-market, and how to do business efficiently at scale.”

Reducing overall time-to-Marketplace is certainly one of Tackle’s key capabilities, but ISVs are often surprised when they hear that getting listed is just the first step to being successful.

Amazon Web Services (AWS) provides AWS Marketplace Commerce Analytics Service, which allows you to access your sales data from AWS Marketplace. Getting started with Commerce Analytics Service can be technically difficult for some users, however.

This post describes how Tackle developed an innovative solution using the AWS Fargate compute engine to make Commerce Analytics Service data more accessible for business owners. The solution asynchronously pulls sales data from Commerce Analytics Service and then makes that data available to you.


Your sales operations, finance, and even marketing teams all need to carefully consider how sales originating from AWS Marketplace might affect existing processes. Issues become more apparent after a product listing is complete and sales start coming in.

Questions from your teams may include:

  • How do I know when a buyer accepts a specific private offer?
  • I saw a deposit from AWS in our bank account. How do I know which customer payments are included in that disbursement?
  • We are enrolled in the AWS Marketplace Enhanced Data Sharing (EDS) Program. How do I take advantage of the enhanced buyer information?

Commerce Analytics Service allows you to answer these and many other questions by programmatically accessing sales data from AWS Marketplace.


Figure 1 – The AWS Marketplace Commerce Analytics Service process.

Building Upon AWS Fargate

Tackle built their process on AWS Fargate because of some key features:

  • Container-focus. Tackle used their existing development expertise to create a serverless Commerce Analytics Service microservice by deploying container tasks on Fargate.
  • Operational simplicity. Fargate automatically manages all underlying infrastructure, including management of the Amazon Elastic Compute Cloud (Amazon EC2) instances, so Tackle didn’t need to add operational overhead.
  • Automatic scaling. Fargate automatically scales the number of containers in or out depending on customer demand for Commerce Analytics Service.

To implement their service, Tackle wrote a simple Python script that uses the AWS SDK for Python to call the MarketplaceCommerceAnalytics.Client.generate_data_set function.

Commerce Analytics Service provides about 21 different datasets, each published on its own schedule. For example, the daily_business_fees dataset is available by 17:00 Pacific Standard Time each day, while the disbursed_amount_by_product report is available every 30 days. Tackle's service must call generate_data_set each day for the reports available for each ISV they serve.
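A scheduler for this can be sketched as follows. This is an illustrative toy (the cadences mirror the two examples above; it is not Tackle's actual code):

```python
import datetime

# Publication cadence in days for each data set (illustrative subset).
CADENCE_DAYS = {
    "daily_business_fees": 1,           # published daily
    "disbursed_amount_by_product": 30,  # published every 30 days
}

def due_today(last_run, today):
    """Return the data sets whose cadence has elapsed since their last run.
    last_run: mapping of data set name -> date of the last successful request."""
    return [name for name, days in CADENCE_DAYS.items()
            if (today - last_run.get(name, datetime.date.min)).days >= days]

today = datetime.date(2019, 7, 31)
last = {"daily_business_fees": datetime.date(2019, 7, 30),
        "disbursed_amount_by_product": datetime.date(2019, 7, 15)}
print(due_today(last, today))  # -> ['daily_business_fees']
```

Each name returned would then be passed as the dataSetType of a generate_data_set call.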

A simple call to the generate_data_set function looks like the following code:

import datetime
import boto3
import logging

marketplace_client = boto3.client('marketplacecommerceanalytics')

# The role ARN, bucket name, and SNS topic ARN below are placeholders.
try:
    resp = marketplace_client.generate_data_set(
        dataSetType='daily_business_fees',
        dataSetPublicationDate=datetime.datetime(2019, 7, 1),
        roleNameArn='arn:aws:iam::000000000000:role/MyCASRole',
        destinationS3BucketName='my-cas-bucket',
        snsTopicArn='arn:aws:sns:us-east-1:000000000000:cas-topic')
except Exception as e:
    logging.error("Error from generate_data_set: %s", str(e))

The generate_data_set function creates the desired dataset, copies it into a designated Amazon Simple Storage Service (Amazon S3) bucket, and then sends an Amazon Simple Notification Service (SNS) notification to the user when the data is ready.

Consequently, Tackle has a second Python service that uses the Amazon SNS SDK to subscribe to those notifications and ingest the data after it’s ready. The data ingestion involves a fairly complex extract, transform, and load (ETL) process to join the data from multiple reports together and transform it into the expected final form.
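A minimal sketch of that notification handling might look like the following. The outer Message field is standard SNS; the inner field names (dataSetS3Location, bucketName, key) are assumptions for illustration, so check them against the notifications your topic actually receives.

```python
import json

def s3_location_from_sns(envelope: str):
    """Extract the (bucket, key) of a finished dataset from an SNS envelope.

    The outer 'Message' field is standard SNS JSON; the inner field
    names ('dataSetS3Location', 'bucketName', 'key') are assumed here
    for illustration.
    """
    message = json.loads(json.loads(envelope)["Message"])
    location = message["dataSetS3Location"]
    return location["bucketName"], location["key"]

# From here, a worker could fetch the object with boto3, e.g.
#   boto3.client("s3").download_file(bucket, key, local_path)
# before running the ETL steps described above.
```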

Fargate makes deploying and managing these services simple. Whenever code updates to the service are ready, Tackle’s CI/CD pipeline automatically pushes the new container image to Amazon Elastic Container Registry (Amazon ECR) and forces a new deployment of the service.

With the new revision of the task definition, Fargate takes care of all the work needed to perform a rolling service update and push the new code into production:

$ docker push 000000000000.dkr.ecr.us-east-1.amazonaws.com/cas-message-handler:latest

$ aws --region=us-east-1 ecs update-service --cluster cas-message-handler-cluster --service cas-message-handler-service --force-new-deployment
{
    "service": {
        "serviceArn": "arn:aws:ecs:us-east-1:000000000000:service/cas-message-handler-service",
        "serviceName": "cas-message-handler-service",
        "clusterArn": "arn:aws:ecs:us-east-1:000000000000:cluster/cas-message-handler-cluster",
        "loadBalancers": [],
        "serviceRegistries": [],
        "status": "ACTIVE",
        "desiredCount": 2,
        "runningCount": 2,
        "pendingCount": 0,
        "launchType": "FARGATE",
        "platformVersion": "LATEST",
        "taskDefinition": "arn:aws:ecs:us-east-1:000000000000:task-definition/cas-message-handler:2",
        "deploymentConfiguration": {
            "maximumPercent": 200,
            "minimumHealthyPercent": 50
        },
        ...
    }
}

This service ensures you have your latest AWS sales data for analysis as soon as it’s available. You can consume the sales data in a variety of ways:

  • Load the raw data into Microsoft Excel.
  • Use Amazon QuickSight, a business intelligence service with pay-per-session pricing and machine learning insights.
  • Build more complicated connectors that bring the data directly to Salesforce.
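Even before loading a report into Excel or QuickSight, a quick aggregation is easy to script. This sketch sums a disbursement CSV by product; the column names are placeholders for illustration, not the report’s actual schema.

```python
import csv
import io
from collections import defaultdict

def disbursed_by_product(report_csv: str) -> dict:
    """Sum disbursed amounts per product from a CSV report.

    The 'Product Title' / 'Disbursed Amount' column names are
    placeholders; map them to the actual headers of the report you pull.
    """
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(report_csv)):
        totals[row["Product Title"]] += float(row["Disbursed Amount"])
    return dict(totals)
```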

Compiling these reports can be time-consuming, however, and may distract your product teams from working on your core product.

“We are in pretty rapid headcount growth, so to distract people away from what they were hired for would not have been best for us,” says Kevin Mellor, VP of Finance for A Cloud Guru. “The ongoing maintenance was not going to work. Having [Tackle] do that was pretty valuable for us.”

Tackle offers a custom reporting platform called Tackle Downstream, which you can see in Figure 2. It provides a persona-specific dashboard for consuming AWS Marketplace sales data as efficiently as possible.


Figure 2 – Tackle Downstream allows you to consume AWS Marketplace data.


AWS Marketplace offers a place where you can generate visibility for your products and sell them. The AWS Marketplace Commerce Analytics Service allows you to access your sales data and improve business practices.

Tackle.io developed a solution leveraging the AWS Fargate compute engine to help make Commerce Analytics Service data more accessible for business owners. This has helped software vendors like New Relic, Druva, and Auth0 quickly list and start selling on AWS Marketplace.

Dave McCann, vice president of AWS Marketplace, estimates there are more than 200,000 buyers using AWS Marketplace to purchase software. With momentum like that, it’s no wonder that ISVs are in a hurry to complete AWS Marketplace listings for their products.

from AWS Partner Network (APN) Blog

Industry’s First Alexa Skill Builder Certification Helps Give Partners a Voice Advantage

By Jennifer Davis, Product Marketing at AWS Training and Certification


You can now register for the new AWS Certified Alexa Skill Builder – Specialty certification, the industry’s first and only certification that validates your ability to build, test, and publish Alexa Skills.

This offering enables AWS Partner Network (APN) Partners to validate their Alexa and cloud expertise with an industry-recognized credential, building credibility with clients and prospects.

With Alexa, you can reach customers through more than 100 million Alexa-enabled devices.

“Our clients are looking for more innovative ways to drive engagement with their customers or solving real-world problems using voice and natural language processing,” says Rebecca Gentile, Global Alliance Enablement Director at Slalom, an APN Premier Consulting Partner. “This new certification will enable us to identify new talent, develop our teams, and prepare our clients for the transformative power of Alexa for their businesses.”

The AWS Certified Alexa Skill Builder – Specialty certification is recommended for individuals who have six months of experience (or equivalent) developing Alexa Skills, proficiency in at least one programming language, and at least one published Alexa Skill.

The exam is available in English at testing centers worldwide for 300 USD.

Learn more about the AWS Certified Alexa Skill Builder – Specialty exam >>

About AWS Certification

AWS Certification helps candidates build credibility and confidence with the Amazon Web Services (AWS) Cloud by validating their expertise with an industry-recognized certification.

Our new Alexa Skill Builder – Specialty certification joins a portfolio that covers critical roles supporting customer success on the AWS Cloud. With 11 certifications to choose from, you can pursue specialty certifications that evaluate technical expertise in areas such as machine learning and security, or role-based certifications for cloud practitioner, solutions architect, developer, and operations.


AWS recently tripled the number of testing centers worldwide, so you have even more choices and flexibility when deciding when and where to get AWS Certified.

Learn more at aws.amazon.com/certification >>

from AWS Partner Network (APN) Blog