Tag: Life Science

Industry’s First Alexa Skill Builder Certification Helps Give Partners a Voice Advantage

By Jennifer Davis, Product Marketing at AWS Training and Certification

You can now register for the new AWS Certified Alexa Skill Builder – Specialty certification, the industry’s first and only certification that validates your ability to build, test, and publish Alexa Skills.

This offering enables AWS Partner Network (APN) Partners to validate their Alexa and cloud expertise with an industry-recognized credential, building credibility with their clients and prospects.

With Alexa, you can reach customers through more than 100 million Alexa-enabled devices.

“Our clients are looking for more innovative ways to drive engagement with their customers or to solve real-world problems using voice and natural language processing,” says Rebecca Gentile, Global Alliance Enablement Director at Slalom, an APN Premier Consulting Partner. “This new certification will enable us to identify new talent, develop our teams, and prepare our clients for the transformative power of Alexa for their businesses.”

The AWS Certified Alexa Skill Builder – Specialty certification is recommended for individuals who have six (6) months or equivalent experience developing Alexa Skills, have proficiency in at least one (1) programming language, and have published at least one (1) Alexa Skill.

The exam is available in English at testing centers worldwide for 300 USD.

Learn more about the AWS Certified Alexa Skill Builder – Specialty exam >>

About AWS Certification

AWS Certification helps candidates build credibility and confidence with the Amazon Web Services (AWS) Cloud by validating their expertise with an industry-recognized certification.

Our new Alexa Skill Builder – Specialty certification joins the portfolio for critical roles supporting customer success on the AWS Cloud. With 11 certifications, the choice is yours: pursue specialty certifications that evaluate technical expertise in areas such as machine learning and security, or role-based certifications for cloud practitioner, solutions architect, developer, and operations.

AWS recently tripled the number of testing centers worldwide, so you have even more choices and flexibility when deciding when and where to get AWS Certified.

Learn more at aws.amazon.com/certification >>

from AWS Partner Network (APN) Blog

Scheduling on the AWS Cloud with IBM Spectrum LSF and IBM Spectrum Symphony

By Geert Wenes, Partner Solutions Architect at AWS

Many high performance computing (HPC) and grid customers with large technical and on-premises computing systems select IBM Spectrum LSF and IBM Spectrum Symphony for policy-driven control and scheduling.

Spectrum Symphony schedules tasks very fast: in milliseconds, rather than the seconds typical of conventional schedulers. It also supports tens of thousands of compute nodes and hundreds of applications on a scalable, shared, and heterogeneous multi-site grid.

For you, this translates into better application performance, better throughput, better utilization, and the ability to respond quickly to business demands.

Many on-premises infrastructures are aging, however, and strained by expanding requirements.

In the financial services industry (FSI), these requirements are increasingly regulatory in nature. For example, satisfying the Fundamental Review of the Trading Book (FRTB) requirements will lead to more frequent risk calculations on a wider range of data.

In HPC segments such as manufacturing or electronic design automation (EDA), these requirements are shorter time-to-solution and higher-fidelity results, with larger and multi-scale models and more complex multi-physics.

To cope with increasing requirements without slowing down the pace of innovation, customers are testing and validating different cost-effective deployment models of Spectrum LSF and Spectrum Symphony.

Some start with hybrid deployments, but complexities associated with security, data movement, and predictable costs have driven many customers to simplify their deployments on Amazon Web Services (AWS).

Even so, deployments on the AWS Cloud may benefit from Spectrum LSF’s and Spectrum Symphony’s policy-driven control and scheduling capabilities. Both solutions offer a plugin-based technology (the resource connector and the host factory, respectively) that allows you to define policies to make these environments automatically elastic based on workload demand.

In particular, both hybrid and cloud-native deployments can take advantage of Amazon EC2 On-Demand Instances, as well as Spot Instances: unused Amazon EC2 capacity that is available at up to a 90 percent discount compared to On-Demand Instance prices.
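
As a rough illustration of why that discount matters for grid economics, consider the blended hourly cost of a burst that mixes On-Demand and Spot capacity. The host counts and the $1.00/hour price below are invented placeholders, not figures from any actual deployment:

```python
# Illustrative sketch only: blended hourly cost of a grid burst that mixes
# On-Demand and Spot capacity. The hourly price and the 90 percent discount
# are placeholder assumptions, not real quotes.

def blended_hourly_cost(on_demand_hosts, spot_hosts, od_price, spot_discount=0.90):
    """Hourly cost of a mixed fleet, with Spot priced at a discount."""
    spot_price = od_price * (1 - spot_discount)
    return on_demand_hosts * od_price + spot_hosts * spot_price

# 10 On-Demand hosts plus 90 Spot hosts at a $1.00/hour On-Demand price:
print(round(blended_hourly_cost(10, 90, od_price=1.00), 2))  # 19.0
```

At the maximum discount, the 100-host fleet costs little more than the 10 On-Demand hosts alone, which is what makes Spot attractive for elastic grid capacity.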

In response to customer requests, IBM and AWS are on an ongoing journey to enable the IBM Spectrum Computing family of products on AWS.

AWS and IBM, an AWS Partner Network (APN) Select Technology Partner, have completed testing and validation for hybrid deployments of Spectrum LSF and Spectrum Symphony. Both provide enterprise workload management for distributed high performance computing and analytics, and have an established brand within their respective markets.

Spectrum LSF

Today, you can reliably and cost-effectively deploy Spectrum LSF on AWS. IBM offers a deployment guide (including deployment options and steps) and best practices in Ansible Playbooks. Spectrum LSF conforms to best practices with respect to operations, security, cost-effectiveness, and backup and recovery.

Spectrum LSF can be deployed in two modes:

  • Stretch cluster
  • Multi-cluster

Stretch cluster mode assumes you have a cluster in another location—either on-premises or running on another cloud or cloud location. It’s defined as a single cluster stretched over a wide area network (WAN) so that compute nodes in the cloud communicate with a master scheduling host at the originating location.

In the LSF Stretch Cluster architecture, the on-premises cluster resources can be dynamically “stretched” over a WAN to include cloud resources to accommodate spikes in demand.

Though simpler in concept than the multi-cluster mode, this generally means all LSF daemon communication with the master scheduler happens over the WAN, which can be a source of extra cost or lowered reliability. The following diagram shows Spectrum LSF deployed with the stretch cluster configuration:

Figure 1 – IBM Spectrum LSF stretch cluster configuration.

Multi-cluster mode architecture adds a master scheduler running on AWS. This architecture simplifies communication and coordination between the on-premises and cloud-based clusters by reducing it to task meta-data exchanges between master schedulers in a “job forwarding” model.

Hence, it eliminates all communication from the cloud compute instances to the on-premises master. In fact, in multi-cluster mode, all compute capacity can reside on AWS and none needs to reside on-premises.

The following diagram shows Spectrum LSF deployed with the multi-cluster configuration.

Figure 2 – IBM Spectrum LSF multi-cluster configuration.

Both configurations offer certain advantages and trade-offs. Both are covered in detail in the deployment guide, which can be downloaded from GitHub.

Spectrum LSF includes an additional capability, the LSF resource connector, which enables policy-driven cloud bursting to AWS. In particular, this enables you to use either On-Demand or Spot Instances to request computing capacity. While your request for a Spot Instance will be fulfilled as long as capacity is available, you also have the option to hibernate, stop, or terminate your Spot Instances when Amazon EC2 reclaims the capacity with two minutes of notice.

Spot Instances are a cost-effective choice if you can be flexible about when your applications run and if your applications can be interrupted. If you use Spot Instances and your application does get interrupted, Spectrum LSF may be able to requeue your job within the two-minute termination notice window, as it periodically checks for Spot Instances that are scheduled to be reclaimed and requeues their jobs.

However, if you have specified hibernation as the interruption behavior, you do not receive the two-minute warning because the hibernation process begins immediately, and Spectrum LSF may not be able to requeue the job.
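
The distinction above can be sketched in a few lines. This is not Spectrum LSF’s actual implementation, only an illustration of the decision: for the stop and terminate behaviors, EC2 publishes the two-minute notice as a JSON instance-action document, while hibernation begins immediately:

```python
import json

# Hedged sketch, not IBM's code: decide whether a scheduler still has the
# two-minute window in which to requeue a job, based on the Spot
# interruption notice (a JSON document whose "action" field is one of
# stop, terminate, or hibernate).

def can_requeue(instance_action_json):
    """True if the interruption behavior leaves the two-minute warning window."""
    notice = json.loads(instance_action_json)
    # stop and terminate arrive with two minutes of notice; hibernation
    # starts immediately, so there may be no time to requeue.
    return notice.get("action") in ("stop", "terminate")

print(can_requeue('{"action": "terminate", "time": "2019-05-01T12:00:00Z"}'))  # True
print(can_requeue('{"action": "hibernate"}'))  # False
```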

Using Spot Instances with Spectrum LSF, you can significantly reduce the cost of running your applications while maintaining capacity even for hyper-scale workloads.

Spectrum Symphony

Spectrum Symphony contains a similar framework to the LSF resource connector, called host factory. This enables your on-premises clusters to dynamically include compute hosts from AWS based on the resource demands of applications in your cluster and on AWS. You can control bursting for your cluster through policy configurations, which define when and how resource scale-out and scale-in requests are triggered.

With host factory, you can leverage the on-demand capabilities of the AWS infrastructure to provision as many resources as you need and pay only for what you use.
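
Host factory’s real policies live in IBM’s own configuration files. Purely as an illustration of the kind of scale-out arithmetic such a demand-based policy encodes (all names and thresholds below are invented):

```python
import math

# Invented illustration of a demand-based scale-out rule: request enough
# additional cloud hosts to cover the pending task backlog, without
# exceeding a configured ceiling.

def hosts_to_request(pending_tasks, slots_per_host, running_hosts, max_hosts):
    """Additional hosts a scale-out policy would ask AWS for."""
    needed = math.ceil(pending_tasks / slots_per_host)
    return max(0, min(needed, max_hosts) - running_hosts)

# 100 pending tasks, 8 slots per host, 5 hosts already running, cap of 50:
print(hosts_to_request(100, 8, 5, 50))  # 8
```

A matching scale-in rule would run the same arithmetic in reverse, releasing hosts as the backlog drains so you pay only for what you use.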

Today, Spot enablement for host factory is released in limited availability mode. To enable Spot in Spectrum Symphony v7.1 and v7.2, engage with your IBM sales team for Engineering Feature Requests (EFR) or download IBM-supported patches. You may also engage with the AWS sales team and specialist solutions architects for custom enablement.

Spot enablement for host factory will be made generally available (GA) with the next revision release of Spectrum Symphony v7.3.

Conclusion

Using IBM Spectrum LSF and IBM Spectrum Symphony on the AWS Cloud offers the following key outcomes:

  • Easier migration path.
  • Flexible deployment options that include native AWS or hybrid cloud mode.
  • Policy-driven scheduling capability for maximization of compute resources and optimal application performance.
  • Cost-effectiveness.

If you’re a Spectrum LSF and Spectrum Symphony customer, you now have flexible options for running on AWS. The IBM cloud-friendly licensing model (PAYG), along with elastic scaling capabilities, makes AWS the ideal target for bursting workloads. You also have the option to bring your own licenses (BYOL) to AWS. IBM continues to deliver support directly, just as it does when those licenses are deployed on IBM customer premises.

Spot Instances are currently enabled in Spectrum LSF, can be enabled in Spectrum Symphony upon request, and will soon be GA for Spectrum Symphony. As a result, you can significantly reduce the cost of running applications, grow your application compute capacity and throughput for the same budget, and enable new types of cloud computing applications.

To get started, please visit GitHub, which walks you through the two deployment options for Spectrum LSF. You can watch the videos and find the Ansible Playbooks used in them. They are public and freely available for you to take and customize.


IBM – APN Partner Spotlight

IBM is an APN Select Technology Partner. Customers around the world rely on IBM’s advanced cloud technologies and on the deep industry and technology expertise of IBM services and solutions professionals and consultants.

Contact IBM | Solution Overview | AWS Marketplace

*Already worked with IBM? Rate this Partner

*To review an APN Partner, you must be an AWS customer that has worked with them directly on a project.

from AWS Partner Network (APN) Blog

Discovering and Reporting on Agile Assets with AWS Systems Manager

By Kiran Chadalavada, AWS Business Leader at Cognizant AWS Practice
By Soumya Banerjee, Lead Architect at Cognizant AWS Practice

Hundreds of large enterprises rely on Cognizant’s ability to provide customized cloud management platform capabilities with a differentiated experience.

To excel, we integrate multiple management tools into our Cloud Management Platform (CMP) to provide a unified view of customers’ Amazon Web Services (AWS) environments.

When it comes to managing workloads across hybrid and heterogeneous cloud environments, a key challenge is dealing with asset tracking. This includes software and hardware inventory.

As we provide customers with push-button provisioning capabilities, it’s even more critical to track software and hardware inventory. We track every configuration item for security, as well as usage metrics.

To address this growing need of managing and reporting asset details, Cognizant uses AWS Systems Manager, which gives you visibility and control of your infrastructure on AWS.

Overview

In this post, we will explore specific use cases for software and hardware inventory collection and tracking that we do for customers using AWS Systems Manager.

Cognizant is an AWS Partner Network (APN) Premier Consulting Partner and a member of the AWS Managed Service Provider (MSP) Partner Program.

To provide a unified view of assets for customers at the click of a button, we integrated AWS Systems Manager into CMP. Currently, we manage around 50,000 servers across various clients. With the adoption of Systems Manager, we reduced the manual effort of maintaining the inventory details by 70 percent.

Inventory Data and Reporting at Scheduled Intervals

Using the Systems Manager State Manager association with the AWS-GatherSoftwareInventory document, we collect inventory data from instances in a customer’s hybrid cloud environment spanning multiple AWS accounts and regions. This includes inventory details from instances that are hosted in our customer’s data center.

We use the Systems Manager Resource Data Sync feature to send inventory data collected from all of our managed instances to a single Amazon Simple Storage Service (Amazon S3) bucket.

We then use Amazon Athena to query and pull specific inventory details out of the bucket, and to build custom reports. For more detailed information and the steps involved, see Configuring Resource Data Sync for Inventory.
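
As a sketch of what such a query might look like once the inventory data is exposed to Athena, here is a small helper that builds the SQL string. The table and column names are assumptions for illustration, not the exact schema Resource Data Sync produces:

```python
# Hypothetical helper that builds an Athena SQL string for an inventory
# report; "aws_application" and its columns are assumed names, not the
# exact schema Resource Data Sync generates.

def application_inventory_query(table="aws_application", name_filter="%agent%"):
    """SQL to list installed applications matching a pattern, per instance."""
    return (
        "SELECT resourceid, name, version, publisher "
        f"FROM {table} "
        f"WHERE name LIKE '{name_filter}' "
        "ORDER BY resourceid"
    )

print(application_inventory_query())
```

The resulting string can then be submitted to Athena (for example, via the boto3 `start_query_execution` API) with an S3 output location, and the result set fed into the custom reports.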

Cognizant offers greater asset visibility to customers by providing the ability to build their own custom reports, with all the inventory data in a single S3 bucket.

Using AWS Lambda functions, we can execute Athena queries and compile the results into a custom inventory report. We quickly push out the custom reports to customers and individuals in our team using Amazon Simple Email Service (SES).

We also perform analytics on this inventory data using Amazon QuickSight. Centralizing inventory reporting has helped us improve operational efficiency by up to 80 percent over other methods by providing accurate insights into a customer’s hybrid environment.

Figure 1 – Inventory collection and reporting across accounts using Resource Data Sync.

Here’s the sequence of steps, shown in Figure 1, that summarizes the inventory collection and reporting process flow for multiple accounts spread across multiple AWS Regions:

  1. In Account 1, State Manager executes the AWS-GatherSoftwareInventory document on managed Amazon Elastic Compute Cloud (Amazon EC2) instances.
  2. Resource Data Sync collects the inventory for these instances and sends it to AWS Systems Manager.
  3. The inventory data from on-premises managed instances is collected as a result of State Manager executing the AWS-GatherSoftwareInventory document.
  4. Resource Data Sync sends the inventory data to an Amazon S3 bucket.
  5. Using the same steps performed earlier for the first account, Resource Data Sync collects the inventory data for the second account and sends it to the S3 bucket.
  6. Using the same steps performed earlier, Resource Data Sync collects the inventory data for the third account and sends it to S3.
  7. An Amazon CloudWatch Event invokes a Lambda function on a periodic basis.
  8. Lambda triggers the execution of Athena queries to generate datasets for ingestion by analytical tools like Amazon QuickSight and for report distribution.
  9. Athena executes the queries on S3 buckets to generate datasets containing inventory data from all accounts and stores them in another S3 bucket.
  10. The reports are distributed to end users using Amazon SES.

Automating the Tagging of Instances

From a customer’s point of view, asset tracking is key for understanding capacity consumption and optimization, as well as for managing the security and governance aspects.

Systems Manager Automation allows you to represent operational tasks and runbooks as code, in a JSON or YAML document. You can execute that code across multiple accounts in multiple regions.

Using Automation documents, you can define steps in your workflow such as seeking approval for certain actions or calling any AWS API, among other available actions or plugins.

At Cognizant, we help our cloud customers to simplify the process of asset tracking by tagging existing and new instances using custom Automation documents.

Figure 2 – Automated tagging of instances using an Automation document.

Here’s the sequence of steps that summarizes the process in Figure 2:

  1. An administrator triggers an AWS Systems Manager Automation execution from the console, specifying the document to invoke.
  2. The document encapsulates the logic to tag instances.
  3. Amazon EC2 instances are tagged with key-value pairs based on the logic encoded in the Automation document.

The following code example shows the steps in the Automation document we used to tag the Amazon EC2 instances:

mainSteps:
# Step 1: launch the instance to be tagged. The empty strings are
# placeholders to fill in before the document is executed.
- name: launchInstance
  action: aws:runInstances
  maxAttempts: 3
  timeoutSeconds: 1200
  onFailure: Abort
  inputs:
    ImageId: ""
    InstanceType: ""
    SubnetId: ""
    KeyName: ""
    SecurityGroupIds:
    - ""
# Step 2: apply the required tags to the target instances.
- name: createTags
  action: aws:createTags
  maxAttempts: 3
  onFailure: Abort
  inputs:
    ResourceType: EC2
    ResourceIds:
    - ""
    Tags:
    - Key: Tag1Key
      Value: ""
    - Key: Tag2Key
      Value: ""
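
As a companion sketch, here is the kind of compliance check that motivates automated tagging. The required keys are invented for illustration, not Cognizant’s actual policy; the tag format mirrors the key-value structure the EC2 APIs use:

```python
# Invented example: given the tags an instance already carries, report
# which required keys are missing so an Automation run can fill them in.

REQUIRED_TAGS = ("CostCenter", "Environment", "Owner")

def missing_tags(instance_tags, required=REQUIRED_TAGS):
    """Return required tag keys absent from an instance's tag list."""
    present = {tag["Key"] for tag in instance_tags}
    return [key for key in required if key not in present]

tags = [{"Key": "Environment", "Value": "prod"}]
print(missing_tags(tags))  # ['CostCenter', 'Owner']
```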

Why Systems Manager?

Our reasons at Cognizant for choosing AWS Systems Manager include the following benefits:

Support for Hybrid Cloud Environments

Many enterprise customers have infrastructure hosted in hybrid environments, both private and public clouds. Using AWS Systems Manager features, we can provide a range of cloud management capabilities for Windows Server, Amazon Linux, Amazon Linux 2, RHEL, CentOS, Ubuntu, and SUSE-based operating systems.

Operational Efficiency

We have automated inventory collection and centralized inventory data storage to a single Amazon S3 bucket using the Resource Data Sync feature.

We use Systems Manager Automation across multiple accounts and regions for the following:

  • Tagging instances.
  • Baking golden images for our AMIs.
  • Cleaning up the infrastructure by deleting unused Amazon Elastic Block Store (Amazon EBS) volume snapshots.
  • Managing database snapshots.
  • Terminating unwanted instances.

We also use Parameter Store to manage configuration data and secrets securely, separate from code. We use Patch Manager for patching instances across multiple operating systems.

Overall, AWS Systems Manager has helped us gain operational efficiencies of 400 percent and reduce customer support costs.

Improved Security Posture

Security is the bedrock of everything we do at Cognizant Cloud. Systems Manager helps us perform operations without having to open up ports in our virtual private cloud (VPC) for SSH or RDP access.

All the actions are audited in AWS CloudTrail and execution outputs can be stored in Amazon S3 or Amazon CloudWatch Logs. Access control is driven by AWS Identity and Access Management (IAM) policies and roles.

Cost-Effective Management Solution

The majority of Systems Manager features are provided at no additional cost, and the rest follow the pay-per-use model. There are no licenses to worry about or servers to manage. This lets us build cost-effective solutions and pass the savings to customers.

Summary

Cognizant—with the help of AWS Systems Manager—has automated tagging for more than 50,000 instances, increasing efficiency by 80 percent. This has helped us streamline the way resources are managed and governed.

As a native AWS tool, Systems Manager has powerful cloud management capabilities that make it a highly scalable and agile toolset. As part of the Cognizant Cloud Management Platform, Systems Manager helps us provide secure and differentiated operational support in an automated way to multiple customers.

The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.


Cognizant – APN Partner Spotlight

Cognizant is an APN Premier Consulting Partner. They transform customers’ business, operating, and technology models for the digital era by helping organizations envision, build, and run more innovative and efficient businesses.

Contact Cognizant | Practice Overview | Buy on Marketplace

*Already worked with Cognizant? Rate this Partner

*To review an APN Partner, you must be an AWS customer that has worked with them directly on a project.

from AWS Partner Network (APN) Blog

Analyzing Performance and Cost of Large-Scale Data Processing with AWS Lambda

By Chris Madden, Senior Cloud Architect at Candid Partners

There are many tools available for doing large-scale data analysis, and picking the right one for a given job is critical.

In this post, I will provide an in-depth analysis of the architecture and performance characteristics of a completely serverless data processing platform.

While the approach we demonstrate here isn’t applicable for every data analytics use case, it does have two key characteristics that make it a useful part of any IT organization’s tool belt.

First of all, this approach has a very low total cost of ownership (TCO). Unlike traditional server clusters, this serverless data processing architecture costs nothing when it isn’t being used.

For ad hoc jobs against large datasets, it can be extremely costly to maintain enough capacity to run those jobs in a timely manner. By using services like AWS Lambda, we can quickly access massive pools of compute capacity without having to pay for it when it’s sitting idle.

Second, because Lambda allows us to run arbitrary code, this approach provides the flexibility to handle non-standard data formats easily. Services like Amazon Athena are great for similar types of data processing, but these tools require your data to be stored in predefined standard formats.

At Candid Partners, an AWS Partner Network (APN) Advanced Consulting Partner, we find that many of our customers have large volumes of data stored in various formats that aren’t compatible with off-the-shelf tools.

For instance, the Web ARChive file format (WARC) used in this example isn’t supported by Amazon Athena or most other common data processing libraries, but it was easy to write a Lambda function that could handle this niche file format.

Candid Partners holds AWS Competencies in both DevOps and Migration, as well as AWS Service Delivery designations for AWS Lambda and other services. We have worked with several large enterprises to build solutions that improve agility while minimizing TCO using the AWS serverless platform.

Grepping the Web

To demonstrate our approach, we built a basic search service over the Common Crawl dataset that provides an archive of web pages on the internet. In our example, we looked for all instances of American phone numbers, but you could easily use this to do a grep-like search for any regular expression across all of the pages in the Common Crawl archive.
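
As an illustration of the matching step, the pattern below is our own simplified example of an American phone number regex, not necessarily the one used in the experiment:

```python
import re

# Simplified, illustrative pattern for North American phone numbers such as
# 555-867-5309 or (212) 555-0123; a production pattern would cover more formats.
PHONE = re.compile(r"\(?\b\d{3}\)?[-. ]\d{3}[-.]\d{4}\b")

page = "Call 555-867-5309 or (212) 555-0123 for details; build 20190401 is out."
print(PHONE.findall(page))  # ['555-867-5309', '(212) 555-0123']
```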

The Common Crawl data are organized into approximately 64,000 large objects using the WARC format. A WARC file is a concatenation of individually gzip-compressed HTTP responses. To process one of these files, you need to first split it into individual records and then decompress each of the records in order to access the raw, uncompressed data.
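
The splitting step can be sketched with standard-library Python. This is a minimal illustration of the record structure (real Common Crawl processing would stream the objects from Amazon S3), peeling one gzip member at a time:

```python
import gzip
import zlib

# Minimal sketch of the splitting step: a file made of concatenated gzip
# members is split by decompressing one member at a time; zlib's
# unused_data marks where the next member begins.

def split_members(data):
    """Yield the decompressed payload of each gzip member in `data`."""
    while data:
        d = zlib.decompressobj(wbits=47)  # 32 + 15: auto-detect gzip header
        yield d.decompress(data)
        data = d.unused_data  # bytes following this member

# Fake two-record archive: two gzip members back to back.
fake = gzip.compress(b"record one") + gzip.compress(b"record two")
print(list(split_members(fake)))  # [b'record one', b'record two']
```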

The dataset also provides an index of all the WARC files for a particular crawl. This index is a simple list of Amazon Simple Storage Service (Amazon S3) URLs pointing to all of the WARC files.

The overall architecture for our Lambda-based data processing solution is simple. The URLs of files to be processed are added to Amazon Simple Queue Service (SQS). Each message on that queue is sent to a separate instance of a Lambda function that processes all of the records in that file.

The results and metrics associated with scanning a given file are then placed on a downstream queue, and eventually recorded using custom Amazon CloudWatch metrics.

Candid Serverless-1

Figure 1 – Serverless data processing architecture overview.

Polling the Work Queue

AWS Lambda provides a native event source for triggering functions from an SQS queue. This event source works great for most use cases, but there’s a lag in how quickly the integration will consume function concurrency.

Because the work queue goes from a depth of zero to many thousands of messages almost immediately, and because we want to maximize the number of concurrently executing Lambda functions, we decided to implement our own optimized polling mechanism that we refer to as the fleet launcher.

When it starts, the fleet launcher immediately starts 3,000 instances of the worker Lambda function (the initial concurrency burst limit in us-east-1). Every minute thereafter, it adds another 500 worker functions to the processing fleet in order to utilize as much concurrency as possible without being throttled.
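
The ramp can be modeled with simple arithmetic using the figures cited here, a 3,000-function burst plus 500 per minute, capped at the account’s concurrency limit (12,000 below, matching the scale of the final test run):

```python
# Simple model of the launcher's ramp-up: a 3,000-function initial burst,
# then 500 more per minute, capped at the account's concurrency limit
# (12,000 here, roughly the scale of the final test run).

def fleet_size(minutes_elapsed, burst=3000, per_minute=500, limit=12000):
    """Worker-function count after a given number of minutes."""
    return min(burst + per_minute * minutes_elapsed, limit)

print(fleet_size(0))   # 3000
print(fleet_size(10))  # 8000
print(fleet_size(30))  # 12000 (ramp has hit the cap)
```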

The worker function starts by checking the work queue to see if there is work available. If not, it simply terminates. Otherwise, it will process the Amazon S3 object referenced in the message.

Once the object is successfully processed, it puts various metrics about the execution on the downstream metrics queue and deletes the message from the work queue.

Finally, the worker function recursively invokes itself, and the process repeats. Since processing messages off the metrics queue doesn’t require a huge spike in concurrency, we use the standard Lambda-SQS integration to trigger a function that updates CloudWatch based on the data captured there.

It’s important to note that by default the regional Lambda concurrency limit for a new account is 1,000.

In order to test the scale we could achieve with this solution, we worked with AWS to raise the concurrency limit on our account in the US East Region (N. Virginia) where we ran the test. For cases where you’re processing less than a few TB of data, this is probably not necessary.

Results and Observations

Our final test run used more than 12,000 concurrent Lambdas to scan over 64,300 individual Amazon S3 objects. In total, the system processed 259 TB of uncompressed data in just under 20 minutes.

The total cost of this run was $162 or about $0.63 per raw terabyte of data processed ($2.7 per compressed terabyte). We scanned a total of 3.1 billion archived HTTP responses and discovered 1.4 billion phone numbers.
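
Those unit economics check out with quick arithmetic:

```python
# Verifying the per-terabyte figures quoted above.
total_cost_usd = 162
uncompressed_tb = 259

cost_per_raw_tb = total_cost_usd / uncompressed_tb
print(round(cost_per_raw_tb, 2))  # 0.63, matching the quoted $0.63/TB

# At $2.70 per compressed terabyte, the compressed input was roughly:
print(round(total_cost_usd / 2.7))  # 60 (TB)
```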

Figure 2 – Dashboard of Candid Partners experiment.

Using this approach, we achieved significant scale using an architecture that provides excellent cost characteristics for ad hoc workloads.

Being able to go from zero to processing nearly two million records per second and back to zero over the course of just minutes is unheard of using traditional server-based architectures. It’s also incredibly powerful across many use cases.

While building grep for archived web pages is probably not a problem many businesses are dying to solve, we see many real-world applications for this approach.

Instead of the Common Crawl archive, a researcher could analyze genomic data in search of the patterns that hold the next breakthrough in the fight against cancer. Or a risk manager could process millions of claims to identify the most at-risk borrowers within minutes.

Imagine running real-time analytics on a flash sale, or if there are millions of Internet of Things (IoT) devices flooding you with data once a day.

Conclusion

If your organization collects and analyzes data, this data analysis pattern could be far simpler than your current methods of performing data analysis.

Your customer experience could improve, your costs of doing business could decrease, and your internal teams could work faster and cheaper than ever before.

If you’re interested in seeing what we did, check out Lambda at Scale, or visit the Serverless Repo.

The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.


Candid Partners – APN Partner Spotlight

Candid Partners is an AWS Competency Partner. They combine enterprise-class scale and process with born-in-the-cloud domain expertise to help translate complex business needs into specific technology solutions.

Contact Candid Partners | Practice Overview

*Already worked with Candid Partners? Rate this Partner

*To review an APN Partner, you must be an AWS customer that has worked with them directly on a project.

from AWS Partner Network (APN) Blog

Helping Customers Migrate to AWS Just Got Easier with the AWS ISV Workload Migration Program

By Guy Farber, Global Manager, AWS ISV Workload Migration Program

Is your business focused on migrating independent software vendor (ISV) workloads from on-premises to Amazon Web Services (AWS)? You’ll be excited to hear about the AWS ISV Workload Migration Program (WMP) for AWS Partner Network (APN) Consulting and Technology Partners.

The AWS ISV Workload Migration Program leverages the expertise of APN Partners and AWS best practices to create repeatable and scalable migration models. These models, in turn, enhance APN Partners’ AWS practices and support the success of AWS customers’ cloud journey.

The WMP provides APN Partners with technical enablement, migration funding to offset costs, and go-to-market support, making it easier to migrate customers’ ISV workloads to AWS.

Learn more about the AWS ISV Workload Migration Program >>

Program Prerequisites

APN Consulting and Technology Partners interested in participating in the WMP must meet the following prerequisites prior to applying:

  • Select, Advanced, or Premier tier APN Consulting or Technology Partner.
  • Able to demonstrate deployment and migration of a nominated ISV workload.
  • Have an on-premises install base of the workload to migrate to AWS.
  • Have an offering of the workload running on AWS deployed as software-as-a-service (SaaS), an AWS Marketplace listing, bring your own license (BYOL), or a managed service.
  • Each end-customer migration must result in a minimum of $36,000 in AWS annual recurring revenue (ARR) within 12 months of the migration.

Once you meet all of these prerequisites, you must pass a technical review by an AWS ISV Workload Migration Program Partner Solutions Architect. To get started, check out the ISV WMP website.

Program Benefits

The AWS ISV Workload Migration Program offers participating APN Partners the following benefits to accelerate the migration of their ISV workloads into AWS.

Technical Enablement

Development of a migration playbook (WMP Playbook) is key to participating in this program. A WMP Playbook is a technical document that provides a set of repeatable and comprehensive guidelines for migrating an end-customer’s ISV workload to AWS.

APN Partners will work with the WMP Partner Solutions Architect team to establish a WMP Playbook for the qualified ISV workload.

Investment

The WMP may invest 10-15 percent of the overall post-migration AWS ARR toward reducing end-customers’ migration costs. For example, a migration projected to drive $100,000 in AWS ARR could qualify for $10,000-$15,000 in funding. The funding rate is based on the projected AWS ARR driven by the ISV workload and the complexity of the migration.

Funding will be provided either in the form of cash or AWS Promotional Credits. For additional information, please reach out to the WMP team.

Go-to-Market Support

The WMP offers APN Partners certain marketing-related support in the form of speaking engagements, APN Blog inclusion, agency alignment for content creation, and more.

A variety of helpful marketing resources are also available in APN Marketing Central, which provide guidance on logo usage, best practices for case study creation, how to access Acceleration Funding (applicable to Advanced and Premier tier APN Partners only), and more.

Another way WMP helps APN Partners distinguish themselves is by including the program on your AWS Partner Solution Finder listing. There may also be opportunities to participate in AWS-led WMP promotion initiatives.

Launch Partners

Congratulations to our AWS ISV Workload Migration Program launch partners!


Getting Started

The AWS ISV Workload Migration Program provides a prescriptive migration approach, as well as technical, funding, training, and GTM support, to accelerate migrations of customers’ ISV workloads to AWS.

If you are interested in applying, check out the ISV WMP website >>

from AWS Partner Network (APN) Blog

Authority to Operate on AWS Program Helps Public Sector Partners Accelerate Security and Compliance for Customers


By Tim Sandage, Senior Security Partner Strategist at AWS

Security and compliance are primary considerations for many Amazon Web Services (AWS) customers as they begin their cloud journey. Public sector customers, in particular, face obstacles and challenges using commercially available solutions that may not have an Authority to Operate (ATO).

To help customers overcome these obstacles, we are excited to announce the Authority to Operate on AWS program that provides resources to solution providers who need assistance pursuing a compliance authorization, including:

  • Federal Risk and Authorization Management Program (FedRAMP)
  • Defense Federal Acquisition Regulation Supplement (DFARS)
  • Payment Card Industry Data Security Standard (PCI DSS)
  • Criminal Justice Information Services (CJIS)
  • As well as many other compliance programs >>

Solution providers running on AWS may encounter additional difficulties achieving an ATO due to complexity of both the process and technological barriers, uncertain time frames from start to finish, and unclear expectations of cost.

These challenges can result in an unintended barrier to entry and be a limiting factor in how well public sector customers can execute their mission, as the breadth of solutions available to them is not on par with companies operating in the commercial sector.

The ATO on AWS program connects customers to validated AWS Partner Network (APN) Partners who are members of the AWS Public Sector Partner Program.

Learn more about the Authority to Operate on AWS program >>


Program Benefits for AWS Customers

Authority to Operate on AWS helps solution providers running on AWS accelerate the security and compliance authorization process, reducing the time and cost it takes to achieve an ATO from their customers, which is required for production use (such as FedRAMP or CJIS).

The program provides resources to help solution providers build, implement, and optimize DevOps, SecOps, Continuous Integration and Continuous Delivery (CI/CD), and Continuous Risk Treatment (CRT) strategies and processes for their organization. It also provides access to managed solutions that minimize the work required to achieve such authorizations.

The ATO on AWS program consists of:

  • Community of validated APN Consulting Partners and solutions from APN Technology Partners that are proven to be effective in helping solution providers meet and maintain regulatory compliance requirements. These organizations must meet the qualifications defined by the program and are verified by AWS program administrators.
  • Community-developed and verified resources, templates, tools, and guidance that help simplify the development of compliant infrastructure, provide a more consistent operating environment, and reduce the time and costs of achieving and maintaining a compliant infrastructure.
  • Support and guidance from highly qualified AWS security and compliance strategists.

Program Requirements for APN Partners

APN Consulting Partners must be at the Select tier or above, be a member of the AWS Public Sector Partner Program, and have two (2) public sector customer references specific to completed ATO projects that resulted in a customer certification or accreditation.

APN Technology Partner solutions must have two (2) AWS case studies specific to a single ATO on AWS solution under review. These solutions must:

  • Target one or more of the primary steps in achieving compliance through automation: product design, production design, production, and operations.
  • Follow AWS best practices as defined in the AWS Well-Architected Framework.
  • Be clearly differentiated from existing solutions built by APN Partners.

Customer Success Stories

Here are some success stories showing how the ATO on AWS program has accelerated AWS customers through the compliance process on AWS.

Smartsheet

  • Solution Provider: Smartsheet is a cloud-based collaboration software company seeking FedRAMP authorization.
  • ATO on AWS Partner: Anitian, a security intelligence and compliance automation (CA) firm.
  • Program Resources Leveraged: Anitian used many APN Partner solutions available through the ATO on AWS program, including GitHub, CIS, Yubico, Trend Micro, Puppet, Saint, and Barracuda. Anitian also collaborated with APN Consulting Partners Kratos for security documentation, and Coalfire as the FedRAMP 3PAO (Third-Party Assessment Organization).
  • Outcomes: Smartsheet deployed a new workload in AWS GovCloud (US), developed a FedRAMP authorization package, and successfully navigated a formal third-party FedRAMP assessment, all in less than 90 days.

Innovest Systems

  • Solution Provider: Innovest Systems, LLC is a financial technology company seeking FedRAMP authorization.
  • ATO on AWS Partners: Coalfire, a cyber-risk management and compliance services organization; Schellman & Company, an independent third-party assessment organization.
  • Program Resources Leveraged: In addition to their consulting and engineering expertise, Coalfire leveraged both the AWS Security Automation and Orchestration (SAO) framework and technical solutions from several APN Technology Partners such as Palo Alto Networks, Splunk, GitHub, Trend Micro, and Puppet, to deploy preconfigured and FedRAMP compliant HashiCorp Terraform configurations to AWS GovCloud (US). Coalfire authored all of the requisite FedRAMP security documentation, while Schellman & Company completed the FedRAMP assessment in sync with the deployment.
  • Outcomes: Innovest deployed their workload to AWS GovCloud (US) and achieved a FedRAMP Authorization to Operate (ATO) in under 10 months.

RedFlex

  • Solution Provider: RedFlex is a developer of Intelligent Transport Systems (ITS) solutions and services.
  • ATO on AWS Partner: Anitian, a security intelligence and compliance automation (CA) firm.
  • Program Resources Leveraged: In collaboration with the ATO on AWS team, Anitian leveraged their own CA tool and Allgress’ compliance vision tool to deploy an automated, “audit ready” Criminal Justice Information Services Division (CJIS) security policy (version 5.7) architecture in AWS GovCloud (US), including documentation. This deployment leveraged a number of technical solutions from APN Partners, such as Trend Micro, Center for Internet Security (CIS), Puppet, GitHub, Allgress, Barracuda, Yubico, and Saint.
  • Outcomes: The deployment of the RedFlex solution was completed within 30 days and is currently under assessment and awaiting migration of customer data to the environment.

Team Up with an ATO on AWS Partner

We are launching the Authority to Operate on AWS program with an established community of 24 APN Partners that can help customers with security and compliance.

These validated APN Partners have demonstrated their expertise, suitability, and capability in helping customers achieve and maintain regulatory compliance requirements. They are committed to building the community resources and programs that assist all AWS customers and fellow APN Partners in meeting their compliance goals.

Get Started on Your Path to ATO

Solution providers interested in achieving a compliance authorization should visit the ATO on AWS website, or contact [email protected] for more information.

We are actively seeking more APN Partners to continue to expand this community and the resources available to customers in regulatory markets. If you are interested in joining us, please contact [email protected].

from AWS Partner Network (APN) Blog

Journey to Being Cloud-Native – How and Where Should You Start?


By Ashley Sole, Senior Engineering Manager at Skyscanner
By Rael Winters, CloudOps Product Manager at DevOpsGroup
By Kamal Arora, Sr. Manager, Solution Architecture at AWS

Cloud-native is one of the hottest topics in IT, so naturally it’s a source of much debate.

Amazon Web Services (AWS), DevOpsGroup, and Skyscanner have teamed up to cut through the hype and offer an objective look at “going native” in the context of large-scale cloud adoption.

DevOpsGroup is an AWS Partner Network (APN) Advanced Consulting Partner that offers digital transformation services based on DevOps practices. Skyscanner is a leading global travel service search engine that recently migrated all-in to AWS.

In this post, our goal is to differentiate between applications that justify the full cloud-native treatment upfront and those where a simpler, phased approach might be more appropriate.

Before we dive in, let’s consider what we mean by cloud-native. With such a complex, rapidly evolving concept, a simple definition is too restrictive. It’s more useful to consider cloud-native as a continuum.

The Cloud Native Maturity Model outlined by Kamal Arora et al. in Cloud Native Architectures is a good place to start. It positions “cloud-native services”, “application-centric design”, and “automation” as core elements that can evolve over time. Their sophistication shapes the overall maturity of a given application.

Read more about these three elements on the DevOpsGroup Blog.


Figure 1 – The Cloud Native Maturity Model.

What’s the relevance of cloud-native maturity? This brings us full-circle to the rationale behind all-in cloud migration or widespread adoption.

Mounting evidence shows that the way you implement cloud technology matters. The latest State of DevOps report, which considers data from more than 30,000 surveys, highlights that infrastructure-as-code (IaC), platform-as-a-service (PaaS), containers, and cloud-native architectures are predictive of organizational success.

Using these technologies and practices clearly impacts the speed at which the promised performance benefits are realized and translated into tangible commercial advantage.

If you’re migrating to the cloud, moving as-is has limited value in itself. Likewise, if you were born in the cloud, failure to exploit advanced features, services, and automation techniques will hinder long-term agility and growth.

Targeting Maturity Levels

This is why you need to decide upon the level of sophistication required. If you’re building in the cloud, it’s a case of focusing on what each application needs to achieve.

It may not be necessary to aim for the more advanced end of the cloud-native spectrum, but you still need to consider medium and long-term business goals while ensuring the application can accommodate ongoing improvement.

When it comes to migrations, re-platforming and evolving the existing IT estate is usually simpler, with lower costs and risks attached, than a full rewrite. Rael has written about three fundamental paths for cloud migration, which are closely aligned with the AWS “six Rs” of cloud migration.

Kamal’s maturity model, as shown in Figure 1, positions these three options (rewrite, re-platform, or re-host) within a larger spectrum that reflects the myriad potential choices within each path.

But the million-dollar question is, which applications are best suited to which approach?

Outcomes-Focused Migrations

In the scope of a large-scale cloud migration, it’s likely that only a small percentage of the estate should be earmarked to go fully native during the move. Developers’ time and energy are finite and need to be invested in the applications closest to core value creation.

This is a tough decision for many organizations. Most large-scale cloud migrations have hard deadlines, which often means a lot of compromise. Decision-making must consider desired outcomes, as well as technical factors.

DevOpsGroup has been through this process with several organizations migrating to AWS. Multiple factors can have a bearing on the outcome, but those with the greatest significance are:

  1. Amount of technical debt within a given application. More debt makes the migration a catalyst for much-needed overhaul.
  2. Application suitability for running in the cloud. Legacy-architected applications benefit the most from a rewrite.
  3. Proximity of an application to core value creation activities, which tends to go hand-in-hand with more development activity, a greater need for agility, and amplified gains from even marginal improvements.

Skyscanner’s All-In Migration to AWS

When leading global travel service search engine Skyscanner opted to pursue all-in cloud adoption, its data center hardware refresh cycle was a key driver.

The team faced a choice between hardware reinvestment and an extension of its expensive data center estate, or an aggressive, time-pressured migration. They opted for the latter.

There were five global data centers, each holding a VMware installation with more than 7,000 virtual machines (VMs) hosted in total. Together, the data centers held more than 300 different services, owned by multiple engineering teams within Skyscanner. It was inevitable the migration would be highly complex.

Skyscanner operates a “you build it, you run it” approach with product-aligned teams. There was no central migration team, and each product team was responsible for formulating and executing a plan to migrate its own services.

Project roadmaps were used to define milestones and deadlines, and ongoing communication ensured everyone was clear about expectations and accountability.

Most teams’ Plan A was a cloud-native rewrite. But it soon became apparent this was not feasible or appropriate for some applications. Having the roadmaps in place made it easier for teams to transition to a Plan B—such as a rehost—when necessary.

Here, we outline two Skyscanner applications that occupy different positions on the cloud-native maturity spectrum following their migration.

The rewrite of Skyscanner’s Flight Stack is a sophisticated and impressive example of how cloud-native principles can be strategically developed over time. But it was no mean feat and the process took more than two years.

The re-platforming of Skyscanner’s translation tool, Strings-as-a-Service, was much simpler. Even so, challenges were encountered along the way and in-flight decisions had to be made to ensure it was moved quickly and could operate smoothly post-migration.

Long-Haul Migration: The Flight Stack Rewrite

The Flight Stack application is central to Skyscanner’s ability to fulfill its core customer proposition. Users expect a seamless service and want to access the information they need in a matter of seconds. Effective management of the transition to AWS was critical.

To maximize the benefits of moving to AWS, the stack was rewritten from a .NET, SQL Server-backed monolith to a stateless Java microservices application.

Previously, the SQL Server was used for all aspects of data processing and storage. Following the rewrite, Apache Spark on Amazon EMR handles processing, Amazon Simple Storage Service (Amazon S3) is used for storage, and Redis has been deployed for real-time queries.


Figure 2 – Skyscanner’s Flight Stack architecture diagram.

The quote cache at the center of the diagram in Figure 2 runs on Amazon ElastiCache for Memcached, with global replication based on Amazon Simple Notification Service (SNS), which you can read more about in this blog. Redis is used to store search results.

For the Browse/TAPS service outlined in the diagram, a key architectural decision was to store data as immutable objects in Amazon S3. It uses a custom S3-based filesystem written in-house at Skyscanner and fronted by a Redis cache. A legacy SQL database was also migrated using lift and shift, but this mainly holds historical data.
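The design choice of immutable objects fronted by a cache can be illustrated with a small read-through sketch. This is a toy model, not Skyscanner’s code: plain dicts stand in for the Redis and S3 clients so the logic is self-contained, and the key names are hypothetical.

```python
def get_object(key, cache, store):
    """Read-through: serve from cache, fall back to the object store.

    Because stored objects are immutable, a cached entry never goes
    stale, so no TTL or invalidation logic is required.
    """
    value = cache.get(key)
    if value is None:
        value = store[key]   # in production: an S3 GetObject call
        cache[key] = value   # in production: a Redis SET, no expiry
    return value


# Hypothetical immutable object keyed by date.
store = {"browse/2019-05-01": b"quote-data"}
cache = {}
first = get_object("browse/2019-05-01", cache, store)   # miss, fills cache
second = get_object("browse/2019-05-01", cache, store)  # served from cache
```

Immutability is what makes the cache this simple: once written, an object under a given key never changes, so the cache never has to be invalidated.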

Statelessness is a critical factor of the rewritten Flight Stack, enabling it to run in Kubernetes on Amazon EC2 Spot Instances, unlocking significant cost benefits.

When AWS wants a Spot Instance back, there is a two-minute window to move the workload. So, Skyscanner developed a procedure to drain services and remove them from the cluster within this timeframe.

The application polls the Spot termination notice endpoint every five seconds, leaving at least one minute and 55 seconds to drain a node. The node is cordoned to prevent Kubernetes from scheduling anything new on it, and it’s deregistered from the Elastic Load Balancer to prevent it from receiving traffic.

Finally, the node is drained and all pods are moved to new nodes while waiting for the in-flight connections to terminate.
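That procedure can be sketched as follows. This is a minimal illustration of the steps described above, not Skyscanner’s actual tooling: the metadata URL is the standard EC2 Spot termination-notice endpoint, while the node name, target group ARN, and instance ID passed to `drain_node` are hypothetical.

```python
import subprocess
import urllib.error
import urllib.request
from datetime import datetime, timezone
from typing import Optional

# Standard EC2 instance metadata endpoint for Spot termination notices.
METADATA_URL = "http://169.254.169.254/latest/meta-data/spot/termination-time"


def seconds_until(termination_time: str, now: Optional[datetime] = None) -> float:
    """Seconds remaining before the Spot Instance is reclaimed."""
    deadline = datetime.fromisoformat(termination_time.replace("Z", "+00:00"))
    now = now or datetime.now(timezone.utc)
    return (deadline - now).total_seconds()


def termination_notice() -> Optional[str]:
    """Poll the metadata endpoint; return the reclaim timestamp, if any."""
    try:
        with urllib.request.urlopen(METADATA_URL, timeout=1) as resp:
            return resp.read().decode()
    except urllib.error.URLError:
        return None  # no termination notice yet


def drain_node(node: str, target_group_arn: str, instance_id: str) -> None:
    """Cordon, deregister, and drain, mirroring the three steps above."""
    # 1. Cordon: stop Kubernetes scheduling new pods onto the node.
    subprocess.run(["kubectl", "cordon", node], check=True)
    # 2. Deregister from the load balancer so it stops receiving traffic.
    subprocess.run(["aws", "elbv2", "deregister-targets",
                    "--target-group-arn", target_group_arn,
                    "--targets", f"Id={instance_id}"], check=True)
    # 3. Drain: evict pods to other nodes within the remaining budget.
    subprocess.run(["kubectl", "drain", node, "--ignore-daemonsets",
                    "--timeout=110s"], check=True)
```

A loop would call `termination_notice()` every five seconds and, once it returns a timestamp, use `seconds_until()` to budget the drain within the two-minute window.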

Rewriting the Flight Stack was a significant undertaking, but the rapid gains in terms of software modernization, resiliency, scalability, and cost optimization made it all worth the effort.

There was no single moment of switchover. Instead, the migration was handled in phases as different elements of functionality moved onto AWS. Additional services had to be implemented to handle traffic-shaping during the process, with flight searches alternating between AWS and the data center for a time.

The process Skyscanner adopted emulates the “strangler pattern” that’s gaining popularity in the cloud-native world for monoliths that cannot feasibly be rewritten in one go. Instead of using a cut-over rewrite, cloud-native functionality is slowly built around the application, progressively strangling it.
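At its core, a strangler façade is a routing decision: requests for migrated functionality go to the new cloud-native services, everything else still hits the monolith. A minimal sketch of the pattern (handler names and path prefixes are hypothetical):

```python
def make_router(migrated_prefixes, modern, legacy):
    """Return a façade that strangles the monolith path by path.

    As more functionality is rewritten, prefixes move into
    `migrated_prefixes` until the legacy handler receives no traffic.
    """
    def route(path):
        if any(path.startswith(p) for p in migrated_prefixes):
            return modern(path)
        return legacy(path)
    return route


# Toy handlers standing in for the new stack and the data center monolith.
route = make_router(
    {"/search/flights"},
    modern=lambda p: f"aws:{p}",
    legacy=lambda p: f"datacenter:{p}",
)
```

Growing the `migrated_prefixes` set over time is what gives the pattern its name: the façade progressively strangles the legacy application without a single cut-over.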

Today, Skyscanner operates a number of large multi-tenant Kubernetes clusters across multiple regions. These run thousands of pods, serving tens of thousands of requests per second to power the flight search product.

The product team is still scaling the application, and while this is ongoing, a conductor is being used to split traffic, making it easier to maintain stability and reliability.

Short-Haul Migration: Strings-as-a-Service

As a global business operating in more than 30 languages, Skyscanner relies on a complex localization process managed by translation management executives and software engineers.

A proprietary tool called Strings-as-a-Service holds JSON strings in a central repository, and enables new strings to be translated, then pushed to relevant services.

The goal is to deliver up-to-date native experiences, so customers feel they’re being looked after by local teams that understand their needs.

Strings-as-a-Service’s workflow focuses on checking the validity of translations before they’re pushed into production. Extracting this functionality into a microservice has allowed Skyscanner to ensure completed translations are immediately available instead of being bundled into software releases.
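The post doesn’t describe the exact validity rules, but a typical check of this kind ensures a translation preserves the placeholders of its source string. A hypothetical sketch (the `{name}` placeholder format is an assumption):

```python
import re

# Matches placeholders of the form {name}; the real format is unknown.
PLACEHOLDER = re.compile(r"\{(\w+)\}")


def translation_valid(source: str, translated: str) -> bool:
    """A translation must keep exactly the placeholders of its source."""
    return set(PLACEHOLDER.findall(source)) == set(PLACEHOLDER.findall(translated))
```

A check like this can gate each translated string before it is pushed to production, catching dropped or mistyped placeholders early.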

Strings-as-a-Service was part of a package of migrations handled by DevOpsGroup. The brief was to evolve the application to deliver rapid short-term benefits and accommodate further modernization after the migration.


Figure 3 – Strings-as-a-Service architecture.

To achieve this goal, the team redeployed the application into Docker containers, introduced GitHub to allow the containers to become stateless, and made functional changes to reduce the number of steps in the workflow.

Use of Docker containers was driven by the need to avoid scope creep, and to achieve the migration quickly. The existing application architecture wasn’t compatible with AWS Lambda and would have required major code changes.

The application had initially been designed with a concept of state and the team looked at various options to introduce statelessness, a fundamental quality for modern applications.

Amazon S3 buckets were considered, and Amazon Elastic File System (Amazon EFS) looked promising but wasn’t available in all required regions at the time. CIFS shares seemed like a good option, but a proof of concept (PoC) to test this couldn’t overcome performance issues.

Ultimately, GitHub Enterprise was selected as the central source of truth for JSON files, which facilitated statelessness and worked well with various sections of the workflow, pushing and pulling translated strings to and from the repository.

The intention was to use multiple Docker containers, with custom Shell scripts written to manage configuration and replace Ansible, which Skyscanner had been using for VM configuration management.

Docker Compose was used to test mini clusters before they were deployed into Amazon Elastic Container Service (Amazon ECS) using Skyscanner’s proprietary orchestration tool Slingshot.

However, when the time came to orchestrate the migration, it became apparent that Dockerising Strings-as-a-Service was too unreliable. This was largely due to the way it had originally been implemented.

For instance, the high level of state entrenched in the service (including local management of a Git repository) meant container start-up times were excessive. The file system had to be populated before reaching a state of readiness, and there was a high risk of losing in-progress changes if the container was terminated.

What’s more, the application required periodic maintenance. This involved engineers logging in to conduct operations locally, which would have been problematic in a Dockerised Amazon ECS environment.

Ultimately, the decision was made to shelve the Dockerised approach. Instead, Skyscanner rehosted the application’s existing VMs to AWS using a toolkit that DevOpsGroup had devised.

This toolkit utilized open source tooling, including Troposphere and some custom-developed libraries, to automate the creation of AWS CloudFormation templates during a lift and shift. The GET endpoint was rewritten as a Java Dropwizard application backed by Amazon S3.
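Tools like Troposphere build CloudFormation templates programmatically instead of by hand. As a rough illustration of the idea using only the standard library (the logical ID, AMI ID, and instance type below are hypothetical), a rehosted VM reduces to a generated template like this:

```python
import json


def instance_template(logical_id, image_id, instance_type):
    """Emit a minimal CloudFormation template for one rehosted VM."""
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            logical_id: {
                "Type": "AWS::EC2::Instance",
                "Properties": {
                    "ImageId": image_id,
                    "InstanceType": instance_type,
                },
            }
        },
    }


# Generate one template per VM discovered in the estate.
template = instance_template("StringsServiceVM", "ami-0abc1234", "t3.medium")
doc = json.dumps(template, indent=2)
```

Generating templates this way makes a lift and shift repeatable: each discovered VM is mapped through the same function rather than a hand-edited stack.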

This example underlines the scale of the complexities involved in cloud migrations. It reinforces the need for a phased approach, where learnings are harnessed and used to inform the next stage of work.

Strings-as-a-Service was the first project Skyscanner attempted to evolve in this way, and the extent of its incompatibility with Docker was impossible to predict.

When faced with this challenge, the team acted pragmatically and took an alternative approach without undue impact on progress.

Conclusion

Cloud-native is a complex and ever-changing concept. Whether you’re building a new service in the cloud or migrating existing applications, you need to be aware of this and make decisions accordingly.

When it comes to migrations, it’s important to note that full rewrites are inherently time-consuming. For critical applications close to the core value stream or underpinning competitive differentiation, a cloud-native rewrite may be worth the investment.

For others, approaches such as rehosting and re-platforming offer a perfectly acceptable shortcut and, in some cases, lift and shift is the most feasible option. That’s fine, providing the applications are revisited and modernized later.

There is no one-size-fits-all answer to the cloud-native question. Remembering that cloud-native is a continuum helps maintain perspective and keeps business objectives front-of-mind.

As with any major redevelopment, resources need to be applied intelligently. Establish your vision, and then develop a roadmap to achieve it. This strategy helped Skyscanner keep its ambitious migration on track, enabling teams to swiftly switch from a Plan A rewrite to a Plan B alternative when it looked like targets might be missed.

Whether you’re working towards all-in deployment, large-scale migration, or a gradual shift towards cloud-native principles, it’s important to identify what matters most to your business.

Consider where the greatest commercial benefits can be realized, and where the likely challenges lie. From this vantage point, you can make focused and logical decisions. It may be more appropriate—and beneficial—to address underlying issues like overdue technical debt before undertaking extensive rewrites.

In the digital economy, businesses have to continually evolve and modernize to remain relevant and satisfy customer demands. It’s important that cloud adoption strategies are rooted in this understanding. The ability of IT to adapt and scale has a direct bearing on future business success.




DevOpsGroup – APN Partner Spotlight

DevOpsGroup is an APN Advanced Consulting Partner. They work with global enterprises, offering digital transformation services based on DevOps practices and principles, underpinned by agile software development, to develop high-performing IT teams.

Contact DevOpsGroup | Practice Overview

*Already worked with DevOpsGroup? Rate this Partner

*To review an APN Partner, you must be an AWS customer that has worked with them directly on a project.

from AWS Partner Network (APN) Blog

Nine from AWS Honored as CRN’s Women of the Channel for 2019


Working at Amazon

Amazon works to identify the best possible talent from all backgrounds for technical and non-technical roles, partnering with organizations and academic institutions worldwide that reach underrepresented communities.

Through our unique interview process, which is based on our Leadership Principles, we work to understand the diverse perspectives that candidates from all backgrounds bring to Amazon. We actively recruit women globally and underrepresented racial/ethnic minority talent in the United States.

Amazon has 10 affinity groups, also known as employee resource groups, which bring Amazonians together across businesses and locations around the world. This includes the Amazon Women in Engineering (AWE), Black Employee Network (BEN), and [email protected] groups.

As of 2018, there were 40,000 Amazonians in more than 190 affinity group chapters worldwide. With executive and company sponsorship, these groups play an important role in building internal networks, advising Amazon business units, leading in service projects, and reaching out to communities where Amazonians live and work.

Learn more about diversity and inclusion at Amazon >>

2019 Women of the Channel Awards

At Amazon Web Services (AWS), we believe the future of tech should include people of every color, gender, belief, origin, and community. The future of tech should also be accessible, flexible, and inclusive. This vision is the foundation of our We Power Tech program, which focuses on building a pool of technologists as diverse as our world.

This year, CRN honored nearly 700 women whose channel expertise and vision are deserving of recognition. Nine women from AWS made the CRN’s Women of the Channel list for 2019, and we are proud of these Amazonians for their excellence in supporting customers, partners, and our global cloud community.

In addition, Kelly Hartman and Darci Kleindl were named to the Power 100, which spotlights female executives whose insight and influence help drive channel success. Congratulations Kelly, Darci, and all of our AWS honorees!


Power 100
Kelly Hartman, Global Head of the AWS Partner Network (APN)

Kelly started her career in the U.S. Air Force as an Airborne Communications Operator and Technician. From there, she went on to work for a large technology vendor where she worked with global service providers, and later launched and ran strategic partner programs.

Since joining AWS in 2014, Kelly has worked with AWS Partner Network (APN) Partners to develop AWS practices and solutions to serve business customers of all sizes.

“As my team grows, I’ve spent a lot of time thinking about the women who have shaped my path,” says Kelly. “I really admire my former boss and mentor, Dr. Monica Cojocneau. Monica took chances on me, taught me a lot about building and managing high-performance teams, and continues to mentor men and women around her.”

Read Kelly’s full award profile >>


Power 100
Darci Kleindl, General Manager, NA Partner Sales

Darci sets the cloud strategy and operating model within North America to deliver AWS services through the APN Partner community. Since joining AWS, she has built innovative models to identify business value with APN Partners and AWS customers.

Darci is known as a strategist and collaborative leader who takes action to exceed customer expectations. Talent development, mentorship, and building organizational capital are important priorities for Darci, and she’s active in developing programs that focus on women in business.

“The key to success is the ability to earn trust,” says Darci. “At all levels of our customers’ organizations, they are looking for partners who they can trust with the business technology decisions. This includes the ability to operate with high integrity in every situation, be vulnerable when necessary, and balance short- and long-term outcomes.”

Read Darci’s full award profile >>


Mira Ayad, Sr. Manager, Channel Incentives Program

Mira has worked with APN Partners since joining AWS in 2015, pulling from over a decade of experience working in the IT channel. She launched the AWS Solution Provider Program focused on helping APN Consulting Partners provide customers with a one-stop shop for all their AWS needs.

Prior to AWS, Mira worked at Microsoft in a similar capacity and had a great impact on shaping the next wave of the cloud-first partner channel.

“My focus is to continually listen to what customers want from APN Partners in terms of value-added services,” says Mira. “Our channel enables APN Partners to be deeply trained on AWS so they can build, deploy, and manage best-in-class AWS workloads on behalf of customers.”

Read Mira’s full award profile >>


Lucia Filanti, Partner Marketing Lead, Global ISVs

Lucia designs solution-level integrated campaign strategies for the ISV Partner community, delivering joint marketing plans with top strategic technology partners. Prior to AWS, she spent several years managing marketing teams at Oracle, SAP, VMware, and other technology start-ups.

Lucia’s professional interests have gravitated toward partner marketing roles, which have helped her understand and solve challenges for companies on the digital transformation journey.

“The move to cloud is taking place at a very fast pace,” says Lucia. “To be successful in the current environment, customers and APN Partners need to build agility into their businesses, as well as embrace and enable continuous transformation to take advantage of evolving technology trends.”

Read Lucia’s full award profile >>


Kristin Heisner, Head of Americas Partner Marketing

Kristin is responsible for the partner marketing strategy and execution within North and South America, driving AWS services through the APN Partner community to all customer segments.

Since joining AWS in 2015, she launched the Global System Integrator and Influencer Partner Marketing teams, and led the Global SAP Enterprise workload marketing team. Kristin worked at VMware for eight years in channel marketing.

“My goal is to work with our channel to build a stronger demand generation engine,” says Kristin, “while continuing to expand our network of partnerships, especially in Latin America, and build out an APN Partner community that can help solve our customers’ needs.”

Read Kristin’s full award profile >>


Barb Huelskamp, Sr. Manager, ISV Partner Development

Barb is a 25-year veteran in channel and sales leadership who delivers revenue growth with sustainable outcomes in technology, cloud, and software markets. She was hired to re-architect the channel strategy, program, and global alliance manager focus at AWS.

Barb is a dedicated and accountable leader with a passion for success and mentoring others. She insists on the highest standards and has been able to attract, develop, and retain key talent throughout her career.

“My first job in tech was with a women-owned firm,” says Barb. “Working for a female CEO was a big draw, but they also needed broad talents and a willingness to dive into new responsibilities. She gave me two pieces of advice that have never left me. First, always handle yourself with grace. Second, never participate in office politics. If you do your job, everything will work itself out.”

Read Barb’s full award profile >>


Barbara Kessler, Global APN & MSP Programs Leader

Barbara joined AWS in 2016 to lead the AWS Managed Service Provider (MSP) Partner Program. In this role, she has been responsible for the incubation and development of next-generation AWS MSPs globally.

In 2018, Barbara’s role expanded to include ownership of the foundational APN Program, where she has driven significant advancements and aligned partner programs to customer needs and outcomes. Through extensive research and consensus building, she successfully proposed a new construct of requirements and benefits for APN Partners.

“The key to success for APN Partners is focusing on customer needs and customer outcomes,” says Barbara. “This is passionately how AWS creates programs, is part of our Leadership Principle of Customer Obsession, and our most successful partners share this passion with us.”

Read Barbara’s full award profile >>


Tara Palmieri, Head of NA East Consulting Partners

Tara’s career has spanned 17 years as an “approachable badass” channel chief at Microsoft, Oracle, and now AWS. Her experience includes worldwide and NA leadership and management roles in professional services, sales, and product management.

For the last decade, Tara led Oracle’s NA programs and alliances teams, transforming their partner ecosystem from on-premises to the cloud. In March 2019, Tara joined AWS and is leading a team driving go-to-market with top APN Partners on the U.S. east coast.

“A cloud-first strategy and obsessing over the hybrid needs of customers with a focus on ‘super powers’ will drive success for channel partners,” says Tara. “Vendors are looking to focus on quality partners who take these super powers into new markets while building repeatable technical solution offerings.”

Read Tara’s full award profile >>


Rachel Rose, Head of APN Differentiation Programs

Rachel leads the AWS Global Differentiation Programs, which include hundreds of software and consulting partners around the world. She owns the strategy, execution, and marketing of global differentiation programs, including APN Navigate, AWS Service Delivery, and AWS Competency.

Rachel was employee number two and an integral player in launching the AWS Partner Network seven years ago, which included program development, partner enablement and onboarding, building strategic partner relationships, and driving overall program marketing and communications.

“In true Amazon fashion, we’re customer obsessed which includes the way we design our partner programs,” says Rachel. “We’re placing a huge focus on building and leveraging scalable tools that aid our customers in finding the right partners based on their business needs.”

Read Rachel’s full award profile >>


Learn More About Working at Amazon

We seek top talent from all industries and a range of backgrounds, from MBA graduates to veterans and military spouses, who join our offices around the world.

People who succeed at Amazon, like these talented women from AWS, have something in common—they are customer-centric, they are leaders, and they are innovators.

If you’re an inventor and an owner, you’ll love being an Amazonian. From day one at Amazon, you’ll take ownership of projects that have a direct impact on our customers.

Learn more about working at Amazon >>

from AWS Partner Network (APN) Blog

Joining the AWS Partner Network (APN) Strengthens Your Capabilities to Better Serve Customers

Tens of thousands of AWS Partner Network (APN) Partners from across the globe support Amazon Web Services (AWS) customers of all sizes to build sophisticated, personalized, scalable solutions in a diverse set of industries.

More than 90 percent of Fortune 100 companies utilize APN Partner solutions and services to drive their business outcomes.

The APN is the global partner program for technology and consulting businesses using AWS. We are focused on building long-term sustainable businesses by helping APN Partners build, market, and sell their offerings, and grow a successful cloud-based business.

For customers, the AWS Partner Network makes it easy to find top APN Partners who:

  • Possess extensive experience building and deploying customer solutions that are built on or integrated with AWS.
  • Provide well-architected solutions for AWS customers.
  • Develop and retain a strong bench of AWS trained and certified experts.

Learn more about becoming an APN Consulting or Technology Partner >>

What Makes the APN Different?

AWS approaches partnering differently. We lead with the customer first and design our strategies to enable APN Partners to deliver high-quality AWS solutions and services to joint customers.

The APN makes it easy for partners to find relevant business, technical, and customizable marketing resources to help build a healthy, sustainable, and profitable business.

From APN Consulting Partners supporting mass migrations, to independent software vendors (ISVs) developing new solutions, our customers know they can trust APN Partners to follow AWS best practices.

By becoming an APN Partner, you can:

  • Gain credibility by leveraging the AWS brand, known for customer centricity and the pace of innovation.
  • Deliver more innovation with the constantly evolving portfolio of ground-breaking AWS technologies and services.
  • Work with an assigned Partner Manager who will contact you within one (1) business day of signing up with the APN.
  • Highlight your expertise with APN programs that help differentiate your business practice.
  • Define your APN Partner journey based on your business focus area and capabilities.
  • Increase visibility to AWS field teams and AWS customers while taking advantage of sales opportunities.
  • Promote and sell your solutions through AWS Marketplace, a digital catalog for AWS customers.
  • Maximize opportunities by collaborating and sharing resources, knowledge, and experience with the APN Partner community.
  • Save time and money, and get the tools and resources you need to reach customers and respond quickly to customer issues.

Grow Your Business with the APN

Registering with the APN is the first step of your journey. The APN is a tiered program composed of Consulting and Technology Partners who progress through the Select, Advanced, and Premier tiers based on their level of engagement with AWS.

We engage on a deeper level with higher-tier APN Partners who invest significantly in their AWS practice and possess extensive experience building and deploying customer solutions on AWS.

Every APN Partner has a different journey and path to success. Define your journey as an APN Partner with programs that align with your capabilities and support your business growth to deliver memorable customer experiences.

The APN Navigate path provides prescriptive guidance to help you onboard with the APN, move through the tiers, and define every step of the way what you’d like to achieve as an APN Partner. This path will empower your organization to build on your core strengths and deploy innovative solutions on behalf of AWS customers.

Progressing through the APN tiers provides greater access to benefits that will help you build, market, and sell your solutions, regardless of workload, vertical, or solution area. In addition, you’ll unlock programs that help you grow your business and stand out.

Join the AWS Partner Network

If you have not already registered your company with the APN, create your APN Partner Central Account. Join at no cost, and then choose how to advance your journey with AWS.

Get started today with the APN >>

Stay Up-to-Date with the APN

For APN Consulting Partners

APN Consulting Partners are professional services firms that help customers design, architect, build, migrate, and manage their workloads and applications on AWS. Consulting Partners include System Integrators, Strategic Consultancies, Agencies, Managed Service Providers, and Value-Added Resellers.

Learn more about becoming an APN Consulting Partner >>

For APN Technology Partners

APN Technology Partners provide software solutions that are either hosted on, or integrated with, the AWS platform. Technology Partners include Independent Software Vendors (ISVs), SaaS, PaaS, Developer Tools, Management, and Security Vendors.

Learn more about becoming an APN Technology Partner >>

from AWS Partner Network (APN) Blog

How Slalom Created Personalized, Interactive Event Experiences Using Amazon Rekognition

By Chris Mendoza, Technology Enablement Consultant at Slalom

Amazon Rekognition makes it easy to add highly accurate image and video analysis to your applications.

The service’s core functionality allowed Slalom, an AWS Partner Network (APN) Premier Consulting Partner, to create three personalized, interactive experiences for attendees at REALIZE, the company’s inaugural, one-day client summit in Chicago.

Using Amazon Rekognition, Slalom created the following interactive, guest-friendly experiences:

  • A compliment-delivering booth that recognized guests using facial analysis and delivered a personalized compliment.
  • An interactive photo mosaic wall that recognized more than 500 unique faces and delivered information about each person.
  • A catering station that used sentiment analysis to register non-verbal cues on guests’ faces, read their expressions, and order a corresponding menu item.

All three experiences relied on a centralized back-end of serverless AWS Lambda functions for custom logic, Amazon DynamoDB for storage and session management, Amazon CloudFront and Amazon Simple Storage Service (Amazon S3) for hosting web apps, and Amazon CloudWatch log streams for real-time event analytics.

In this post, I’ll walk you through the key decisions Slalom made to build these experiences, share the impact they had on the guest experience at REALIZE, and provide our reference architecture and instructions so you can recreate them for your own event.

Objectives for the Event and Experiences

In March 2019, Slalom hosted an inaugural, one-day summit in Chicago called REALIZE. We brought together more than 100 clients, nonprofit leaders, and alliance partners through innovative, interactive, and community-focused experiences.

Slalom employees brought the event attendance to over 600 total guests. The day combined four keynotes, 12 unique breakout sessions, and 10 interactive experiences.

Guests were invited to a unique happy hour where they could network and explore custom-made interactive experiences. Five of these experiences were built using Amazon Web Services (AWS) and two were built in support of local nonprofit partners. Each experience was imagined, designed, and executed by a volunteer team from Slalom Chicago.

Our goal was to create dynamic, surprising experiences that brought three of Slalom’s Core Values to life:

  • Smile
  • Drive Connection and Teamwork
  • Focus on Outcomes

Experiences involved varying levels of physical and digital touchpoints, and were integrated into the event venue itself to create an immersive environment.

Video: Amazon Rekognition at Slalom’s REALIZE event (1:49)

Personalized Compliment-Delivering Booth

Our “Smile” experience was designed to be irresistibly fun and share Slalom’s commitment to finding joy in our work and celebrating the contributions of our team.

When entering the compliment booth, guests started the experience by saying, “Alexa, compliment me!” which then prompted them to look at our tablet camera.

After the guest took a selfie, we used facial recognition to identify the guest, retrieve their personalized compliment, and reply back on the Amazon Echo Show (audibly and on screen), all in real-time.

Slalom-Amazon Rekognition-1

This experience was one of our most impactful and meaningful for guests. Compliments were not randomly generated, but instead individually (and secretly!) sourced from within the Slalom community.

The team used Amazon Rekognition’s IndexFaces operation to create a Face Collection, which stored facial information that was used to identify known faces and return specific metadata.

Slalom prepared personalized compliments for all REALIZE guests, including more than 500 employees and 100 client, partner, and community leaders.

Interactive Photo Mosaic Wall

The “Drive Connection and Teamwork” experience featured a 40-foot-long photo mosaic wall with more than 500 pictures—one for each Slalom Chicago employee. It was designed to better connect our team by sharing a little of what makes each person unique and revealing previously unknown commonalities.

Slalom-Amazon Rekognition-2

Utilizing the same Face Collection used for the “Smile!” experience, we provided devices at the installation that allowed REALIZE attendees to take a picture of any face on the mosaic or scan people in the room.

The device would then display personalized information about that Slalom employee, like a fun fact or how long they’ve worked for Slalom.

Slalom-Amazon Rekognition-3

Catering Station Using Sentiment Analysis

Since the way to our hearts is through our stomachs, we designed a catering station where attendees could order appetizers using Amazon Rekognition’s facial analysis.

In our “Focus on Outcomes” experience, guests struck their best happy, sad, or angry expression and Amazon Rekognition returned details on the scanned face.

Sentiment was the most important feature returned, and the service matched that to a corresponding menu item to place an order. As people’s expressions were captured and analyzed in real-time, guests were directed to open a specific, themed cabinet where their food would be waiting.
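As an illustration of this matching step, here is a minimal Python sketch. The emotion labels follow the shape of Rekognition's DetectFaces response, but the menu items and function names are our own invention, not Slalom's implementation:

```python
# Hypothetical sketch: Rekognition's DetectFaces returns a list of emotions
# with confidence scores; the station picked the strongest recognized one and
# mapped it to a menu item. Menu items here are invented for illustration.

MENU_BY_SENTIMENT = {
    "HAPPY": "celebration slider",
    "SAD": "comfort-food mac and cheese bite",
    "ANGRY": "fiery jalapeno popper",
}

def pick_menu_item(emotions):
    """Choose the menu item for the highest-confidence recognized emotion."""
    known = [e for e in emotions if e["Type"] in MENU_BY_SENTIMENT]
    if not known:
        return None  # no expression we cater to; guest tries again
    top = max(known, key=lambda e: e["Confidence"])
    return MENU_BY_SENTIMENT[top["Type"]]

# Example fragment shaped like Rekognition's FaceDetail.Emotions list
emotions = [
    {"Type": "CALM", "Confidence": 20.1},
    {"Type": "HAPPY", "Confidence": 72.4},
    {"Type": "SAD", "Confidence": 7.5},
]
print(pick_menu_item(emotions))  # -> celebration slider
```
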

Designing the Guest Experience

These meaningful, guest-friendly experiences suited REALIZE’s scale, venue, and objectives beautifully, and were a big hit with attendees. In just about five hours, guests scanned 507 unique faces with more than 1,000 scans overall.

To maintain the element of surprise for event attendees, all 500+ employee photos and 100+ external guest photos were added to the Face Collection from publicly available photos and headshots. Keeping the Face Collection up-to-date required Slalom staff to upload faces and populate each guest’s profile with information right up to the day of REALIZE.

We couldn’t ask guests to upload better photos without revealing the secret, so our experiences needed to recognize different image resolutions. In many cases, we had to rely on one photo to match to the Face Collection, despite resolution size, condition of the photo, or even fluctuating lighting conditions at the venue.

While the functionality for these experiences is device-agnostic and web-based, we opted to provide dedicated devices to keep the experience location-based and to encourage interaction between guests and hosts. Slalom team members acted as hosts to greet guests and share insight into their experience’s development.

Building with Privacy in Mind

Guest privacy was a key consideration for the Slalom team. While attendees consented to and opted into participating in the experiences, we acknowledged the sensitivity around biometric data and adhered to the Illinois Biometric Information Privacy Act (740 ILCS 14/).

This act stipulates the following provisions for biometric data and information:

  • Obtain consent from individuals if the company intends to collect or disclose their personal biometric identifiers.
  • Destroy biometric identifiers in a timely manner (≤ 3 years).
  • Securely store biometric identifiers.

The experiences as created can be configured to cater only to event attendees who have opted in as participants. This guarantees that no results are returned for people you do not intend to match, a useful safeguard against sharing the biometric data of guests who prefer to opt out.

The REALIZE experiences did not persist any image data. When a guest took a picture, it was immediately sent to Amazon Rekognition and then deleted; only a hash of the response was kept, with no way of linking the identified individual to the image itself.

Long-term, this would normally impact future enhancements to our training data for a facial recognition model, but the Amazon Rekognition service is automatically trained on reference data by AWS. The hash can provide relevant matching criteria without having to store or refer to the original image.
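A minimal sketch of this hash-and-discard approach, assuming a Rekognition-style response dictionary (the function names are illustrative, not from the Slalom codebase):

```python
# Sketch of the privacy approach described above: hash the Rekognition match
# result and drop the image bytes, so only the hash survives for analytics.

import hashlib
import json

def hash_match_response(response):
    """Return a stable SHA-256 hex digest of a response dictionary."""
    canonical = json.dumps(response, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def handle_scan(image_bytes, response):
    # The image is never stored: discard it right after the service call and
    # keep only the response hash as the matching record.
    del image_bytes
    return {"match_hash": hash_match_response(response)}
```
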

Developing Your Own Facial Recognition Experiences

Below is a simplified step-by-step guide outlining the facial recognition experiences set up by the Slalom team.

We started from instructions provided by the AWS Machine Learning Blog and created custom integrations with Amazon Rekognition and complementary AWS services to fit our needs. Device-agnostic web apps served as the front-ends, designed for each station.

The figure below shows the application workflow separated into two distinct parts:

  • Indexing (blue workflow) represents the process of importing faces into the Face Collection for analysis.
  • Analysis (black workflow) represents the process of querying the Face Collection for matches within the index.

Slalom-Amazon Rekognition-4

Figure 1 – Application workflow.

Follow these steps to recreate our experience for your own event:

Step 1: Create an IAM Role with full permissions to the following AWS services: Amazon S3, AWS Lambda, Amazon DynamoDB, and Amazon Rekognition. This will be temporary to help us quickly set up. We’ll attach this new IAM Role to the Amazon Elastic Compute Cloud (Amazon EC2) instance we’ll be spinning up.

Step 2: Go into your AWS console and spin up an Amazon EC2 instance. This will be used to access the AWS account via the command line to run AWS Command Line Interface (CLI) commands.

Note that if you have the ability to access the AWS account via your local computer’s CLI, feel free to use that method.

Step 3: Create a DynamoDB table for our application. Log on to the Amazon EC2 instance we just spun up and run the following CLI command to create the table:

aws dynamodb create-table \
--table-name realize-2019-face-recognization-tbl2 \
--attribute-definitions \
AttributeName=RekognitionId,AttributeType=S \
--key-schema AttributeName=RekognitionId,KeyType=HASH \
--provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1

Step 4: Create a new collection in Amazon Rekognition by running this CLI command:

aws rekognition create-collection --collection-id <WhatEverNameYouWantHere>

Step 5: Create another IAM Role with full permissions to these AWS services: Amazon S3, Amazon DynamoDB, AWS CloudWatch Logs, Amazon Rekognition, and AWS Lambda Execution. You can fine-tune the permissions if you choose to do so. This is our AWS Lambda execution role.

Step 6: Create the Lambda functions for our application. Log in to the Amazon EC2 instance we spun up and run the following CLI commands. Note that create-function requires a deployment package; point --zip-file at any placeholder zip for now, since you'll upload the real code in Step 7.

aws lambda create-function \
	--function-name IndexFaces-realize-2019-face-recognization \
	--runtime python2.7 \
	--role <LAMBDA EXECUTION ROLE YOU CREATED> \
	--handler index.lambda_handler \
	--zip-file fileb://placeholder.zip \
	--timeout 10 \
	--memory-size 128

aws lambda create-function \
	--function-name realize-2019-detect-face \
	--runtime python2.7 \
	--role <LAMBDA EXECUTION ROLE YOU CREATED> \
	--handler lambda_function.lambda_handler \
	--zip-file fileb://placeholder.zip \
	--timeout 180 \
	--memory-size 512

The two CLI commands above will create two Lambda functions:

  • IndexFaces-realize-2019-face-recognization: This Lambda indexes the faces of individual employees. It gets invoked by an Amazon S3 event when an employee’s picture (including metadata) lands in a specific S3 folder. The picture gets an identifier (face id) from the Amazon Rekognition service, and the Lambda saves the metadata contents with the id to the DynamoDB table created in Step 3.
  • realize-2019-detect-face: This Lambda fetches the record from DynamoDB and passes it to the front-end application to display. It’s invoked by an API endpoint from Amazon API Gateway. The API submits the base64 of the image it wants the content for, gets its id from Amazon Rekognition, then searches the DynamoDB table from Step 3 and returns the result if a record is matched.
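The indexing Lambda's core record-shaping logic can be sketched as follows. This is a simplified illustration, not Slalom's actual code; the Rekognition call that produces the face id is stubbed out so the shaping step stands on its own:

```python
# Hedged sketch of the indexing Lambda's job: given the face id returned by
# Rekognition's IndexFaces and the S3 object's user metadata, build the
# DynamoDB item keyed by RekognitionId (the table's partition key, type S).

def build_face_item(rekognition_face_id, s3_metadata):
    """Shape the DynamoDB item stored for one indexed employee photo."""
    return {
        "RekognitionId": rekognition_face_id,          # partition key
        "FullName": s3_metadata.get("full_name", ""),
        "Practice": s3_metadata.get("practice", ""),
        "Title": s3_metadata.get("title", ""),
        "Tenure": s3_metadata.get("tenure", ""),
    }

# In the real Lambda, rekognition_face_id would come from something like:
#   rekognition.index_faces(CollectionId=..., Image={"S3Object": {...}})
item = build_face_item(
    "a1b2c3d4", {"full_name": "EMPLOYEE NAME", "title": "Consultant"}
)
```
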

Step 7: Modify the code in the Lambda functions; in both files, replace the collection id with whatever you named it in Step 4. Compress all the files in the code folder into a single zip file, then go to the corresponding function in the AWS Lambda console and upload it there.

Step 8: In the Amazon API Gateway console, create a new endpoint and POST method that invokes the realize-2019-detect-face Lambda. Make sure to enable CORS and deploy.
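For reference, the request body the front end POSTs to that endpoint can be built like this (a sketch using only the standard library; the "image" field name is an assumption, so match whatever your detect-face Lambda parses):

```python
# Minimal sketch of the front end's request to the API Gateway endpoint:
# the captured image, base64-encoded, wrapped in a small JSON body.

import base64
import json

def build_detect_payload(image_bytes):
    """Return the JSON body carrying the base64-encoded captured image."""
    return json.dumps(
        {"image": base64.b64encode(image_bytes).decode("ascii")}
    )

payload = build_detect_payload(b"\xff\xd8\xff\xe0 fake jpeg bytes")
# POST this body to the endpoint created in Step 8 (CORS must be enabled,
# since the web app is served from an S3/CloudFront origin).
```
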

Step 9: Deploy the front-end web files. Create an Amazon S3 bucket, or use an existing bucket, and place the files in the S3_Files folder into that bucket. Make all three files publicly readable by right-clicking each and selecting “Make public.”

Note that you will need to change the API endpoint in the index.html file to the new API endpoint you ended up creating.

Step 10: In the bucket where you’ll add the initial images of employees (or whatever group you’re creating this experience for) with their metadata, create an Event that executes the IndexFaces-realize-2019-face-recognization function. We recommend creating the event on the specific folder where you’ll land the image files with metadata.

Step 11: Add the employee images with their metadata to the bucket from Step 10 by running the below CLI command:

aws s3 cp /home/ec2-user/Realize_Client_Photos/EMPLOYEE_NAME.jpg \
	s3://<bucket with event from step 10>/<folder that the event is triggered on>/ \
	--metadata '{"full_name":"EMPLOYEE NAME","practice":"Business Advisory Services","title":"Consultant","tenure":"1.39166666666667"}'

The CLI command above will populate your Face Collection database with the appropriate metadata to return when a match is found.
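As an aside, the fractional "tenure" value in the metadata above can be turned into a display string with a small helper like this (our illustration, not part of the original solution):

```python
# Illustrative helper: convert the fractional-years "tenure" metadata value
# into a short display string for the mosaic wall.

def format_tenure(tenure_str):
    """Turn a tenure in fractional years, e.g. "1.39166666666667",
    into a rounded "Xy Ym" string."""
    years_float = float(tenure_str)
    years = int(years_float)
    months = round((years_float - years) * 12)
    if months == 12:  # rounding pushed us to a full extra year
        years, months = years + 1, 0
    return f"{years}y {months}m"

print(format_tenure("1.39166666666667"))  # -> 1y 5m
```
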

Figure 2 displays the metadata returned by the application at REALIZE when a match is found:

Slalom-Amazon Rekognition-5

Figure 2 – Sample of Face Collection metadata definitions and values.

Step 12: After adding employees to your DynamoDB database, go to the URL of the index.html file you added in Step 9. You can find it by going to the Amazon S3 location of the file in the AWS console.

Now, when you take a picture of that employee, the Lambda will fire and return the results from DynamoDB to your front end and user.

Summary

Since Chicago’s REALIZE event in March, other Slalom teams and markets have leveraged this solution to build their own creative facial recognition experiences as part of their local market REALIZE events.

For example, one Slalom team is considering building a similar photo wall with digital instead of static images. Additionally, Slalom invited attendees of the AWS Summit in Chicago to earn Slalom-branded swag at our booth, courtesy of an iterated version of this Amazon Rekognition solution.

With the power of Amazon Rekognition’s capability to read faces, objects, scenes, and sentiments across many mediums, teams can leverage this service creatively to suit their venue, scale, and experience.

To learn more about the Slalom and AWS partnership and how Slalom helps clients build for the future with modern data and technology solutions, visit slalom.com.

The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.


Slalom – APN Partner Spotlight

Slalom is an AWS Premier Consulting Partner. They are a modern consulting firm focused on strategy, technology, and business transformation. Slalom’s teams are backed by regional innovation hubs, a global culture of collaboration, and partnerships with the world’s top technology providers.

Contact Slalom | Practice Overview

*Already worked with Slalom? Rate this Partner

*To review an APN Partner, you must be an AWS customer that has worked with them directly on a project.

from AWS Partner Network (APN) Blog