Category: APN

Say Hello to 41 New AWS Competency, MSP, and Service Delivery Partners Added in September

The AWS Partner Network (APN) is the global partner program for Amazon Web Services (AWS). We enable APN Partners to build, market, and sell their AWS-based offerings, and we help customers identify top APN Partners that can deliver on core business objectives.

To receive APN program designations such as AWS Competency, AWS Managed Services Provider (MSP), and AWS Service Delivery, organizations must undergo rigorous technical validation and assessment of their AWS solutions and practices.

These designations help customers identify and choose specialized APN Partners that can provide value-added services and solutions. Guidance from these skilled professionals can lead to better business and bigger results.

Team Up with AWS Competency Partners

If you want to be successful in today’s complex IT environment, and remain that way tomorrow and into the future, teaming up with an AWS Competency Partner is The Next Smart.

The AWS Competency Program verifies, validates, and vets top APN Partners that have demonstrated customer success and deep specialization in specific solution areas or segments.

These APN Partners were recently awarded AWS Competency designations:

AWS Data & Analytics Competency

AWS DevOps Competency

AWS Government Competency

AWS Life Sciences Competency

AWS Migration Competency

AWS Oracle Competency

AWS Security Competency

AWS Storage Competency

Team Up with AWS Managed Service Providers

The AWS Managed Service Provider (MSP) Partner Program recognizes leading APN Consulting Partners that are highly skilled at providing full lifecycle solutions to customers.

Next-generation AWS MSPs can help enterprises invent tomorrow, solve business problems, and support initiatives by driving key outcomes. AWS MSPs provide the expertise, guidance, and services to help you through each stage of the Cloud Adoption Journey: Plan & Design > Build & Migrate > Run & Operate > Optimize.

Explore 7 reasons why AWS MSPs are fundamental to your cloud journey >>

Meet our newest AWS Managed Service Providers (MSP):

Team Up with AWS Service Delivery Partners

The AWS Service Delivery Program identifies and endorses top APN Partners with a deep understanding of specific AWS services, such as AWS CloudFormation and Amazon Kinesis.

AWS Service Delivery Partners have proven success delivering AWS services to end customers. To receive this designation, APN Partners must undergo service-specific technical validation by AWS Partner Solutions Architects, and complete a customer case business review.

Introducing our newest AWS Service Delivery Partners:

Amazon API Gateway Partners

Amazon Aurora MySQL-Compatible Edition Partners

Amazon CloudFront Partners

AWS Database Migration Service Partners

AWS Direct Connect Partners

Amazon EC2 for Microsoft Windows Server Partners

AWS Lambda Partners

Amazon RDS Partners

AWS WAF Partners

Want to Differentiate Your Partner Business? APN Navigate Can Help.

If you’re already an APN Partner, enhance your Cloud Adoption Journey by leveraging APN Navigate for a prescriptive path to building a specialized practice on AWS.

APN Navigate tracks offer APN Partners the guidance to become AWS experts and deploy innovative solutions on behalf of end customers. Each track includes foundational and specialized e-learnings, advanced tools and resources, and clear calls to action for both business and technical tracks.

Learn how APN Navigate is a partner’s path to specialization >>

Learn More About the AWS Partner Network (APN)

As an APN Partner, you receive business, technical, sales, and marketing resources to help you grow your business and better support your customers.

See all the benefits of being an APN Partner >>

Find an APN Partner to Team Up With

APN Partners are focused on your success, helping customers take full advantage of the business benefits AWS has to offer. With their deep expertise on AWS, APN Partners are uniquely positioned to help your company.

Find an APN Partner that meets your needs >>

from AWS Partner Network (APN) Blog

Your Guide to APN Partner Sessions, Workshops, and Chalk Talks at AWS re:Invent 2019

AWS re:Invent 2019 is almost here and reserved seating is now live!

To reserve seating for re:Invent activities throughout the week, including Global Partner Summit, log into the event catalog using your re:Invent registration credentials. Build out your event schedule by reserving a seat in available sessions.

Reserved seating is for breakout sessions, workshops, chalk talks, builder sessions, hacks, spotlight labs, and other activities. Keynotes, builders fairs, demo theater sessions, and hands-on labs are first come, first served and not included in reserved seating.

Reserve your seat today at AWS re:Invent activities >>

Global Partner Summit Seating

At re:Invent, members of the AWS Partner Network (APN) can learn how to leverage AWS technologies to better serve their customers, and discover how the APN can help them build, market, and sell their AWS offerings. This year, we have 76 sessions dedicated to existing and prospective APN Partners.

You can find all of the partner-related sessions in the re:Invent catalog by selecting “Partner” under the Topics filter on the left side of the page.

There are different types of sessions to fit your company’s needs. Here are some GPS sessions to keep an eye on!

Breakout Sessions

Breakouts are one-hour, lecture-style sessions delivered by AWS experts.

Business Breakouts

  • GPSBUS207 – Build Success with New APN Offerings
    Learn about new APN program launches and announcements made at the Global Partner Summit keynote at re:Invent. These new APN programs are designed to help you demonstrate deep AWS expertise to customers and achieve long-term success as an APN Partner.
  • GPSBUS203 – APN Technology Partner Journey: Winning with AWS for ISVs
    Hear from AWS experts and APN Partners about the steps of the APN Technology Partner journey, from onboarding to building, marketing, and selling. We share with you the markers for success along each path, programs to take advantage of, and how to accelerate your growth.

Technical Breakouts

  • GPSTEC337 – Architecting Multi-Tenant PaaS Offerings with Amazon EKS
    Learn the value proposition of architecting a multi-tenant platform-as-a-service (PaaS) offering on AWS, and the technical considerations for securing, scaling, and automating the provisioning of customer instances within Amazon Elastic Kubernetes Service (Amazon EKS).
  • GPSTEC338 – Building Data Lakes for Your Customers with SAP on AWS
    In this demo-driven session, we show the best practices and reference architectures for extracting data from SAP applications at scale. Get prescriptive guidance on how to design high-performance data extractors using services like AWS Glue and AWS Lambda.

Workshops

Workshops are two-hour, hands-on sessions where you work in teams to solve problems using AWS services. Workshops organize attendees into small groups and provide scenarios to encourage interaction, giving you the opportunity to learn from and teach each other.

  • GPSTEC340 – How to Pass a Technical Baseline Review
    A Technical Baseline Review (TBR) is a prerequisite for achieving APN Advanced Tier status. In this workshop, learn why the review is important for the success of your product, how Partner Solutions Architects evaluate your architecture, and how to get prepared.
  • GPSTEC404 – Build an AI to Play Blackjack
    In this workshop, use computer vision and machine learning to build an AI to play blackjack. Build and train a neural network using Amazon SageMaker, and then train a reinforcement learning agent to make a decision that gives you the best chance to win.

Chalk Talks

Chalk Talks are one-hour, highly interactive sessions with a small audience.

  • GPSTEC204 – Technical Power-Ups for AWS Consulting Partners
    Learn about AWS technical assets that can help you deliver successful cloud projects. Dive deep into AWS Immersion Days and Well-Architected Reviews, and leverage GameDays and Hackathons to propel customers along their cloud adoption journey.
  • GPSTEC303 – Overcoming the Challenges of Being a Next-Generation MSP
    Discuss how the AWS Managed Service Provider (MSP) Partner Program guides and assists organizations in various stages of maturity to overcome the challenges of transitioning from being a traditional MSP to being a next-generation MSP on AWS.

Builder Sessions

Builder Sessions are one-hour, small group sessions with up to six customers and one AWS expert, who is there to help, answer questions, and provide guidance.

  • GPSTEC417-R – [REPEAT] Build a Custom Container with Amazon SageMaker
    Build a custom container that contains a trained PyTorch model, and deploy it as an Amazon SageMaker endpoint. A PyTorch/fastai model is provided for learning purposes.
  • GPSTEC418-R – [REPEAT] Securing Your .NET Container Secrets
    Many customers moving .NET workloads to the cloud containerize applications for agility and cost savings. In this session, learn how to safely containerize an ASP.NET Core application while leveraging services like AWS Secrets Manager and AWS Fargate.

Learn More About Global Partner Summit

Join us for the Global Partner Summit at re:Invent 2019, which provides APN Partners with opportunities to connect, collaborate, and discover.

Learn how to leverage AWS technologies to serve your customers, and discover how the AWS Partner Network (APN) can help you build, market, and sell your AWS-based business. You’ll have plenty of opportunities to connect with AWS field teams and other APN Partners.

This year, Global Partner Summit sessions will take place throughout the week across the entire re:Invent campus.

Learn more about Global Partner Summit >>

Why Your Company Should Sponsor AWS re:Invent 2019

Is your company joining the crowd of up to 65,000 attendees expected at AWS re:Invent 2019? Enhance your conference experience and drive lead generation through sponsorship, an exclusive opportunity for APN Partners and select AWS enterprise customers.

With plenty of turnkey options still available, it’s not too late to participate in the leading global customer and partner conference for the cloud computing community.

Learn more about sponsorship and get started today >>

from AWS Partner Network (APN) Blog

How to Reduce AWS Storage Costs for Splunk Deployments Using SmartStore

By Devendra Singh, Partner Solutions Architect at AWS
By Jae Jung, Global Strategic Alliances Sales Engineer – APAC at Splunk

It can be overwhelming for organizations to keep pace with the amount of data being generated by machines every day.

Forbes estimates that around 2.5 quintillion bytes of data are generated each day from sources such as the Internet of Things (IoT), websites, and IT services. This data is a great source of meaningful information that organizations can extract, but these companies need software vendors to develop tools that help.

Splunk is an AWS Partner Network (APN) Advanced Technology Partner with multiple AWS Competencies in key solution areas such as Data & Analytics, DevOps, and Security. Its popular big data platform has seen widespread adoption globally.

In a wide range of use cases ranging from cyber security and network operations to the expanding adoption of IoT and machine learning, Splunk software and cloud services enable customers to search, monitor, analyze, and visualize machine-generated big data.

In this post, we will introduce you to Splunk SmartStore and how it helps customers to reduce storage cost in a Splunk deployment on Amazon Web Services (AWS).

Solution Overview

Until recently, Splunk workloads on AWS typically mirrored their on-premises deployments, installed on an array of Amazon Elastic Compute Cloud (Amazon EC2) instances and attached Amazon Elastic Block Store (EBS) volumes.

These workloads rarely took advantage of additional AWS services such as Amazon Simple Storage Service (Amazon S3). But customers kept asking about Splunk’s compatibility with Amazon S3 considering all of the redundancy and cost benefits built into the service.

This issue was solved with the release and refinement of SmartStore for Splunk Enterprise. SmartStore reduces total cost of ownership (TCO), efficiently reallocates infrastructure spend, and brings all of the benefits of Amazon S3 to Splunk deployments on AWS.

SmartStore for Splunk Enterprise

SmartStore finally brings decoupling of storage and compute to the indexer tier, which has traditionally had the highest infrastructure demands from a cost and performance standpoint.

In the past, search peers would have been provisioned or built with a fixed amount of storage and compute, and organizations would have considered joining additional peers to the cluster whenever compute was fully consumed, even if storage headroom remained, or vice versa.

With the separation of compute resources and storage onto Amazon S3, however, organizations can now organically scale search peers into Splunk indexer clusters to resolve compute constraints without unnecessary investment into storage.

SmartStore works by moving “warm” or “cold” buckets (i.e. Splunk containers of indexed data that is no longer being actively written) to Amazon S3 via API. The search peers can still operate in an indexer cluster, but each peer contains a cache manager that handles the writing and retrieval of buckets from S3, as required.

SmartStore has been available since Splunk 7.2, and the recent release of Splunk 7.3 has enabled Data Model Acceleration (DMA) support for SmartStore-enabled indexes. This was a critical path for customers using products with heavy DMA usage, with Splunk Enterprise Security being the most obvious.

As SmartStore can be enabled on a per-index basis, customers can choose to use it for all of their data, or just a subset to start before migrating their indexer data completely.

Configuring SmartStore to Use Amazon S3

Since SmartStore leverages Amazon S3 for storage, users must begin the configuration by creating an S3 bucket with the appropriate permissions.

When configuring S3 buckets:

  • They must have read, write, and delete permissions.
  • If the indexers are running on Amazon EC2, provision the S3 buckets in the same AWS Region as the Amazon EC2 instances that use them.
  • As a best practice, use an AWS Identity and Access Management (IAM) role for S3 access when deploying indexers on AWS.

Step 1: Create an Amazon S3 Bucket

You can create an Amazon S3 bucket from the AWS Management Console, or by using the following command line syntax:

aws s3api create-bucket --bucket <bucketname> --region <regionID> --create-bucket-configuration LocationConstraint=<regionID>

Example:

aws s3api create-bucket --bucket splunk-index-singapore --region ap-southeast-1 --create-bucket-configuration LocationConstraint=ap-southeast-1

Note that ap-southeast-1 is the Region code for the AWS Singapore Region. Also note that S3 bucket names are globally unique, so you can’t use the splunk-index-singapore bucket name again; choose a different bucket name for your deployment.
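
If you prefer to script this step, the equivalent call with the AWS SDK for Python (boto3) looks like the sketch below. This assumes boto3 is installed and configured on your workstation; it is not part of the Splunk configuration itself.

import boto3

# Create the SmartStore bucket in the same Region as the indexers (Singapore in this example).
# Substitute your own globally unique bucket name.
s3 = boto3.client("s3", region_name="ap-southeast-1")
s3.create_bucket(
    Bucket="splunk-index-singapore",
    CreateBucketConfiguration={"LocationConstraint": "ap-southeast-1"},
)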

Step 2: Configure Amazon S3 Access from Splunk Indexers

There are two approaches to configuring Amazon S3 buckets:

  • Approach 1: Configure an IAM role with the required permissions to access your S3 bucket, and configure the Amazon EC2 instances with Splunk indexer capability to use that IAM role. This is the recommended approach and helps to avoid sharing security credentials and access keys when configuring SmartStore.
  • Approach 2: This approach is not recommended, but you can use an AWS access key to access the S3 bucket. If you don’t have access keys, create an IAM user (or use an existing user with the required permissions to access the S3 bucket) and generate an access key for it. For more information on how to generate AWS access keys, please see the documentation. Whichever approach you choose, you can sanity-check the resulting access with the short sketch below.
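
The following is a minimal boto3 sketch, assuming Python and boto3 are available on an indexer, that exercises the read, write, and delete permissions SmartStore requires. It is purely an illustrative check and not part of the Splunk setup:

import boto3

BUCKET = "splunk-index-singapore"  # substitute your SmartStore bucket name

# With an IAM role attached to the instance (Approach 1), boto3 picks up credentials automatically.
s3 = boto3.client("s3")

# Exercise the three permissions SmartStore requires: write, read, and delete.
s3.put_object(Bucket=BUCKET, Key="smartstore-access-check", Body=b"ok")
print(s3.get_object(Bucket=BUCKET, Key="smartstore-access-check")["Body"].read())
s3.delete_object(Bucket=BUCKET, Key="smartstore-access-check")
print("Read/write/delete access confirmed for", BUCKET)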

Step 3: Configure Splunk Indexer to Use Amazon S3 by Editing indexes.conf File

This example configures SmartStore indexes, using an Amazon S3 bucket as the remote object store.

The SmartStore-related settings are configured at the global level, which means all indexes are SmartStore-enabled and all use a single remote storage volume, named remote_store. This example also creates one new index called cs_index.

[default]
# Configure all indexes to use the SmartStore remote volume called
# "remote_store".
# Note: If you want only some of your indexes to use SmartStore,
# place this setting under the individual stanzas for each of the
# SmartStore indexes, rather than here.
remotePath = volume:remote_store/$_index_name
 
repFactor = auto
 
# Configure the remote volume
[volume:remote_store]
storageType = remote
 
# On the next line, the volume's path setting points to the remote storage location
# where indexes reside. Each SmartStore index resides directly below the location
# specified by the path setting. The <scheme> identifies a supported remote
# storage system type, such as S3. The <remote-location-specifier> is a
# string specific to the remote storage system that specifies the location
# of the indexes inside the remote system.
# This is an S3 example: "path = s3://mybucket/some/path".
 
path = s3://mybucket/some/path

# The following S3 settings are required only if you're using the access and secret
# keys. They are not needed if you are using AWS IAM roles.
 
remote.s3.access_key = <S3 access key>
remote.s3.secret_key = <S3 secret key>

# An example value for the setting below would be https://s3-us-west-2.amazonaws.com
remote.s3.endpoint = https|http://<S3 host>


# This example stanza configures a custom index, "cs_index".
[cs_index]
homePath = $SPLUNK_DB/cs_index/db
# SmartStore-enabled indexes do not use thawedPath or coldPath, but you must still specify them here.
coldPath = $SPLUNK_DB/cs_index/colddb
thawedPath = $SPLUNK_DB/cs_index/thaweddb

# Additional parameters that should be changed for SmartStore
# Splunk bucket sizes are reset to 750MB (auto) for efficient swapping
maxDataSize = auto

# hot to warm transition and data upload frequency
maxHotBuckets = 3
maxHotIdleSecs = 0
maxHotSpanSecs = 777600

# Per index cache preferences
hotlist_recency_secs = 86400
hotlist_bloom_filter_recency_hours = 360

Once the configuration is complete, Splunk indexers will be ready to use Amazon S3 to store warm and cold data.

The key difference with SmartStore is that the remote Amazon S3 bucket becomes the location for master copies of warm buckets, while the indexer’s local storage is used to cache copies of warm buckets that are currently participating in a search or have a high likelihood of participating in a future search.

Summary

In this post, we enabled our Splunk indexers to store data on Amazon S3, while being able to return search results in a performant way.

Amazon S3 offers highly resilient, highly secure, and highly available data storage in a cost effective way. Organizations can use S3 to store data for Splunk that has traditionally resided on persistent EBS volumes.

Additionally, Splunk Enterprise now supports the use of SmartStore for almost all of the major use cases solved with Splunk Enterprise, including enterprise security.

Many customers are using SmartStore to reduce the size of their Amazon EBS volumes and move data to S3. This switch brings down storage costs, as S3 is less expensive than EBS volumes.
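
As a rough illustration of the storage saving, the sketch below compares list prices. The figures are assumed us-east-1 rates for gp2 EBS and S3 Standard, not numbers taken from this post; check current pricing for your Region, and note that SmartStore also changes how much data needs to live on EBS at all.

# Assumed us-east-1 list prices in USD per GB-month; verify current pricing for your Region.
EBS_GP2_PER_GB = 0.10
S3_STANDARD_PER_GB = 0.023

warm_cold_data_gb = 10_000  # example: 10 TB of warm/cold buckets moved to S3

ebs_monthly = warm_cold_data_gb * EBS_GP2_PER_GB
s3_monthly = warm_cold_data_gb * S3_STANDARD_PER_GB
print(f"EBS gp2: ${ebs_monthly:,.0f}/month vs. S3 Standard: ${s3_monthly:,.0f}/month")
print(f"Approximate storage saving: {100 * (1 - s3_monthly / ebs_monthly):.0f}%")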

For more information on SmartStore please see the documentation.

Splunk – APN Partner Spotlight

Splunk is an AWS Competency Partner. Its software and cloud services enable customers to search, monitor, analyze, and visualize machine-generated big data from websites, applications, servers, networks, IoT, and mobile devices.

Contact Splunk | Solution Overview | AWS Marketplace

*Already worked with Splunk? Rate this Partner

*To review an APN Partner, you must be an AWS customer that has worked with them directly on a project.

from AWS Partner Network (APN) Blog

Serverless Containers are the Future of Container Infrastructure

By Tomer Hadassi, Solutions Architecture Team Leader at Spotinst
By Shashi Raina, Partner Solutions Architect at AWS

In an effort to increase system stability, developer productivity, and cost effectiveness, IT and DevOps teams have undergone two major shifts in recent years.

The first was the move from on-premises infrastructure to the cloud. This shift allows companies to remove a significant amount of overhead in managing their technologies while scaling capacity up or down as needed, and adding the flexibility to use a plethora of virtual machine (VM) types, sizes, and pricing models.

The second and more recent shift is the major pivot to containers and serverless solutions, with Kubernetes taking charge and becoming the industry standard for container orchestration.

The combination of these two shifts presents organizations with a unique question: how do you maximize an application’s uptime while maintaining a cost-effective infrastructure at both layers?

Keeping availability high by over-provisioning is easy, but it’s also very expensive. As a result, several challenges have arisen on the path to building an optimized, cost-effective, and highly available containerized infrastructure on Amazon Web Services (AWS):

  • Pricing
  • Instance sizing
  • Containers utilization

In this post, we will explore the Spotinst platform and review how it solves these challenges with Serverless Containers.

Spotinst is an AWS Partner Network (APN) Advanced Technology Partner with the AWS Container Competency. Spotinst helps companies automate cloud infrastructure management and save on their AWS computing costs by leveraging Amazon EC2 Spot Instances with ease and confidence.

Challenges

To make the most cost-effective use of the cloud, users strive to achieve a flexible infrastructure that will scale up or down based on resource requirements such as CPU or memory utilization.

While natively scaling the application layer, Kubernetes does little to manage the underlying infrastructure layer.

Like most container orchestrators, it was designed with a physical data center in mind and assumes that capacity is always available as applications scale up and down.

Let’s take a closer look at the challenges businesses face when building optimized, cost-effective, and highly available infrastructure on AWS.

Pricing

With three available pricing models on AWS—On-Demand, Reserved, and Spot—we need to decide which model works best for every part of the workload.

Kubernetes does a great job with resilience and handling the interruption or replacement of servers. This makes it a great candidate to leverage Spot instances at a 70 percent discount compared to On-Demand prices.

Then again, it’s important to maintain some persistent instances for certain workloads as well as cost predictability. That’s where buying a pool of Amazon EC2 Reserved Instances (RIs) ensures a baseline capacity that is cost effective.

Instance Sizing

The second and most complex challenge is choosing the right infrastructure size and type to satisfy the actual container requirements.

Should we stick to one instance type? Should we mix families or instance sizes? What should we do when a GPU-based pod comes up to do a quick job a few times a day?

Traditionally, the answer to all of these questions has been, “It varies.” As time goes on, clusters grow and more developers freely push new and differing containers into Kubernetes, resulting in constantly changing answers to the questions above.

Moreover, we need to figure out how to make sure it all keeps scaling and addressing changes in application requirements. To illustrate this point, consider that some containers require 0.2vCPU and 200MB of memory, and others require 8vCPU and 16GB of memory or more.

As a result, setting the right auto scaling policy based on simple CPU or memory thresholds rarely stays reliable in the long term.

Containers Utilization

The third and least tackled challenge is the containers utilization. This involves constantly adapting the limits and resource requests of the containers based on their consumption and utilization.

Even after solving for the first two issues and successfully using the right instance sizes for our containers, how do we know the container wasn’t over-allocated resources?

Getting a pod of 3.8vCPU and 7.5GB of memory on a c5.xlarge instance is great as it puts the instance at nearly 100 percent allocation, but what if that pod only uses half of that? That would mean half of our instance resources are wasted and could be used by other containers.

Successfully solving for all of these issues can cut costs and free developer time by having infrastructure and containers that self-optimize in real-time based on application demands.

Spotinst Automation Platform

Spotinst built a DevOps automation platform that helps customers save time and money.

The core of the Spotinst platform is a sophisticated infrastructure scheduling mechanism that is data-driven. It allows you to run production-grade workloads on Spot instances, while utilizing a variety of instance types and sizes. Spotinst offers an enterprise-grade SLA on Spot Instances while guaranteeing the full utilization of RIs.

On top of this automation platform, Spotinst built Ocean, a Kubernetes pod-driven auto scaling and monitoring service. Ocean automatically and in real-time solves the challenges we discussed earlier, while creating a serverless experience inside your AWS environment, and it does so without any access to your data.

Additionally, Ocean helps you right-size the pods in your cluster for optimal resource utilization.

Spotinst solves the pricing and instance sizing challenges by using dynamic container-driven auto scaling based on flexible instance size and life cycles.

There are three main components at play here: dynamic infrastructure scaling, pod rescheduling simulations, and cluster headroom. This is how they work.

Dynamic Infrastructure Scaling

Spotinst Ocean allows customers to run On-Demand, Spot, and Reserved Instances of all types and sizes within a single cluster. This flexibility means the right instance can be provisioned when it’s needed, while guaranteeing minimum resource waste.

Whenever the Kubernetes Horizontal Pod Autoscaler (HPA) scales up, or a new deployment happens, the Ocean auto scaler reads the overall requirements of that event. These include:

  • CPU reservation
  • Memory reservation
  • GPU request
  • ENI limitations
  • Pod labels, taints, and tolerations
  • Custom labels (instance selectors, on-demand request)
  • Persistent volume claims

After aggregating the sum of these requests, Ocean calculates the most cost-effective infrastructure to answer the request. This can be anywhere from a single t2.small to a combination of several M5.2xlarge, C5.18xlarge, P3.xlarge, R4.large, or anything in between.

The use of dynamic instance families and sizes helps achieve a highly defragmented cluster with high resource allocation as it grows and scales up.

Pod Rescheduling Simulations

This dynamic infrastructure scaling ensures maximum resource allocation as we scale up and grow our clusters.

To make sure the allocation is maintained when Kubernetes scales the pods down as application demands decrease, Ocean runs pod rescheduling simulations every 60 seconds to figure out if the cluster can be further defragmented.

By considering all of the requirements, Ocean checks for any expendable nodes: nodes whose pods can be drained and rescheduled onto other existing nodes in the cluster, allowing the node to be removed without any pod going into a pending state.

Figure 1 – Pod rescheduling simulation.

In Figure 1, you can see the pod rescheduling simulation when scaling down expendable nodes, considering pod and instance resources as well as other restrictions. In this example, we were able to reduce the cluster size by about 10 percent pre-scale versus post-scale CPU and memory.
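
Spotinst has not published Ocean’s actual simulation logic; purely to make the idea concrete, here is a simplified Python sketch of the same kind of check, where a node is expendable if all of its pods fit into the spare capacity of the remaining nodes. The node names and resource numbers are invented for illustration.

# Simplified illustration of a pod rescheduling check (not Spotinst's actual algorithm).
# Capacities and pod requests are (vCPU, memory in GB).
nodes = {
    "node-a": {"capacity": (4, 16), "pods": [(1.0, 2), (0.5, 1)]},
    "node-b": {"capacity": (4, 16), "pods": [(3.0, 12)]},
    "node-c": {"capacity": (2, 8),  "pods": [(0.5, 1)]},
}

def spare(node):
    cpu, mem = node["capacity"]
    return cpu - sum(p[0] for p in node["pods"]), mem - sum(p[1] for p in node["pods"])

def is_expendable(candidate):
    # Can every pod on `candidate` be placed onto the spare capacity of the other nodes?
    others = {name: list(spare(node)) for name, node in nodes.items() if name != candidate}
    for pod_cpu, pod_mem in sorted(nodes[candidate]["pods"], reverse=True):
        target = next((n for n, (c, m) in others.items() if c >= pod_cpu and m >= pod_mem), None)
        if target is None:
            return False
        others[target][0] -= pod_cpu
        others[target][1] -= pod_mem
    return True

for name in nodes:
    print(name, "expendable" if is_expendable(name) else "needed")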

Cluster Headroom

Scaling the infrastructure when it’s needed can achieve maximal node resource allocation, but may also slow the responsiveness of our application.

While high availability is critical, waiting for new nodes to spin up every time capacity is needed can lead to less-than-optimal latency and other performance issues. This is where cluster headroom comes in. Headroom is spare capacity, kept available in units of work sized to the cluster’s most common deployments, that allows Kubernetes to instantly schedule new pods.

Headroom is automatically configured by Ocean based on the size of each deployment in the cluster, and changes dynamically over time.

In practice, effective headroom means that more than 90 percent of scaling events in Kubernetes get scheduled instantly into the headroom capacity, while a headroom scaling event is triggered in the background to prepare for the next time Kubernetes scales.

If an abnormal deployment requires more than the available headroom, the cluster auto scaler handles it with a single scaling event of varying instance types and sizes that covers the entire deployment’s resource requirements. This helps Ocean achieve a 90 percent resource allocation in the cluster while staying more responsive than a traditional step scaling policy.

Monitoring CPU and Memory Utilization

Once the pods are placed on the instances, the Ocean Pod Right-Sizing mechanism will start monitoring the pods for CPU and memory utilization. Once enough data is aggregated, Ocean starts pushing recommendations on how to right-size the pod.

For example, if a pod was allocated 4GB of memory and over a few days its usage is between 2GB and 3.4GB, with an average of 2.9GB, Ocean Right-Sizing will recommend sizing the pod at ~3.4GB, which further helps bin-pack and defragment the cluster while keeping it highly utilized.
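
Spotinst does not publish the exact right-sizing heuristics, but the idea behind the example above can be sketched in a few lines of Python: observe actual usage over time and recommend a request close to the observed peak, reclaiming the headroom that is never used. The usage samples below are invented for illustration.

# Simplified right-sizing illustration (not Spotinst's actual algorithm).
# Hourly memory usage samples for one pod over a few days, in GB.
samples = [2.0, 2.4, 2.9, 3.1, 2.7, 3.4, 2.8, 3.0, 2.6, 3.2]

allocated_gb = 4.0
observed_peak = max(samples)
average = sum(samples) / len(samples)

# Recommend roughly the observed peak, so the never-used slice of the 4GB request is reclaimed.
recommended_gb = round(observed_peak, 1)

print(f"allocated {allocated_gb}GB, average {average:.1f}GB, peak {observed_peak}GB")
print(f"recommendation: resize the memory request to ~{recommended_gb}GB")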

Figure 2 – Deployment with Amazon EKS best practices.

In Figure 2, you can see the deployment architecture of Ocean + Amazon Elastic Kubernetes Service (Amazon EKS), where EKS manages the Kubernetes control plane and Ocean manages the worker nodes. The Amazon EKS architecture and security stay in place, and Ocean wraps around the worker nodes to dynamically orchestrate them.

Figure 3 – Deployment with any container orchestrator on AWS.

In Figure 3, you see the general Ocean architecture with the worker nodes connecting to one of the supported container orchestrators. On the left side, you can see the variety of container management control planes and on the right, Ocean wrapping around the worker nodes.

Implementation Walkthrough

Deploying Spotinst Ocean for your existing Kubernetes implementation is simple and can be done in just a few minutes using your preferred deployment method, such as AWS CloudFormation, Terraform, Ansible, the RESTful API, or the Spotinst UI.

Ocean orchestrates within your AWS environment and leverages existing cluster components, including virtual private clouds (VPC), subnets, AWS Identity and Access Management (IAM) roles, images, security groups, instance profiles, key pairs, user data, and tags.

In a nutshell, Ocean will fetch all of the existing configurations from your cluster to build the configuration for the worker nodes that it will spin up. Then, all you have to do is deploy the Spotinst Kubernetes Cluster Controller into your cluster so that Ocean can get the Kubernetes metrics reported to it.

Once this is done, Ocean will handle all infrastructure provisioning for you going forward. You should be able to give your developers complete freedom to deploy whatever they choose.

Let’s walk through a set up so you can see how simple it is.

  • First, navigate on the left to Ocean > Cloud Clusters, and then choose Create Cluster.

  • On the next screen, choose to either join an existing cluster or create a new one.

  • In the General tab, choose the cluster name, region and Auto Scaling Group to import the worker nodes configurations from.

  • On the Configuration page, verify that all of your configurations were imported and click Next.

  • To install the controller, generate a token in Step 1 of the wizard and run the script provided in Step 2. Wait two minutes and test connectivity; a green arrow will appear.

  • On the summary page, you can see the JSON configuration of the cluster, and get a Terraform template populated with all of the cluster’s configurations.
  • Once reviewed, create the cluster and Ocean will handle and optimize your infrastructure going forward.

Optimization Data

Now that we know what Ocean is and how to deploy it, let’s look at some of the numbers and benefits of real usage scenarios, and how using dynamic infrastructure can help.

Example #1: Pod requires 6,500mb of memory and 1.5vCPUs – Different Instance Size

Running five such pods on m5.large or m5.xlarge instances will cost $0.096 per hour per pod. However, going up to an m5.2xlarge allows further bin packing and reduces the cost to $0.077 per hour per pod.

As this comparison shows, by going a few sizes up, we can achieve better resource allocation and reduce the cost by 20 percent.

Example #2: Pod of 6,500MB of memory and 1.8vCPUs – Different Instance Family

Running replicas of such pods on c5.xlarge will only allow for one pod per instance, while an m5.xlarge will allow for two and reduce the cost from $0.17 per hour per pod to $0.096 per hour per pod.

Here, simply changing the instance family achieves better resource allocation and reduces the cost by 43 percent.
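
The arithmetic behind both examples is simple bin packing, and you can reproduce it with the short sketch below. The instance specs are published values; the hourly prices are assumed us-east-1 On-Demand rates (roughly $0.096, $0.192, $0.384, and $0.17 for m5.large, m5.xlarge, m5.2xlarge, and c5.xlarge), so your Region and pricing model will shift the exact figures.

# Per-pod cost comparison via simple bin packing (assumed us-east-1 On-Demand prices).
instances = {
    # name: (vCPU, memory in MB, USD per hour)
    "m5.large":   (2,  8 * 1024,  0.096),
    "m5.xlarge":  (4,  16 * 1024, 0.192),
    "m5.2xlarge": (8,  32 * 1024, 0.384),
    "c5.xlarge":  (4,  8 * 1024,  0.170),
}

def cost_per_pod(instance, pod_vcpu, pod_mem_mb):
    vcpu, mem, price = instances[instance]
    pods_per_instance = min(int(vcpu // pod_vcpu), int(mem // pod_mem_mb))
    return price / pods_per_instance

# Example 1: 6,500MB / 1.5 vCPU pods -- a larger instance size packs better.
for name in ("m5.large", "m5.xlarge", "m5.2xlarge"):
    print(name, f"${cost_per_pod(name, 1.5, 6500):.3f} per pod-hour")

# Example 2: 6,500MB / 1.8 vCPU pods -- a memory-heavier family packs better than c5.
for name in ("c5.xlarge", "m5.xlarge"):
    print(name, f"${cost_per_pod(name, 1.8, 6500):.3f} per pod-hour")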

Conclusion

To summarize, when running a Kubernetes cluster on AWS there are three main challenges in cost optimization and availability that Spotinst Ocean helps automate and solve.

  1. Pricing model: Spotinst Ocean automatically gets you to 100 percent Reserved Instance coverage to preserve your investment, and leverages cost-effective Spot Instances beyond that.
  2. Instance sizing: Using its container-driven auto scaler, Ocean spins up the right instance at the right time based on pod requirements, so you no longer have to deal with it.
  3. Containers utilization: Ocean monitors your containers’ utilization and recommends how to right-size them to avoid idle resources.

With all of this overhead out of the way, managing Kubernetes clusters on AWS is easier than ever. If you’re new to Kubernetes, check out the Amazon EKS + Spotinst Ocean Quick Start.

If you’re already running a Kubernetes cluster on AWS, sign up for Spotinst for free.

Spotinst – APN Partner Spotlight

Spotinst is an APN Advanced Technology Partner. They help companies automate cloud infrastructure management and save on their AWS computing costs by leveraging Amazon EC2 Spot Instances with ease and confidence.

Contact Spotinst | Solution Overview | AWS Marketplace

*Already worked with Spotinst? Rate this Partner

*To review an APN Partner, you must be an AWS customer that has worked with them directly on a project.

from AWS Partner Network (APN) Blog

Creating a Migration Factory Using TransitionManager and CloudEndure Migration

By Craig Macfarlane, CTO at TDS
By Jon Keeter, Sr. Solutions Architect at AWS

As the adoption of Amazon Web Services (AWS) continues to grow, the need for highly reliable workload migrations at an accelerated pace is paramount. This is particularly true for enterprises turning to AWS to host mission-critical and legacy applications.

Organizations need a solution that expedites the migration process, without introducing further risk.

In this post, we will introduce an integration of TDS TransitionManager with CloudEndure Migration, an AWS solution that accelerates cloud migration projects to AWS while reducing risk through automated processes.

By automating and streamlining the sequential tasks of migration, we can create a factory-like production line, or “migration factory,” to achieve these goals.

Transitional Data Services (TDS) is an AWS Partner Network (APN) Advanced Technology Partner that provides the TransitionManager software suite, a purpose-built software-as-a-service (SaaS) solution built on top of the AWS platform.

TransitionManager is designed to improve the entire end-to-end process of data center and cloud migration projects.

Background

As companies accelerate cloud adoption, key objectives must be accounted for during the migration:

  • Accelerate the AWS migration process.
  • Automate and simplify the CloudEndure workload movement of Linux and Windows systems.
  • Reduce the risk of errors associated with manual tasks.
  • Maintain compliance with appropriate regulations and policies through proper change management, testing, and approvals.

Cloud migration factories can also benefit from a new solution which:

  • Assures successful outcomes (on time and on budget).
  • Reduces time to value and avoids the duplicative cost of running both on-premises and cloud resources during the actual migration phase.
  • Offers scalability of human resources to increase migration speed of re-host targets.

Migrations require a complex coordination of activities, performed in the proper sequence to assure overall project success. If one step is missed or performed out of sequence, it can result in a failed migration.

By integrating the cloud transport capabilities of CloudEndure Migration with the operational oversight of TransitionManager, users can move workloads more quickly, with less risk of rollbacks.

Example of an Orchestrated Migration

Let’s go through the steps for creating an automated migration pipeline, or migration factory, using TransitionManager and CloudEndure Migration.

Step 1: Understanding Your Environment

During the assessment phase and planning/design phase, understanding the environment is the first step in creating a migration plan.

A core function of TransitionManager is the ability to import and run extract, transform, load (ETL) operations on data from multiple sources. Using TSO Logic, RVTools, Excel spreadsheets, CMDB databases, or other discovery tools, these data sources can be imported into the TransitionManager database.

This allows you to:

  • Build an actionable view by aggregating key data points from the multiple sources in your environment.
  • Pre-determine portable workloads from the aggregated data.
  • Visually analyze the inter-dependencies, or blast radius, of applications and workloads before making changes that could lead to accidental service interruptions.
  • Migrate to AWS with speed, taking advantage of TransitionManager’s direct integration with CloudEndure Migration.
  • Do all of this while reducing risk.

Step 2: Building Your Actionable Data View

First, we need an actionable data set from which to make migration decisions.

Let’s ingest key data elements from RVTools to describe our virtual infrastructure, from a cost analytics platform like TSO Logic, and from our CMDB to build our application-to-infrastructure relationships.

It’s worth noting there is no development time required to create or alter these ETL scripts to your needs.

  • First, upload the files to the TransitionManager instance.

  • Select your ETL script that will be used to map, transform, and load the data into the platform.

  • Press the Import button and observe the results table.

After you’ve completed the data aggregation and ingestion from all of your data sources, your end result is an actionable, visualized view highlighting key data elements required to make quick migration choices.

This interactive map within TransitionManager shows the high-level diagram of all servers and inter-dependencies from the import of the discovery data.

Figure 1 – Dependency Analyzer Map.

Step 3: Filtering and Selecting Workloads Matching Your Rehost Migration Pattern

The dependency map in Figure 1 above is great for a high-level overview, but our initial goal is to segment the environment and find those highly portable workloads we can rehost in the cloud and show progress quickly.

Key properties to filter on can include both technical and business metrics, such as:

  • Source virtual machine’s (VM) operating system compatibility with AWS.
  • Source VM’s hypervisor compatibility with rehosting tool (CloudEndure Migration is compatible with any source hypervisor).
  • Business unit flexibility and willingness to use and migrate to cloud services.
  • Inter-dependence of the source workload with other systems.
  • Non mission-critical workloads (Dev/Test/QA environments, for example).
  • Compliance or regulatory workloads (PII, PCI, etc.).

To get started, open the saved view called Highly Portable Rehost Candidates and view the results.

In this example, there are more than 300 rehost candidates identified based on just a few filterable parameters that we defined.

We took that massive environment map of our IT assets and created a segment from which we can focus and start to make progress towards the migration phase.

Figure 2 – Highly portable rehost candidates.

Looking at one of the servers by clicking on the workload name in the filter, we can see what data points were aggregated earlier to further evaluate if this is a good rehost candidate.

By right-clicking on any asset in the map, you can quickly see key data points that were aggregated to further evaluate whether you should rehost this workload to the cloud.

Figure 3 – Server detail when selecting “Show Asset” in the Dependency Analyzer Map.

After a quick discussion with the migration team and application owner, we will tag this workload with the “Rehost:CloudEndure” tag.

Because we have aggregated the right data in one place, we can streamline the workload selection and placement process without having to bounce around to multiple tools and attempt to re-organize massive data sets to make decisions.

Step 4: Migrating Workloads to AWS

As a final step, we will migrate this selected workload to the cloud by integrating TransitionManager with CloudEndure.

By tagging “TDS-Web024” with the Rehost:CloudEndure tag, we placed the workload in scope of the “Rehost with CloudEndure” workflow automation template.

Now, let’s browse to the template and review the workflow.

The red highlights in the Task Graph below indicate tasks that make API calls to a CloudEndure endpoint.

Figure 5 – Task Graph in TransitionManager.

The first task (the bubble filled in green) is for a human to execute. The owner of the workload is notified to approve the workload for migration to AWS.

After the owner signs off, TransitionManager automatically installs the CloudEndure agent on our “TDS-Web024” in our source environment via API call.

The actual code snippet is:

### Installing the agent

$ScriptBlock =
{
    param([String]$InstallationToken)

    # Download the CloudEndure agent installer and run it with the installation token
    Invoke-WebRequest -Uri "https://console.cloudendure.com/installer_win.exe" -OutFile ".\installer_win.exe"
    $Installation = &(".\installer_win.exe") -t $InstallationToken --no-prompt
    $Installation | Out-String
}

# Run the installation remotely on the source machine
$ReturnText = Invoke-Command -ComputerName $params.ComputerIP -Credential $Credential -ScriptBlock $ScriptBlock -ArgumentList $params.InstallationToken

Next, an automated task runs to check the status of the CloudEndure storage replication process from the source environment to your target Amazon Virtual Private Cloud (VPC).

In the CloudEndure console, the workload is showing fully replicated to AWS, and it’s now ready to be launched or test launched.

Figure 6 – CloudEndure console.

We complete the workflow by reaching out to CloudEndure via API to cutover the workload so that it’s running live in our VPC target, with an API call such as:

## Prod Machine Cutover

New-CESession -Credential $Credential
$Response = Invoke-CELaunchTargetMachine -LaunchType "CUTOVER" -Ids @($params.MachineId) -Force

An engineer or app owner then tests the application running in the new AWS environment. Finally, we shut down the source workload and complete the migration activity.

Once you are comfortable with the automated process and approval steps, these tasks can be scheduled, repeated ad infinitum, and run in parallel to create a high-volume, high-success-rate migration project with minimal manual interaction.

Each application can be assigned its own owner, approvers, dependent tasks, target environments within the AWS Cloud, and its own schedule for change management and approved move windows.

Summary

In this post, we walked you through a new approach to large-scale migrations to AWS that drives efficiency and significantly accelerates the process while reducing risk and uncertainty.

By using a central planning and orchestration platform, you can quickly aggregate your team knowledge and work more efficiently together to develop approaches consistent with your business needs and IT capabilities.

This integrated solution of TransitionManager and CloudEndure offers:

  • Accelerated cloud migrations.
  • Improved overall planning and execution efficiencies.
  • Rapid achievement of project milestones.

To learn more about TransitionManager or CloudEndure Migration, check out AWS Marketplace or visit tdsi.com.

Transitional Data Services – APN Partner Spotlight

TDS is an APN Advanced Technology Partner that accelerates large-scale data center and cloud migrations and modernizations. TransitionManager is a purpose-built software suite to improve the migration process while overcoming limitations of conventional tools and methods.

Contact TDS | Solution Overview | AWS Marketplace

*Already worked with TDS? Rate this Partner

*To review an APN Partner, you must be an AWS customer that has worked with them directly on a project.

from AWS Partner Network (APN) Blog

How to Use Amazon Rekognition on Cloudinary to Auto-Tag Faces with Names

By Amit Khanal, Solutions Architect at Cloudinary

Over the last decade, the number of photos our society has taken and stored is absolutely mind-blowing. Estimates suggest that more than 1 trillion photos were taken in 2018.

The shift to digital photography, combined with the availability of smartphone cameras, has made it easy for photographers of all skill levels to take and upload an enormous amount of content every year.

These massive image collections can be difficult to manage and organize, let alone for a brand that’s trying to manage and deliver tens of thousands or even millions of images on a monthly basis.

Consider a large business that wants to automatically connect each employee’s headshot to their profile. An ideal solution offers businesses the ability to identify people’s faces in pictures and automatically tag those photos with names.

The solution would also give users the ability to search for people by name, and further enhancements could include the creation of albums and face recognition for security purposes.

In this post, I will describe how to seamlessly integrate Amazon Rekognition with the Cloudinary platform, and build an application that automatically tags people in images with names.

Our solution learns people’s faces from photos uploaded to a “training” folder in Cloudinary. In many cases, a single photo of someone is enough for Amazon Rekognition to learn and then, later on, identify and tag that person. This works for most photographed scenes, even pictures with many other people in them.

The complete code of the Cloudinary face recognition app is on GitHub. Your feedback, suggestions for enhancements, or pull requests are always welcome. It’s important to note that indexing and labeling faces is a private process for your organization. Each image’s index data is stored in your Amazon Web Services (AWS) account only.

Get to Know Cloudinary

Before delving into the details of our solution, let me briefly introduce Cloudinary, an AWS Partner Network (APN) Advanced Technology Partner with the AWS Digital Customer Experience Competency.

Cloudinary is a cloud-based, media full-stack platform for the world’s top brands. Developers and marketers use Cloudinary for managing rich media, including images and videos, and to deliver an optimal end-user experience.

Among Cloudinary’s many automated capabilities are on-the-fly manipulations, optimizations, and responsive delivery of visual media across devices.

Understanding the Solution

The solution described in this post leverages the following services for auto-tagging images:

  • Cloudinary: For uploading, tagging, and managing images.
  • Amazon Rekognition: For indexing facial images and searching them for facial matches.
  • AWS Lambda: For calling Amazon Rekognition APIs for indexing and searching.
  • Amazon API Gateway: For exposing the Lambda function through an API, which Cloudinary then registers as a web hook.

Two workflows are involved in this solution:

Creation of a training collection: This flow takes images uploaded to a Cloudinary “training” folder and invokes Amazon Rekognition, which indexes the faces and stores them in a private collection in your AWS account only.

Figure 1 – Indexing flow.

Search of images in the trained collection: This flow takes new images uploaded to Cloudinary, invokes Amazon Rekognition, and finds faces in those images that match the indexed faces from the trained collection.

Figure 2 – Search flow.
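
Under the hood, these two flows map onto a handful of Amazon Rekognition API calls. The app on GitHub is written in Node.js; purely as an illustration of the service calls involved, here is a minimal Python (boto3) sketch. The collection name matches the `rekognitionCollection` value used later in this post, while the image file names are placeholders:

```
import boto3

rekognition = boto3.client("rekognition")
COLLECTION = "cld-rekog"

# One-time setup: the private collection that will hold the indexed faces.
rekognition.create_collection(CollectionId=COLLECTION)

# Indexing flow: learn a face from a single-face training image.
with open("marissa.jpg", "rb") as f:
    rekognition.index_faces(
        CollectionId=COLLECTION,
        Image={"Bytes": f.read()},
        ExternalImageId="Marissa_Masangcay",  # the name later applied as a tag
    )

# Search flow: match the largest face in a newly uploaded image against the collection.
with open("team_photo.jpg", "rb") as f:
    matches = rekognition.search_faces_by_image(
        CollectionId=COLLECTION,
        Image={"Bytes": f.read()},
        FaceMatchThreshold=80,  # mirrors the confidenceThreshold setting
    )

for match in matches["FaceMatches"]:
    print(match["Face"]["ExternalImageId"], match["Similarity"])
```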

Building the Solution

As a preliminary step, register for an AWS account and free Cloudinary account. Now, you can set up the AWS environment.

Step 1: Configuring AWS Lambda

You must deploy the app as an AWS Lambda function following these steps:

  • Clone the project from GitHub and deploy the project on your AWS environment as a Lambda function.
  • Ensure that Execution Role on the Lambda function has the ‘AmazonRekognitionFullAccess’ policy attached.

The Lambda function requires the following environment variables:

  • CLOUDINARY_URL: The URL that’s required for making API calls to Cloudinary. To look up that URL, log into the Cloudinary console, where the value is displayed.
  • trainingFolder: The name of the Cloudinary folder (e.g. training) where you will upload the single-faced images from which Amazon Rekognition will learn to associate the name you provide with the corresponding face.
  • faceRecognitionFolder: The name of the Cloudinary folder (e.g. assets) where you will upload all of the images you want this solution to tag with recognized names.
  • rekognitionCollection: The name of the collection in Amazon Rekognition (e.g. cld-rekog) which contains the indexed faces to be used for face searches.
  • confidenceThreshold: The minimum confidence-score of face matches (e.g. 80). The app considers a match successful if the score returned by Amazon Rekognition is at or higher than this level.
  • faceLabelTagPrefix: The prefix that precedes the names of the tagged images in the “training” folder. The tagging syntax is `faceLabelTagPrefix:<Name>`.
  • transformationParams: The parameters that specify the transformations to apply when requesting images from Cloudinary for indexing or searching. Because original-sized images are not required, I recommend you apply, at a minimum, `q_auto` to reduce the image size and save bandwidth.

Figure 3 – AWS Lambda environment variables.

Step 2: Configuring Amazon API Gateway

Cloudinary integrates with Amazon Rekognition through Amazon API Gateway. To get started, go to the Amazon API Gateway console and import the Swagger file from GitHub, and then set up your API by following the documentation.

Next, associate the Lambda function created in Step 1 with your API, as shown below.

Figure 4 – Amazon API Gateway setup.

Step 3: Setting Up the Cloudinary Environment

To set up your Cloudinary environment, log into your Cloudinary account and go to your Media Library. In the root folder, create two folders called `training` and `assets`.

Next, go to Upload Settings and enter in the Notification URL field the Amazon API Gateway endpoint you configured in Step 2. Cloudinary sends upload and tagging notifications to this endpoint, which is a requirement for this app.

Figure 5 – Cloudinary settings.

Creating a Trained Collection

Now that all of the components are in place, you can start using the app. First, set up a trained collection by indexing your facial images with Amazon Rekognition. All you have to do is upload them to the “training” folder.

To create a trained collection, upload single-face images only to the “training” folder. Multiple-face images do not work in this app.

You can upload images to Cloudinary in several ways. For the purpose of this post, do that with the upload widget in the Cloudinary console, as follows:

  • Go to your Cloudinary Media Library.
  • Navigate to the “training” folder.
  • Click Upload on the top right-hand corner.
  • Click the Advanced link at the bottom of the upload widget that’s displayed.
  • Enter a tag according to the syntax `faceLabel:<Name>`.
  • Click to select an image to upload from any of the sources available on the widget.

Figure 6 – Uploading an image with custom tag.

You’ve now uploaded the selected image, tagged by you as `faceLabel:<Name>` to Cloudinary. Now, ensure the image resides in the “training” folder.

Repeat the above procedure to train all of the images you’re targeting for facial recognition.

Alternatively, you can upload training images in bulk through Cloudinary’s Software Developer Kit (SDK). This doesn’t require an AWS Lambda function and can be done from any NodeJS environment. Just ensure the node modules are installed and the environment variables as stated above are defined.

If your trainable images that are tagged with `faceLabel:<name>` are already in a “training” folder, call the `indexFaces` function on index.js. That function accepts the “training” folder name, retrieves all the images from the folder, and indexes the ones with the `faceLabel` tag.

As in this `faceLabel` tag code:

```
const cld_rekog = require('./index')

cld_rekog.indexFaces('training/');
```

If you have a list of the URLs and tags for your images, call the `uploadAndIndex` function on index.js. That function uploads the images one by one to Cloudinary, and tags and indexes them during the process.

See this code:

```
const cld_rekog = require('./index')

// Assume we have three entries to upload and index as below
const imageData = [
    {
        url: 'https://cloudinary-res.cloudinary.com/image/upload/q_auto/profile_marissa_masangcay.jpg',
        tag: 'faceLabel:Marissa Masangcay'
    },
    {
        url: 'https://cloudinary-res.cloudinary.com/image/upload/q_auto/profile_shirly_manor.jpg',
        tag: 'faceLabel:Shirly Manor'
    },
    {
        url: 'https://cloudinary-res.cloudinary.com/image/upload/q_auto/profile_tal_admon.jpg',
        tag: 'faceLabel:Tal Admon'
    }
]

imageData.forEach(data => {
    cld_rekog.uploadAndIndex(data.url, data.tag)
})
```

Amazon Rekognition Results

Amazon Rekognition yields fairly good results with one trained image per person. By indexing different images of the same person, however, you can grow your collection and make it more robust. This enables you to search for different images of people at a certain angle, in a certain pose, with a certain expression, and so forth.

Additionally, Amazon Rekognition returns many details that pertain to indexed faces, such as facial coordinates and poses that you could use in apps. To learn more, see the documentation.

Subsequent to an image upload, the following steps take place:

  • Cloudinary invokes the Amazon API Gateway endpoint defined in the Notification URL field in the Upload Settings screen of your Cloudinary Media Library.
  • Amazon API Gateway invokes the Lambda function with image-upload data from Cloudinary.
  • The Lambda function checks the upload response and, if it verifies the image has been uploaded to the "training" folder with a `faceLabel` tag, indexes the image via Amazon Rekognition (a simplified sketch of this step follows below).
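
Below is a minimal sketch of that handler flow (this is not the post’s actual Lambda code). The notification payload fields (`secure_url`, `tags`, `public_id`) are assumed from Cloudinary’s upload response, and `COLLECTION_ID` is a hypothetical environment variable:

```
// Simplified Lambda handler: index a training image with Amazon Rekognition
// when Cloudinary notifies the API Gateway endpoint of a new upload.
const AWS = require('aws-sdk');
const https = require('https');

const rekognition = new AWS.Rekognition();

// Download the uploaded image so it can be passed to Rekognition as raw bytes.
const fetchBytes = (url) =>
  new Promise((resolve, reject) => {
    https.get(url, (res) => {
      const chunks = [];
      res.on('data', (chunk) => chunks.push(chunk));
      res.on('end', () => resolve(Buffer.concat(chunks)));
    }).on('error', reject);
  });

exports.handler = async (event) => {
  const upload = JSON.parse(event.body); // Cloudinary upload notification payload
  const tags = upload.tags || [];
  const faceLabel = tags.find((tag) => tag.startsWith('faceLabel:'));

  // Only index images uploaded to the "training" folder with a faceLabel tag.
  if (!faceLabel || !upload.public_id.startsWith('training/')) {
    return { statusCode: 200, body: 'Nothing to index' };
  }

  const bytes = await fetchBytes(upload.secure_url);
  const result = await rekognition.indexFaces({
    CollectionId: process.env.COLLECTION_ID, // hypothetical environment variable
    ExternalImageId: faceLabel.replace('faceLabel:', '').replace(/\s/g, '_'),
    Image: { Bytes: bytes },
    MaxFaces: 1
  }).promise();

  return { statusCode: 200, body: JSON.stringify(result.FaceRecords) };
};
```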

Once indexing is complete, the `faceId` is displayed as an image tag, as shown below. Refresh the page to see it.

Cloudinary-Rekognition-7

Figure 7 – Image uploaded and indexed.

Testing the Application

Finally, let’s test the application. Start by uploading images into the `assets` folder. Feel free to upload multi-face images in addition to single-face ones.

If face matches are found, the app shows the related names as tags on the images. The entire process usually takes several seconds for images with a few faces and up to 25-30 seconds for images that contain many faces.

Refresh the page to see the facial tags, as in these examples:

Cloudinary-Rekognition-8

Cloudinary-Rekognition-9

Figure 8 – Tagged images of employees.

Amazon Rekognition can detect up to 100 of the largest faces in an image. If there are more, the service skips detecting some faces. See the details in the Amazon Rekognition Developer Guide.

Summary

In this post, I have demonstrated how to upload images to Cloudinary, create a customized, trained collection of people’s faces with Amazon Rekognition, and have newly uploaded images of those people auto-tagged with their names.

With those auto-populated tags, you can search for any indexed person’s name in the Cloudinary Media Library, which returns all of the images tagged with that name.

By slightly tweaking the code, you can create multiple Amazon Rekognition collections to index different groups of faces per collection, categorized by department, demographics, and so forth.
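
As a rough sketch of that tweak (the collection names below are illustrative, and this is not code from the post), you could create one Amazon Rekognition collection per group with the AWS SDK for JavaScript and index each group’s faces into its own collection:

```
// Create one Rekognition collection per group of faces.
const AWS = require('aws-sdk');
const rekognition = new AWS.Rekognition();

const groups = ['faces-engineering', 'faces-marketing', 'faces-sales'];

(async () => {
  for (const collectionId of groups) {
    // Fails with ResourceAlreadyExistsException if the collection already exists.
    await rekognition.createCollection({ CollectionId: collectionId }).promise();
    console.log(`Created collection ${collectionId}`);
  }
})();
```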

The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.


Cloudinary-APN-Blog-CTA-1


Cloudinary – APN Partner Spotlight

Cloudinary is an AWS Competency Partner and media full-stack platform for the world’s top brands. Developers and marketers use Cloudinary for managing rich media, including images and videos, and to deliver an optimal end-user experience.

Contact Cloudinary | Solution Overview | AWS Marketplace

*Already worked with Cloudinary? Rate this Partner

*To review an APN Partner, you must be an AWS customer that has worked with them directly on a project.

from AWS Partner Network (APN) Blog

Why Your Company Should Sponsor AWS re:Invent 2019—Education, Influencers, Leads

Why Your Company Should Sponsor AWS re:Invent 2019—Education, Influencers, Leads

By Michael Galasso, Head of re:Invent Sponsorship at AWS

Is your company joining the up to 65,000 attendees expected at AWS re:Invent 2019? Enhance your conference experience and drive lead generation through sponsorship, an exclusive opportunity for AWS Partner Network (APN) Partners and select AWS enterprise customers.

With plenty of turnkey options still available, it’s not too late to participate in the leading global customer and partner conference for the cloud computing community.

Learn more about sponsorship and get started today >>

What to Expect as a re:Invent 2019 Sponsor

Whether you’re a startup focused on lead generation, a global systems integrator (GSI) seeking high-touch C-level engagement, or a technology partner looking to educate a targeted audience on a new product feature, there are sponsorship opportunities that align with your goals.

All packages include expo space to talk with customers, conference passes, and multimedia assets to drive brand awareness and grow your AWS business.

Many returning APN Partners view AWS re:Invent as a can’t-miss event, and we have historically delivered sponsor satisfaction rates of 90 percent or higher.

Learn more about re:Invent sponsorship tiers >>

A Word from Gold-Level Sponsor ExtraHop Networks

ExtraHop Networks is an APN Advanced Technology Partner and AWS re:Invent 2018 sponsor. ExtraHop helps customers identify the cause of service disruption, performance impacts, and threats with deep visibility into the events occurring in cloud and hybrid environments.

“ExtraHop has invested in AWS conferences throughout the last couple of years, and we’ve encountered a highly-engaged audience,” says Bryce Hein, SVP of Marketing at ExtraHop Networks.

“Our conversations at the AWS events spark quality conversations with high-profile prospects, customers, and industry influencers. As a market disruptor in the security market, it’s important for us to prioritize the best events for us to connect with our targeted audience. The shows AWS produces are a sweet spot for us.”

Building on re:Invent 2018 Sponsorship Success

Last year, 52,000 attendees participated in AWS re:Invent 2018, further elevating its place as the leading global customer and partner conference for the cloud computing community.

Hundreds of APN Partners sponsored AWS re:Invent 2018, generating more than 785,000 total leads. More than 90 percent of sponsors said they were satisfied with the event and planned to renew in 2019.

2018 Attendee Audience Breakdown

From developers to decision makers, AWS re:Invent boasts a dynamic group of attendees from a wide variety of industries. See our full breakdown of attendance demographics below.

re-Invent-2019-Sponsors-1

2018 Expos and Content Hubs

The 2018 conference featured two expos across the campus in Las Vegas: the main expo at the Venetian and a satellite expo, focused on startups and experiential technology, at the Aria. By the numbers, re:Invent sponsors occupied more than 500,000 square feet of expo space and delivered over 150 speaking sessions.

AWS re:Invent 2018 featured a total of five Content Hubs, centralized areas where attendees could consume breakout session content. Sponsors received a list of leads from attendees who visited these locations.

Sponsorship Add-On Opportunities

Another benefit of AWS re:Invent sponsorship is the chance to use add-on opportunities that differentiate your brand through unique experiences, such as Restaurant Receptions, hackathons, and more.

For a full list of add-ons, check out our prospectus or contact the team at [email protected] for recommendations on best fit.

Provide High-Touch Experiences with Restaurant Receptions

re-Invent-2019-Sponsors-4

Formerly known as Pub Crawls, Restaurant Receptions are an opportunity for re:Invent sponsors to host attendees who share interest in specific topics in a relaxed, social setting.

With this turnkey solution, your company can leave the planning and execution to the AWS team. Simply align your brand with a venue or event concept of choice that reaches your target audience. Learn more here >>

Host a Hackathon

Launch challenges and give attendees an opportunity to engage with your product via our Jam Lounge. Sponsorships are still available in 1-, 2-, and 3-day options.

Are you a security partner? Consider sponsoring the security-specific jam, a four-hour hackathon that will allow you to showcase your tech and task engineers and developers with solving challenges. Learn more here >>

Promote Health and Wellness Initiatives

re-Invent-2019-Sponsors-3

Did you know that AWS re:Invent puts on 4K and 8K races for attendees?

As the official sponsor of these runs, your logo will be included on all promotional items, including t-shirts, in addition to your company’s branding on the Start and Finish lines. Learn more here >>

How to Get Started

If you have questions or want to learn more about AWS re:Invent 2019 sponsorship opportunities, please reach out to the team at [email protected].

If you’re ready to get started, fill out a contract request form and we’ll be in touch!

re-Invent-2019-Sponsors-5.1

from AWS Partner Network (APN) Blog

Introducing the AWS Industrial Software Competency Program

Introducing the AWS Industrial Software Competency Program

By Renata Melnyk, Sr. Partner Program Manager at AWS
By Josef Waltl, Industry Software Segment Lead at AWS

AWS-Industrial-Software-Competency

Industrial companies in process and discrete manufacturing leverage highly specialized software solutions to increase product innovation while decreasing production and operational costs in their value chain.

Last year, we announced AWS Industrial Software Competency solutions from AWS Partner Network (APN) Technology Partners for an end-to-end industrial software toolchain.

These partners provide solutions targeting one or more of the primary steps in discrete manufacturing or process industries: Product Design, Production Design, and Production/Operations. They follow Amazon Web Services (AWS) best practices for building the most secure, high-performing, resilient, and efficient cloud infrastructure for industry applications.

Following up on the success of our technology partners in this space, we are excited to welcome APN Premier and Advanced Consulting Partners to the AWS Industrial Software Competency program.

Learn more about AWS Industrial Software Competency Partners >>

AWS-Industrial-Software-Competency-1

Explore AWS Industrial Software Competency Partner Solutions

The AWS Competency Program helps customers identify and choose the world’s top APN Partners that have demonstrated technical proficiency and proven customer success in specialized solution areas.

To receive the AWS Competency designation, APN Partners must undergo a rigorous technical validation related to industry-specific technology. This validation gives customers complete confidence in choosing APN Partner solutions from the tens of thousands in the AWS Partner Network.

AWS customers can now explore APN Partner solutions in the following areas:

  • Product Design: Applications and services used in the design phase, including Computer Aided Design (CAD), Computer Aided Engineering (CAE), Electronic Design Automation (EDA), and civil engineering.
  • Production Design: Applications for factory layout and Computer-Aided Manufacturing (CAM), Product Lifecycle Management (PLM), and Product Data Management (PDM).
  • Production/Operations: Discrete and process industry applications like Manufacturing Execution Systems (MES), Manufacturing Operations Management (MOM), Plant Information Management System (PIMS), supply chain logistics, Industrial Internet of Things (IIoT), analytic applications for industrial use, and manufacturing specific Enterprise Resource Planning (ERP) solutions.

NEW! AWS Industrial Software Competency Consulting Partners

Congratulations to the APN Consulting Partners that have achieved the AWS Industrial Software Competency:

AWS Industrial Software Competency Technology Partners

These APN Technology Partners have achieved the AWS Industrial Software Competency designation:

Product Design

Production Design

Production/Operations

Read about how AWS Industrial Competency Partners are shaping the next Industrial Revolution >>

APN Partner Videos

Industrial Software Competency-YouTube-1

Watch all of our AWS Industrial Software Competency Partner videos >>

How to Join the AWS Industrial Software Competency Program

The AWS Competency Program recognizes APN Partners who demonstrate technical proficiency and proven customer success in specialized solution areas. APN Partners with experience in Industrial Software can learn more about becoming an AWS Competency Partner here.

The AWS Competency Partner Validation Checklist is for APN Partners applying for an AWS Competency. The checklist provides the criteria necessary to achieve the AWS Competency designation. APN Partners undergo a validation of their capabilities upon applying for the specific AWS Competency.

Validation Checklists are available in: English | Chinese | French | German | Italian | Japanese | Korean | Portuguese | Russian | Spanish

from AWS Partner Network (APN) Blog

How Braze on AWS Helps the American Cancer Society Drive Increased Donations

How Braze on AWS Helps the American Cancer Society Drive Increased Donations

By Ram Dileepan, Sr. Solutions Architect at AWS
By Jennifer McNamee, Director of Technology Partnerships at Braze
By Emily Halperin, Customer Advocacy Associate at Braze

Braze-Logo-1
Braze-APN-Badge-2
Connect with Braze-1
Rate Braze-1

Can you imagine a world without cancer? Well, that’s exactly what the American Cancer Society (ACS) is working to achieve.

According to the World Health Organization, cancer was responsible for an estimated 9.6 million deaths globally in 2018. The ACS is on a mission to free the world from cancer by funding and carrying out research, supporting cancer patients, and educating the public.

ACS achievements include $4.8 billion in research funding since 1946, 9 million free rides to treatment, and more than 900,000 low- or no-cost cancer screenings for patients throughout the world.

ACS funds many of these activities by hosting “Relay for Life” and “Making Strides Against Breast Cancer” events that allow participants to raise money via a mobile app called FUNdraising. This app helps users ask for donations through social networks and accept payments via the device’s camera and online services like PayPal.

To build even stronger relationships with their audience, the American Cancer Society sought a way to interact with users and drive increased app usage and donations. To get there, they teamed up with Braze, an AWS Partner Network (APN) Advanced Technology Partner with the AWS Digital Customer Experience Competency.

Braze has been providing brands with built-for-purpose customer engagement capabilities since 2011. The Braze platform makes messages feel more like conversations between you and your customers across channels such as push notifications, email, in-app messages, and more.

End-to-End Engagement Strategy

The healthcare industry is not new to Braze. Organizations have been using the Braze platform to improve patient experiences and the quality of healthcare for years.

The American Cancer Society, in particular, worked with Braze to improve the outcome of fundraising events by enhancing their mobile fundraising application experience.

ACS created an end-to-end engagement strategy using Braze technology embedded in their app. The approach started with driving increased app engagement and fundraising, and then grew to focus on providing transparency on how each user’s donations have positively impacted the lives of those with cancer.

ACS uses the FUNdraising app to engage users by leveraging the Braze platform to:

  • Remind participants of upcoming events. Users receive push notifications to remind them of upcoming events they have registered for.
  • Educate users on events they may not know about. App users receive pushes encouraging them to register for events they’ve attended in the past, as well as other ACS campaigns or relevant news and information.
  • Update users on goal progress. Users receive pushes based on how much of their donation goal they have met. Push messages are sent at 25, 50, and 75 percent of goal completion to encourage users to meet their objectives.
  • Cultivate community through transparency. ACS sends notifications to users who have made donations to show how their generosity has positively impacted the lives of those with cancer.

Below, you can see a few messages sent to users using Braze and the FUNdraising app.

ACS-Braze-AWS-1.1

The Braze Platform

Braze is a customer engagement platform built for today’s mobile-first world. It helps brands create live views of their customers that stream and process historical, in-the-moment, and predictive data in an interactive feedback loop. With Braze, immediate action on insights can be taken with relevant messaging across mobile, email, and web.

Traditional batch processing was designed for the era of data storage and retrieval. The new era of stream processors enabled Braze to build a system from the ground up that can both process interactions as they happen (low latency) and handle large amounts of data (high throughput).

This process is critical because, in order to build good customer experiences, brands must have the foundation in place to react to customers in real-time. People’s time is valuable, and the way to show customers you value them is through timely, personalized messages delivered across multiple channels.

“Braze wants to help improve the relationships and customer experience patients have with their health care providers,” says Jon Hyman, Co-Founder and CTO at Braze. “This means enabling health care companies to provide thoughtful messaging, such as helping large insurance companies inform customers about scheduling an annual physical, or reminding them of scheduled appointments.

“These touch points lead to a better overall experience and a more satisfied customer,” adds Hyman.

As a brand, it’s imperative you are able to reach customers wherever they are, whenever they need you, and on the channel they prefer. Braze’s visual customer journey tool, Canvas, makes building sophisticated, responsive campaigns easy. The drag-and-drop interface guides you through the creation process, one step at a time.

With Canvas, campaigns are created that are as unique and dynamic as the humans who build them and the audiences they seek to serve.

Emphasis on Privacy and Security

The Health Insurance Portability and Accountability Act (HIPAA) is a U.S. federal law that protects the privacy of protected health information (PHI) and sets limits on the sharing and use of this information without patient authorization.

Braze takes HIPAA compliance just as seriously as its health care customers do. Braze even created an entirely separate cluster for HIPAA customers so their data doesn’t have to reside next to non-HIPAA customers’ data.

To further mitigate risk, Braze hired a HIPAA lawyer to ensure their procedures, policies, and security controls are also HIPAA compliant. What’s more, all employees working with HIPAA customers receive special, HIPAA-specific training.

Customer Success

Since adopting Braze, along with a mobile app engagement strategy designed to drive engagement, awareness, and lifecycle management, the American Cancer Society has sent push notifications to identify key moments and messaging for their audience.

The targeted messaging used by ACS takes full advantage of audience segmentation and personalization, boosting awareness and funding by strengthening their audience’s educational and emotional connections with the organization.

The result? The time users spent in the app increased by more than 300 percent. In addition, the number of opens per month increased by 100 percent, and the overall lifetime value (LTV) of app users is up to 30 percent higher compared to event participants who fundraise without the app.

When messages explained in concrete terms how participants’ fundraising efforts were helping people with cancer, and where the money was going (including efforts to improve cancer survival, decrease the incidence of cancer, and enhance the quality of life for patients and their caregivers), more funds flowed in.

Tens of thousands of additional dollars in in-app funds (an estimated 34 percent increase) have been raised since ACS adopted Braze. Overall, 50 percent of recipients started sessions within three days, and new messages were opened at a rate of 30 percent.

ACS was also looking for ways to increase fundraising during the traditionally slow period between November and December. They used the Braze platform to send push notifications to their mobile app during this period.

At the end of the fundraising cycle, users who received push notifications had raised 10 percent more funds than the previous year, which was directly attributed to the in-app messages.

The Braze Experience

Each Braze customer has access to an onboarding and integration team member to go through the technical setup. Onboarding managers help to integrate the Braze software developer kit (SDK) into customers’ apps, supporting data collection and other key functionalities.

Braze also provides marketing and strategic guidance on best practices, and helps end users use the Braze dashboard and become experts with the platform.

All customers have access to robust documentation and a learning module called LAB (Learning at Braze). Once customers finish onboarding, they are fully supported over the long term by a Success Manager who provides strategic and marketing guidance.

For ACS, the connection to their volunteers and donors is essential. By using Braze to forge a human connection with their audience, ACS has been able to drive more users to their app, keep those users more engaged, and significantly increase fundraising, providing a major boost to the organization’s mission.

The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.


Braze-APN-Blog-CTA-1


Braze – APN Partner Spotlight

Braze is an AWS Competency Partner that provides brands with built-for-purpose customer engagement capabilities. The Braze platform makes messages feel more like conversations between you and your customers across channels such as push notifications, email, in-app messages, and more.

Contact Braze | Solution Overview

*Already worked with Braze? Rate this Partner

*To review an APN Partner, you must be an AWS customer that has worked with them directly on a project.

from AWS Partner Network (APN) Blog

How to Use Amazon Rekognition and Amazon Comprehend Medical to Get the Most Out of Medical Imaging Data in Research

How to Use Amazon Rekognition and Amazon Comprehend Medical to Get the Most Out of Medical Imaging Data in Research

By Sarah Gabelman, Director of Product Management at Ambra Health

Ambra-Health-Logo-1
APN Advanced Technology Partner-3
Connect with Ambra Health-1
Rate Ambra Health-1

Medical imaging is a key part of patient health records and clinical trial workflows. These workflows are complex and involve hunting down imaging from an onsite clinical PACS (picture archiving and communication system), requesting imaging be sent from an outside facility, or waiting for imaging to arrive on a compact disc (CD).

Many medical facilities still burn medical imaging onto CDs, a time-consuming and error-prone process in which patient data must be matched. The hospital staff traditionally assigned to this task (often referred to as the film library) are frequently overwhelmed with enormous stacks of CDs.

This process can take anywhere from a few hours when imaging is onsite, to days or weeks if imaging is mailed or brought by courier service from an outside facility.

Additionally, imaging data on CDs and on-premises archives can create significant risks from lost studies, errors, and unscheduled PACS downtime. Even when an electronic workflow is implemented, there can still be complex challenges around matching data with parent studies, customizing case report forms, integrating with post-processing systems, and ingesting imaging from outside sites.

In this post, I will discuss key challenges faced by medical facilities and suggest approaches for using Amazon Web Services (AWS) tool sets, as enhanced by Ambra Health, to meet each challenge.

Ambra Health is an AWS Partner Network (APN) Advanced Technology Partner and a medical data and image management cloud software company. We are personally and professionally committed to the mission of delivering better care through better technology—right at the heart of the care network.

This is an article for developers who are working with diagnostic medical imaging or DICOM (digital imaging and communications in medicine) data, either in academic or commercial research settings relating to pharmaceutical development and/or algorithm development for artificial intelligence (AI) or machine learning (ML).

Background

At Ambra Health, we understand these challenges from a unique standpoint. Our company developed a cloud-based image management solution that lets institutions of all sizes securely store, share, and view medical imaging.

Our focus has been DICOM imaging, including X-ray, CT, ultrasound, and MRI studies. Ambra currently manages more than six billion images, and the company has a growth rate of 40 percent year over year. Our customers include six of the top 10 health systems and three of the top four children’s hospitals.

We turned to AWS to help us scale and improve our ever-growing workflow and use cases.

First, we found that customers had moved some of their storage infrastructure to AWS, so we needed to act as a flexible partner. These customers were based both in the United States and internationally, and we wanted to enable them to run our system in the architecture that was best suited to their cost and operational structure.

Second, our customers were rapidly realizing the imaging data they held could be useful and lead to new insights. We call this process transforming a liability (such as imaging data held for record keeping purposes) into an asset (like imaging that can provide new diagnostic and therapy insights).

To enable these insights, Ambra needed to provide customers with enhanced tool sets around searching for relevant data, and anonymization and de-identification in both metadata as well as pixel-level data in the images themselves.

Searching for Relevant Data

Amazon Elasticsearch Service enables Ambra to quickly index and search through billions of images and studies. Ambra also used Amazon Comprehend Medical and other natural language processing (NLP) tools to extract medical information from unstructured reports.

This allowed us to accurately identify studies with specific characteristics, such as diagnosis records (positive or negative) and medical procedure records. As a result, we can help institutions maintain a record of the information they have based on conditions and other search criteria, rather than simply personally identifiable information (PII).

With this approach, researchers are able to create cohorts of relevant research data based on, for example, lesion size and/or body part location. This can be invaluable as researchers try to find the needles in haystacks of data.

Ambra also provides relevant reporting and summation. This automated procedure replaced manual curation at many institutions that were previously unable to curate data rapidly enough and/or at scale.

Automated features analyze the HL7 message and return the diagnosis and medical procedure details it contains in under a second. With a manual workflow, it would take many minutes per study for a user to view the HL7 report and parse through the text for the diagnosis and procedure details.
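
As a rough illustration of this kind of extraction (not Ambra’s implementation; the report text below is made up for the example), Amazon Comprehend Medical can pull condition and procedure entities out of free-text report content:

```
// Extract diagnosis and procedure entities from free-text report content.
const AWS = require('aws-sdk');
const comprehendMedical = new AWS.ComprehendMedical();

const reportText =
  'Impression: 1.2 cm lesion in the left lower lobe. CT chest with contrast performed.';

comprehendMedical.detectEntitiesV2({ Text: reportText }).promise()
  .then((result) => {
    // Keep only conditions and procedures for cohort-style searching.
    const findings = result.Entities.filter((entity) =>
      entity.Category === 'MEDICAL_CONDITION' ||
      entity.Category === 'TEST_TREATMENT_PROCEDURE'
    );
    findings.forEach((entity) =>
      console.log(`${entity.Category}: ${entity.Text} (score ${entity.Score.toFixed(2)})`)
    );
  });
```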

In this video, hear from Morris Panner, CEO at Ambra Health, who shares his thoughts on the value of being an APN Partner and why the industry experience of the AWS teams he’s worked with has been so helpful.

Removing Protected Health Information (PHI)

De-identified DICOM images are an important component of clinical research workflows, but the process of manually de-identifying large amounts of pixel data is both time-consuming and labor intensive for customers.

Ambra Health’s automatic pixel de-identification feature uses Amazon Rekognition and Amazon Comprehend Medical APIs to allow customers to de-identify images more quickly and to reduce user error.

Ambra Health offers two anonymization options using Amazon Rekognition and Amazon Comprehend Medical. The first option masks all text located on the DICOM image. The second masks only text that is recognized as PHI (protected health information).

When the all-text option is enabled for a customer, Ambra converts the DICOM images to JPG format and sends them to Amazon Rekognition. The AWS service then identifies the text on each DICOM and returns the text strings and coordinates found on the images to Ambra Health. We use these coordinates to mask all text on the DICOM images.

When the PHI-text option is enabled, Ambra converts the DICOM images to JPG format and sends them to both Amazon Rekognition and Amazon Comprehend Medical. First, Amazon Rekognition is used to identify the text strings and coordinates on each DICOM. These are sent to Amazon Comprehend Medical and both the text strings and coordinates are passed back to Ambra Health.

Amazon Comprehend Medical processes the text strings provided by Amazon Rekognition, identifies the text strings that contain PHI, and then passes the PHI text strings back to Ambra. We use these coordinates to mask the PHI on the DICOM images.

Ambra also de-identifies other known PHI strings in addition to those identified by Amazon Comprehend Medical.

Ambra Health-Rekognition

Figure 1 – Anonymization options using Amazon Rekognition and Amazon Comprehend Medical.

The diagram above highlights the two anonymization options offered by Ambra Health using Amazon Rekognition and Amazon Comprehend Medical. The first option masks all text located on the DICOM image, while the second masks only text that is recognized as PHI.
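
The snippet below is a simplified sketch of the PHI-only flow using the AWS SDK for JavaScript (it is not Ambra’s code, and the masking step itself is omitted):

```
// Find the bounding boxes of burned-in text that contains PHI on a JPG
// rendering of a DICOM image, so those regions can then be masked.
const AWS = require('aws-sdk');
const fs = require('fs');

const rekognition = new AWS.Rekognition();
const comprehendMedical = new AWS.ComprehendMedical();

async function findPhiRegions(jpgPath) {
  const imageBytes = fs.readFileSync(jpgPath);

  // Step 1: Amazon Rekognition detects text strings and their coordinates.
  const { TextDetections } = await rekognition
    .detectText({ Image: { Bytes: imageBytes } })
    .promise();
  const lines = TextDetections.filter((detection) => detection.Type === 'LINE');

  // Step 2: Amazon Comprehend Medical flags which detected strings contain PHI.
  const { Entities } = await comprehendMedical
    .detectPHI({ Text: lines.map((line) => line.DetectedText).join('\n') })
    .promise();
  const phiStrings = Entities.map((entity) => entity.Text);

  // Return only the bounding boxes whose text was flagged as PHI.
  return lines
    .filter((line) => phiStrings.some((phi) => line.DetectedText.includes(phi)))
    .map((line) => line.Geometry.BoundingBox);
}
```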

Customer Use Case

At one regional academic medical center, the film library staff found themselves bogged down by searching for and downloading imaging data from CDs. They also faced a unique challenge with regard to a trial in which patients were associated with a parent study in the region and subject IDs had to be conserved.

Ambra needed to create a custom workflow to conserve imaging IDs while anonymizing patient information to line up clinical data with patient data. Our engineering team, using AWS tool sets, was able to customize the output of data so it could be stored and reported under specifications set by the statisticians on the team.

The Ambra viewer was also customized to meet the stringent demands of the neuroradiologists reviewing the studies. Today, study upload and anonymization take just minutes. More than 4,000 images have been successfully uploaded into the system and matched.

The reduction in administration time for the team allows them to focus on the studies themselves, leading to greater insights that will enable better patient care across the board.

Summary

Medical imaging has traditionally been thought of as a burdensome liability. However, facilities today can use data for exciting new initiatives leading to unparalleled discoveries.

The challenge with imaging data is appropriate utilization and anonymization. Ambra Health sought to provide customers with an enhanced tool set to search for relevant data and anonymize and de-identify imaging.

Today, Ambra’s automatic pixel de-identification feature uses Amazon Rekognition and Amazon Comprehend Medical APIs to allow customers to de-identify images and reduce error. Now, it’s easier than ever to deploy an integrated application fabric that elevates healthcare efficiency and care.

To learn more about how Ambra Health can free data from silos at your organization, visit AWS Marketplace or contact [email protected].

The content and opinions in this blog are those of the third party author and AWS is not responsible for the content or accuracy of this post.


Ambra-Health-APN-Blog-CTA-1


Ambra Health – APN Partner Spotlight

Ambra Health is an APN Advanced Technology Partner. A leading medical data and image management cloud software company, Ambra is committed to the mission of delivering better care through better technology—right at the heart of the care network.

Contact Ambra Health | Solution Overview | AWS Marketplace

*Already worked with Ambra Health? Rate this Partner

*To review an APN Partner, you must be an AWS customer that has worked with them directly on a project.

from AWS Partner Network (APN) Blog