
Improvising and Modernizing the Enterprise Asset Management Solution on AWS


By Thooyavan Arumugam, Senior Cloud Architect at Tech Mahindra
By Saurabh Shrivastava, Partner Solutions Architect at AWS
By Vivek Raju, Manager, Partner Solutions Architect at AWS


IBM Maximo is an Enterprise Asset Management (EAM) solution that helps organizations manage their assets, track operations, and perform preventive maintenance using predictive analysis and enterprise-ready features.

Maximo can be deployed in an on-premises environment, as well as on a public cloud or hybrid cloud environment.

As cloud computing becomes the industry norm, Amazon Web Services (AWS) provides Infrastructure as a Service (IaaS) and a broad range of services to deploy any enterprise-grade application in the cloud.

Businesses all around the world are using the breadth and depth of AWS to become more cloud-native. AWS fuels agility and scalability in your application environment by providing on-demand infrastructure instantly.

Recovering a degraded environment is also straightforward on AWS, whose snapshot and machine image functionality allows an instance to be rebuilt from an existing image or snapshot within a short timeframe.

This post explores Boral Australia’s journey to the cloud and how Tech Mahindra, an AWS Partner Network (APN) Advanced Consulting Partner, helped the customer’s cloud migration to host IBM Maximo on AWS. Tech Mahindra is also a member of the AWS Managed Service Provider (MSP) Partner Program.


Boral Limited is headquartered in Sydney, with 16,000 employees working across 700 operational sites. The company primarily deals in building products and construction materials.

Boral had their asset management platform hosted in a datacenter located in Sydney. The major trigger for this migration project was the need to upgrade the end-of-life (EOL) version of its asset management product, IBM Maximo.

The customer also faced the following challenges:

  • The solution was deployed across multiple sites, with each site working in silos, causing a lack of visibility into asset usage across sites.
  • An older version of the asset management product was running in its EOL stage.
  • The solution did not allow for standardization of maintenance practices or effective utilization of resources across locations, and left no room for digitization or technical growth.

To support an asset management transformation and future-proof the solution, Boral chose to implement the most current version of Maximo in the cloud.

In this post, we’ll describe how Tech Mahindra helped Boral to migrate their asset management platform to AWS using cloud-native services.

Application Architecture

Boral’s application has a three-tier architecture re-platformed onto Amazon Elastic Compute Cloud (Amazon EC2) instances running the Windows Server 2012 R2 operating system.

Tech Mahindra established an on-premises integration to Oracle’s financial application using MuleSoft APIs. We used Tech Mahindra’s Migration of Application to Cloud (MAC) framework to migrate Boral’s database from on-premises to IBM DB2 hosted on Amazon EC2.

Tech Mahindra also set up a change management pipeline to automate the deployment of patches in a systematic way from non-production to production environments.

In the following sections, we’ll share details about the overall architecture and how Boral re-platformed and achieved cloud-native architecture on AWS.

The diagram in Figure 1 shows the application architecture after migrating to AWS.


Figure 1 – Architecture for Enterprise Asset Management system leveraging AWS.

This is a three-tier architecture hosting the following IBM Maximo components:

  • IBM HTTP Server in the web tier: A separate, dedicated HTTP server is configured to work with the J2EE application server. Users access the Maximo Asset Management applications through a web browser, which sends requests to the IBM HTTP web server.
  • IBM WebSphere in the application tier: IBM WebSphere Application Server manages the Maximo Asset Management JavaServer Pages (JSPs), XML, and business logic components. Maximo Asset Management runs on this commercial J2EE application server.
  • IBM DB2 in the database tier: Stores all information about assets, such as their conditions, locations, and related records.

AWS Services

The following AWS services and features helped Tech Mahindra host components of Boral’s asset management solution (IBM Maximo) on AWS.

  • AWS Virtual Private Network (VPN): Establishes a secure and private tunnel from the on-premises datacenter to the AWS global network.
  • Amazon Route 53: A Domain Name System (DNS) service that routes global traffic to the application, working with Amazon CloudFront edge locations.
  • Amazon CloudFront: Routes user traffic to the application using worldwide edge locations to achieve low latency.
  • AWS WAF: A web application firewall that’s applied on CloudFront distribution to protect against common exploits that could impact application availability, compromise security, or consume excessive resources.
  • Amazon Virtual Private Cloud (VPC): Sets up a logically isolated, virtual network where the application can run securely.
  • Elastic Load Balancing (ELB): Load balances HTTP/HTTPS applications.
  • Amazon EC2: Provides compute capacity in the cloud. Amazon EC2 was used to host the web, application, and database server.
  • AWS Systems Manager: Automates maintenance and deployment tasks on Amazon EC2 instances, automatically applying patches, updates, and configuration changes across resource groups.
  • Amazon CloudWatch: Monitors the entire asset management platform and stores application logs for analysis.
  • AWS Config: Assesses, audits, and evaluates the configurations of AWS resources.
  • AWS CloudTrail: Enables governance, compliance, operational auditing, and risk auditing of the AWS account by logging, continuously monitoring, and retaining account activity related to actions across the AWS infrastructure.
  • Amazon Simple Email Service (SES): A cloud-based email sending service that sends all of the application’s email.
  • Amazon Simple Storage Service (Amazon S3): Highly scalable object storage used to store instance snapshot backup.
  • Amazon Elastic Block Store (EBS): Provides persistent block storage volumes for use with Amazon EC2 instances. Used as block storage volume for the web, application, and database server.
  • AWS Identity and Access Management (IAM): Manages access to AWS services and resources securely. Used to handle application access across AWS services.
  • AWS Lambda: Runs code without provisioning or managing servers. Lambda is used to automate rules for AWS Config, AWS WAF, IAM, and the server snapshot pipeline.

Application Security and Encryption

To secure connectivity between the on-premises datacenter and AWS, Tech Mahindra set up VPN tunnels. AWS Site-to-Site VPN extends the datacenter to the cloud, connecting to the VPC through secure, private IP security (IPsec) and Transport Layer Security (TLS) tunnels.

Amazon VPC is configured to provide isolated network boundaries to host the resources and restrict network access. The team created multiple private subnets to host applications with no open internet endpoints, reducing the blast radius of any unforeseen security incidents.

VPC security groups were configured at the instance level to restrict port and protocol access to corporate networks only.

An additional layer of network security was added using network access control lists (network ACLs), which act as a firewall controlling traffic in and out at the subnet level. All servers are hosted in private subnets, and all outbound requests are routed through a NAT gateway.
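As a concrete illustration of the instance-level restriction, the ingress rule can be expressed as the parameters passed to EC2's authorize_security_group_ingress call. This is a minimal sketch with boto3-style parameters; the CIDR range and port are assumed placeholders, not Boral's actual values.

```python
# Sketch of the instance-level restriction described above: a security
# group ingress rule allowing HTTPS only from the corporate network.
# The CIDR block and port are illustrative placeholders.

CORPORATE_CIDR = "10.0.0.0/8"  # assumed corporate network range

def corporate_ingress_params(group_id, port=443, cidr=CORPORATE_CIDR):
    """Build the parameters for ec2.authorize_security_group_ingress."""
    return {
        "GroupId": group_id,
        "IpPermissions": [{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": cidr,
                          "Description": "corporate network only"}],
        }],
    }
```

Building the parameters as plain data keeps the rule definition reviewable and testable separately from the API call itself.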

Tech Mahindra locked down ports, and IAM was configured to grant access based on the principle of least privilege. Role-based access was also configured for both resources and users.

Tech Mahindra additionally configured AWS CloudTrail and AWS Config to provide continuous audit and compliance of the environment.

SSL/TLS certificates were procured from a third-party vendor and managed using AWS Certificate Manager, which integrates with CloudFront and Elastic Load Balancing to secure data in transit.

Amazon EBS volumes were encrypted to provide security for the data at rest. AWS WAF was leveraged to protect the web application from DDoS and SQL Injection attacks.

A Lambda function was used to update AWS WAF rules dynamically and to run regular backup jobs. Tech Mahindra automated the instance resources monitoring report, which sends an email to the customer every day using Amazon SES.
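The backup portion of that automation can be sketched as a small Lambda handler. This is a hypothetical reconstruction using boto3; the Backup tag, retention period, and function names are assumptions for illustration, not the actual implementation.

```python
# Hypothetical sketch of the snapshot backup job described above.
# Assumes instances to back up carry a Backup=true tag; tag keys and
# the retention period are illustrative, not the actual values used.
from datetime import datetime, timedelta, timezone

RETENTION_DAYS = 14  # assumed retention window

def expired_snapshots(snapshots, now, retention_days=RETENTION_DAYS):
    """Pure helper: return IDs of snapshots older than the retention window."""
    cutoff = now - timedelta(days=retention_days)
    return [s["SnapshotId"] for s in snapshots if s["StartTime"] < cutoff]

def handler(event, context):
    # boto3 is imported lazily so the module also loads outside Lambda.
    import boto3
    ec2 = boto3.client("ec2")
    # Snapshot every volume attached to instances tagged for backup.
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag:Backup", "Values": ["true"]}]
    )["Reservations"]
    for res in reservations:
        for inst in res["Instances"]:
            for mapping in inst.get("BlockDeviceMappings", []):
                ec2.create_snapshot(
                    VolumeId=mapping["Ebs"]["VolumeId"],
                    Description=f"pipeline backup of {inst['InstanceId']}",
                )
    # Prune snapshots older than the retention window.
    snaps = ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]
    for snap_id in expired_snapshots(snaps, datetime.now(timezone.utc)):
        ec2.delete_snapshot(SnapshotId=snap_id)
```

Keeping the retention logic in the pure `expired_snapshots` helper makes it testable without AWS access.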

Migration and Re-Platforming

Tech Mahindra’s MAC toolkit contains a proven and tested migration framework and methodology for accurate, predictable, and accelerated migrations to the cloud.

The framework consists of various cookbooks, tools, and automation libraries for repeatable and predictable migration execution. It’s supported with factory model implementation for repeatable and predictable performance.

MAC has a six-phase migration process with a well-defined set of artifacts used at each phase. Tech Mahindra uses in-house developed and industry tools to automate the migration phases so that enterprise application migration is done right the first time, every time.

As part of the discovery exercise for Boral, Tech Mahindra assessed and analyzed the customer’s existing environment and arrived at a migration approach that yielded the maximum benefit.

Post assessment, Tech Mahindra designed and built the target environment based on AWS best practices, which includes re-platformed OS and database components.

Database Migration

For this engagement, Tech Mahindra had to migrate three variants of application data, which were handled through meticulous planning and execution.

The three variants of data were:

  • Master Data (Production)
  • Open Transaction Data (Production)
  • History Transaction Data (Non-Production)

Boral had 20 million records of entries in their system, which Tech Mahindra migrated to AWS through an extensive data migration approach that included identifying the necessary data from the existing system, extracting data, cleansing it, and transforming the data through Maximo Integration Framework (MIF).
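The extract, cleanse, and transform steps can be sketched as follows. The field names, target schema, and cleansing rules here are illustrative assumptions; the actual load into Maximo was performed through MIF.

```python
# Minimal sketch of the cleanse/transform stage of the data migration.
# Field names and cleansing rules are illustrative only; the real load
# into Maximo was performed through the Maximo Integration Framework (MIF).

def cleanse(record):
    """Trim whitespace, drop empty fields, and normalize the asset ID."""
    cleaned = {k: v.strip() for k, v in record.items()
               if isinstance(v, str) and v.strip()}
    if "asset_id" in cleaned:
        cleaned["asset_id"] = cleaned["asset_id"].upper()
    return cleaned

def transform(record):
    """Map legacy column names onto an (assumed) Maximo-style schema."""
    mapping = {"asset_id": "ASSETNUM", "site": "SITEID", "desc": "DESCRIPTION"}
    return {mapping[k]: v for k, v in record.items() if k in mapping}

def migrate(records):
    """Extract -> cleanse -> transform; skip rows with no asset ID."""
    out = []
    for rec in records:
        cleaned = cleanse(rec)
        if "asset_id" not in cleaned:
            continue  # unusable row: no key to migrate on
        out.append(transform(cleaned))
    return out
```

At 20 million records, a real run would stream batches rather than build one list in memory, but the per-record flow is the same.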

Access to the Application

Users reach the asset management portal through Amazon Route 53, with Amazon CloudFront used as the content delivery network. CloudFront has an ELB origin that distributes load across the attached frontend (web) servers.

When performance degraded and a server needed a larger capacity, vertical scaling was done by detaching one server at a time from the ELB, increasing the instance size, and attaching it back, allowing resources to scale up without downtime.
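That rolling procedure can be sketched with boto3-style clients passed in as parameters. This is a sketch, not the actual tooling: waiter logic (confirming the instance has stopped, restarted, and passed ELB health checks) is deliberately elided.

```python
# Sketch of the rolling vertical-scale procedure described above: detach
# one server from the Classic ELB, resize it, and reattach it before
# touching the next. Clients are injected (e.g., boto3 "elb" and "ec2"
# clients); waiting for state transitions is elided for brevity.

def resize_one(elb, ec2, lb_name, instance_id, new_type):
    # 1. Take the server out of rotation so the ELB stops sending traffic.
    elb.deregister_instances_from_load_balancer(
        LoadBalancerName=lb_name, Instances=[{"InstanceId": instance_id}])
    # 2. Stop the instance, change its type, and start it again.
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.modify_instance_attribute(
        InstanceId=instance_id, InstanceType={"Value": new_type})
    ec2.start_instances(InstanceIds=[instance_id])
    # 3. Put the server back behind the ELB.
    elb.register_instances_with_load_balancer(
        LoadBalancerName=lb_name, Instances=[{"InstanceId": instance_id}])

def rolling_resize(elb, ec2, lb_name, instance_ids, new_type):
    """Resize the fleet one instance at a time, so capacity never drops
    by more than a single server."""
    for instance_id in instance_ids:
        resize_one(elb, ec2, lb_name, instance_id, new_type)
```

Injecting the clients keeps the call sequence testable with fakes, without touching real AWS resources.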

The IBM HTTP Server web tier handles requests from users coming through the load balancer. Requests travel to the application servers configured with IBM WebSphere; one application server acts as the WebSphere administrator controlling the other nodes in the cluster. The clustering configured at the application tier, in turn, takes care of load balancing between the servers.

IBM DB2 was used for the database and was configured to run on Amazon EC2. Database replication was enabled between Availability Zones, supporting the application in an active/passive configuration.

MuleSoft was used as a middleware hosted in another AWS account. It was an enabler for data communication between Maximo and Oracle financial applications hosted in the customer’s datacenter.

All emails triggered from the application were delivered through Amazon SES.

Operational Maintenance Pipeline

AWS Systems Manager was used to patch the OS periodically. Tech Mahindra also created patch baselines and maintenance windows per the standards set forth for ongoing patching.

Patches were scheduled to update the non-production servers first, followed by deployment to the production servers, per the Tech Mahindra mCOPS (Managed Cloud Operations) standard management process.
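A sketch of what such a Patch Manager setup might look like, expressed as the parameters for SSM's create_patch_baseline and create_maintenance_window calls. The cron schedules and seven-day stagger between tiers are illustrative assumptions, not the actual values used.

```python
# Sketch of the Patch Manager configuration described above: one baseline
# with a patch approval delay, and a maintenance window per tier so
# non-production servers are patched a week before production. Values
# (classifications, cron schedules, durations) are illustrative only.

def patch_baseline_params(name, approve_after_days=7):
    """Parameters for ssm.create_patch_baseline (Windows Server fleet)."""
    return {
        "Name": name,
        "OperatingSystem": "WINDOWS",
        "ApprovalRules": {"PatchRules": [{
            "PatchFilterGroup": {"PatchFilters": [
                {"Key": "CLASSIFICATION",
                 "Values": ["CriticalUpdates", "SecurityUpdates"]},
            ]},
            "ApproveAfterDays": approve_after_days,
        }]},
    }

def maintenance_window_params(name, cron, duration_hours=3):
    """Parameters for ssm.create_maintenance_window."""
    return {
        "Name": name,
        "Schedule": cron,
        "Duration": duration_hours,
        "Cutoff": 1,  # stop launching new tasks 1 hour before close
        "AllowUnassociatedTargets": False,
    }

# Non-prod patches on the first Sunday; prod follows a week later.
WINDOWS = [
    maintenance_window_params("nonprod-patching", "cron(0 2 ? * SUN#1 *)"),
    maintenance_window_params("prod-patching", "cron(0 2 ? * SUN#2 *)"),
]
```

Each dict would be splatted into the corresponding boto3 SSM call (e.g., `ssm.create_patch_baseline(**patch_baseline_params("maximo-windows"))`).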

Tech Mahindra mCOPS provides the following solution and services:

  • Improved security at both infrastructure and application level.
  • Content delivery network implementation.
  • Cloud infrastructure monitoring.
  • Serverless backup and recovery solution.
  • Optimized cloud environment and capacity management.

Leveraging mCOPS, Boral can seamlessly manage a highly available and persistent cloud environment with optimized resource utilization and reduced cloud spend. Tech Mahindra manages more than 70,000 instances across private and public clouds using mCOPS.

Putting it All Together

Tech Mahindra helped Boral to maximize business benefit by hosting Maximo on AWS.

The new system delivered the following benefits:

  • Migrated the entire asset management system to AWS.
  • Implemented the latest version of Maximo on AWS.
  • Migrated 20+ years of multi-site data into a single data lake-style repository.
  • Previously, asset management was hosted on a physical server with a long maintenance cycle and high mean time to repair. Both were dramatically reduced on AWS, as the customer is able to provision and replace servers quickly.
  • Availability and reliability of the application significantly improved since the application was redesigned to use core AWS services such as ELB and caching technologies for content delivery.
  • Using AWS services and features, Tech Mahindra could provide the customer with 99.95% application availability at the lowest cost.
  • Improved user experience with Amazon CloudFront using edge locations, and the ability to lock down the coverage area using geo restriction features.
  • Environments were secured using AWS best practices and recommendations incorporated with AWS WAF. Dynamic rules were updated using AWS Lambda along with IAM, AWS Config, and CloudTrail.


A business must continue evolving to improve its customer experience, both to satisfy customer needs and save on costs.

Customers get value for money and achieve agility by hosting enterprise applications in the cloud, where you only pay for what you use, and much of the heavy lifting of high availability and scalability is handled by the platform.

In this post, we learned about different AWS components that can help you to host IBM Maximo on AWS. We explored how traffic flows from Amazon Route53 to applications through Elastic Load Balancing, which handles the load by distributing traffic across the server fleet.

We learned how to ensure network security using Amazon VPC and restrict access using security groups and network ACLs. For audit and monitoring, you can use CloudTrail, CloudWatch, and AWS Config. You also learned how to host enterprise software like IBM Maximo on AWS with a three-tier architecture.


Tech Mahindra – APN Partner Spotlight

Tech Mahindra is an AWS Managed Service Provider. They offer innovative and customer-centric IT services that connect across a number of technologies to deliver tangible business value and experiences to customers.

Contact Tech Mahindra | Practice Overview

*Already worked with Tech Mahindra? Rate this Partner

*To review an APN Partner, you must be an AWS customer that has worked with them directly on a project.

from AWS Partner Network (APN) Blog

Meet the Newest AWS News Bloggers!

Meet the Newest AWS News Bloggers!

I wrote my first post for this blog way back in 2004! Over the course of the first decade, the amount of time that I devoted to the blog grew from a small fraction of my day to a full day. In the early days my email inbox was my primary source of information about upcoming launches, and also my primary tool for managing my work backlog. When that proved to be unscalable, Ana came onboard and immediately built a ticketing system and set up a process for teams to request blog posts. Today, a very capable team (Greg, Devin, and Robin) takes care of tickets, platforms, comments, metrics, and so forth so that I can focus on what I like to do best: using new services and writing about them!

Over the years we have experimented with a couple of different strategies to scale the actual writing process. If you are a long-time reader you may have seen posts from Mike, Jinesh, Randall, Tara, Shaun, and a revolving slate of guest bloggers.

News Bloggers
I would like to introduce you to our current lineup of AWS News Bloggers. Like me, the bloggers have a technical background and are prepared to go hands-on with every new service and feature. Here’s our roster:

Steve Roberts (@bellevuesteve) – Steve focuses on .NET tools and technologies.

Julien Simon (@julsimon) – Julien likes to help developers and enterprises to bring their ideas to life.

Brandon West (@bwest) – Brandon leads our developer relations team in the Americas, and has written a book on the topic.

Martin Beeby (@thebeebs) – Martin focuses on .NET applications, and has worked as a C# and VB developer since 2001.

Danilo Poccia (@danilop) – Danilo works with companies of any size to support innovation. He is the author of AWS Lambda in Action.

Sébastien Stormacq (@sebesto) – Sébastien works with builders to unlock the value of the AWS cloud, using his secret blend of passion, enthusiasm, customer advocacy, curiosity, and creativity.

We are already gearing up for re:Invent 2019, and can’t wait to bring you a rich set of blog posts. Stay tuned!


from AWS News Blog

Learn about AWS Services & Solutions – July AWS Online Tech Talks

Learn about AWS Services & Solutions – July AWS Online Tech Talks

AWS Tech Talks

Join us this July to learn about AWS services and solutions. The AWS Online Tech Talks are live, online presentations that cover a broad range of topics at varying technical levels. These tech talks, led by AWS solutions architects and engineers, feature technical deep dives, live demonstrations, customer examples, and Q&A with AWS experts. Register Now!

Note – All sessions are free and in Pacific Time.

Tech talks this month:


July 24, 2019 | 11:00 AM – 12:00 PM PT – Building System of Record Applications with Amazon QLDB – Dive deep into the features and functionality of our first-of-its-kind, purpose-built ledger database, Amazon QLDB.


July 31, 2019 | 11:00 AM – 12:00 PM PT – Machine Learning on Amazon EKS – Learn how to use KubeFlow and TensorFlow on Amazon EKS for your machine learning needs.

Data Lakes & Analytics

July 31, 2019 | 1:00 PM – 2:00 PM PT – How to Build Serverless Data Lake Analytics with Amazon Athena – Learn how to use Amazon Athena for serverless SQL analytics on your data lake, transform data with AWS Glue, and manage access with AWS Lake Formation.

August 1, 2019 | 11:00 AM – 12:00 PM PT – Enhancing Your Apps with Embedded Analytics – Learn how to add powerful embedded analytics capabilities to your applications, portals and websites with Amazon QuickSight.


July 25, 2019 | 9:00 AM – 10:00 AM PT – MySQL Options on AWS: Self-Managed, Managed, Serverless – Understand different self-managed and managed MySQL deployment options on AWS, and watch a demonstration of creating a serverless MySQL-compatible database using Amazon Aurora.


July 30, 2019 | 9:00 AM – 10:00 AM PT – Build a Serverless App in Under 20 Minutes with Machine Learning Functionality Using AWS Toolkit for Visual Studio Code – Get a live demo on how to create a new, ready-to-deploy serverless application.

End-User Computing
July 23, 2019 | 1:00 PM – 2:00 PM PT – A Security-First Approach to Delivering End User Computing Services – Learn how AWS improves security and reduces cost by moving data to the cloud while providing secure, fast access to desktop applications and data.


July 30, 2019 | 11:00 AM – 12:00 PM PT – Security Spotlight: Best Practices for Edge Security with Amazon FreeRTOS – Learn best practices for building a secure embedded IoT project with Amazon FreeRTOS.

Machine Learning

July 23, 2019 | 9:00 AM – 10:00 AM PT – Get Started with Machine Learning: Introducing AWS DeepLens, 2019 Edition – Learn the basics of machine learning through building computer vision apps with the new AWS DeepLens.

August 1, 2019 | 9:00 AM – 10:00 AM PT – Implementing Machine Learning Solutions with Amazon SageMaker – Learn how machine learning with Amazon SageMaker can be used to solve industry problems.


July 31, 2019 | 9:00 AM – 10:00 AM PT – Best Practices for Android Authentication on AWS with AWS Amplify – Learn the basics of Android authentication on AWS and leverage the built-in AWS Amplify Authentication modules to provide user authentication in just a few lines of code.

Networking & Content Delivery

July 23, 2019 | 11:00 AM – 12:00 PM PT – Simplify Traffic Monitoring and Visibility with Amazon VPC Traffic Mirroring – Learn to easily mirror your VPC traffic to monitor and secure traffic in real-time with monitoring appliances of your choice.

Productivity & Business Solutions

July 30, 2019 | 1:00 PM – 2:00 PM PT – Get Started in Minutes with Amazon Connect in Your Contact Center – See how easy it is to get started with Amazon Connect, based on the same technology used by Amazon Customer Service to power millions of customer conversations.


July 25, 2019 | 11:00 AM – 12:00 PM PT – Deploying Robotic Simulations Using Machine Learning with Nvidia JetBot and AWS RoboMaker – Learn how to deploy robotic simulations (and find dinosaurs along the way) using machine learning with Nvidia JetBot and AWS RoboMaker.

Security, Identity & Compliance

July 24, 2019 | 9:00 AM – 10:00 AM PT – Deep Dive on AWS Certificate Manager Private CA – Creating and Managing Root and Subordinate Certificate Authorities – Learn how to quickly and easily create a complete CA hierarchy, including root and subordinate CAs, with no need for external CAs.


July 24, 2019 | 1:00 PM – 2:00 PM PT – Getting Started with AWS Lambda and Serverless Computing – Learn how to run code without provisioning or managing servers with AWS Lambda.

from AWS News Blog

How BYBE is Disrupting the Adult Beverage Industry

How BYBE is Disrupting the Adult Beverage Industry


Kevin Mack, BYBE CXO and Co-Founder, has always been fascinated with tightly regulated industries—and, with his technology background, he saw the restrictions on alcohol as an exciting engineering problem waiting to be solved. “Where most people see the adult beverage industry as this hard area to get into because of all the restrictions, I saw them as almost technical requirements,” Mack says.

Mack’s co-founder, Drew Knight, previously worked for a beer, wine, and spirits distributor, supplying major brands like Robert Mondavi Wines, Corona, and Kahlua to some of the biggest retailers in the United States. As with other products, these retailers would sometimes offer their customers rebates for alcohol purchases to boost sales. When Knight first started out, these rebates were all in paper form, but soon he noticed that all kinds of rebates, coupons, and rewards were becoming digitized—both via retailers’ own apps and third-party apps, like Shopkick and Checkout 51. Knight says it “made sense” that the large retailers to whom he distributed wanted to promote their own apps rather than the third-party apps. “But,” he says, “there was never a way for digital rebates to be included directly inside their platforms in a legal way.”

That’s because alcohol, in addition to being one of the largest revenue drivers in retail, is also one of the most highly regulated industries in the country, with restrictions that vary from state to state. Knight perceived that these varying restrictions made alcohol difficult to sell via apps, resulting in missed sales opportunities for retailers. “That was really the inspiration and driving force,” he says.

BYBE simplifies digital alcohol promotions by embedding post-purchase rebates for beer, wine, and spirits inside popular retail apps and websites. “We integrate directly in their backend systems to provide discounts on the adult beverage category,” Mack explains. “Because of BYBE, retailers can now show beer, wine, and spirits rebates directly inside their applications, the same way that you see discounts for any other category.”

BYBE also eliminates the hassle of mail-in rebates for consumers with its freestanding BYBE App. Consumers can use the app to browse through available offers, and after purchasing a product, they can simply upload a photo of their receipt to receive their rebate via PayPal or prepaid Mastercard within 48 hours.

The app can also introduce users to new wines, beers, and spirits that they might not otherwise encounter or think to try. “A lot of times it’s awareness,” Mack says. “There’s new releases coming out all the time and you may not know about it.”

BYBE has already proven that its technology can work with multiple retailers (including Target and Speedway, two of the biggest alcohol retailers in the country). Now, Knight says, “it’s about transforming that product into a company, creating scalable processes to drive growth.” The next product on the horizon is the BYBE Dashboard, which gives BYBE real-time visibility into purchases and rebates as they are processed.

“Overall,” Knight asserts, “digital presence for beer, wine, spirits is critical to the success of the category. Right now, beer, wine, spirits is slow to transition to ecommerce and digital relative to other moving consumer categories.” He is confident that BYBE will be instrumental in getting that category up to speed.

from AWS Startups Blog

Introducing the Amazon Corretto Crypto Provider (ACCP) for Improved Cryptography Performance

Introducing the Amazon Corretto Crypto Provider (ACCP) for Improved Cryptography Performance

The Amazon Corretto Crypto Provider (ACCP), a cryptography performance improvement for Amazon Corretto, is now available. Historically, Java cryptography has been CPU-intensive, resulting in slow performance and elevated operational costs. ACCP updates dozens of cryptographic algorithms, accelerating cryptographic workloads.

from Recent Announcements

Tech Innovation Can Lead a Retail Rebirth Driving Business Growth

Tech Innovation Can Lead a Retail Rebirth Driving Business Growth


Yet the challenge today remains: how do you experiment and stay ahead of evolving customer expectations while also ensuring operational and security best practices are met? Clearly, a strong IT strategy is needed to support the ability to counteract competitive pressures through innovation.

Customers sit in the catbird seat when it comes to today’s retail environment. From the ability to compare prices in real-time while standing in the store aisle to demanding unique shopping experiences, meeting rapidly evolving customer expectations is vital to success. Yet the operative word here is evolving, which means that standing still or resting on your laurels is not an option.

The Innovation Lab

A quick look at this year’s NRF 100 retailers reveals a lengthy laundry list of innovations leading retailers are experimenting with to address customer expectations and reduce overhead. Everything from smart buildings and self-checkout that reduce operational costs to artificial intelligence that helps monitor stock levels on store shelves, and apps like mobile eCommerce, loyalty, virtual fit, and more that enhance the customer experience, is being tried. Which innovations are right for you depends on your business strategy and customer needs.

Let’s look at how a few retailers applied technology transformation for business innovation, in the process helping address specific business goals.

Specialty Retailer Grows IT Automation for Faster Innovation

This specialty home goods retailer was looking to enable its in-house development team to stay nimble and one step ahead of the competition. Tasked with servicing the organization’s eCommerce site and in-store systems, the company sought to grow developer and IT automation, increasing their productivity and the ability to quickly iterate on innovation.

To do so, with the help of Flux7 AWS consultants, the retailer moved to a container-based cloud environment that provides the desired level of DevOps automation, and an immutable infrastructure that encourages greater development innovation. Moving away from an environment where every server was built from scratch and manually patched, the new solution saves countless hours of manual labor, using Ansible Playbooks, Docker Swarm and more for a completely automated software provisioning and cluster setup.

In addition, Flux7 and the retailer fully automated the company’s CI/CD code pipeline and deployed HashiCorp Vault and Consul for secrets management and service discovery. With blue-green development techniques, the retailer is reducing the likelihood of interruptions during patching or upgrade activities and has set itself up to grow its website elasticity and better meet daily and seasonal traffic peaks with greater cost control.

Rent-A-Center Streamlines New Revenue Opportunities

IT is increasingly tasked with facilitating new revenue streams for the business. Rent-A-Center (RAC) saw the opportunity to simultaneously streamline its partner sales and conduct a cloud migration proof-of-concept. With the help of the Flux7 DevOps team, RAC created a sales portal where partners could easily access inventory and close sales on their showroom floors.

The new PCI-compliant partner portal features five-nines availability and has given the RAC development team the ability to design and build at the speed of the market. IT is now a business enabler and a provider of direct business value, giving the organization a means to build solutions that outpace customer expectations. The new portal not only increased RAC partner sales but was just the proof of concept it needed for a larger initiative to transform its IT function.

RAC Unveils Autoscaling eCommerce Platform

Following its POC, RAC was interested in quickly introducing a new customer-facing eCommerce platform that was secure, PCI compliant, and highly scalable to ensure it would cater to online web-based demand, especially in its peak season. The goal was to introduce an eCommerce platform that would support the entire online shopping workflow using SAP Hybris. Working with the Flux7 DevOps team, RAC was able to implement an AWS microservices architecture solution with a cluster of Hybris servers which would cater to online web-based demand.

With AWS ECS as a backbone technology, RAC has been able to deploy a Hybris setup with automatic scaling, self-healing, one-click deployment, CI/CD, and PCI compliance consistent with the company’s latest technology guidelines and meeting the requirements of its culture of DevOps and extreme agility. As just one proof point of the new platform’s success: over nine million people, a 42% increase in traffic, visited the ecommerce site over the Black Friday weekend without a single hiccup.

Growing Loyalty and Customer Lifetime Value

A home furnishing retailer approached the Flux7 AWS consulting team to help it migrate its loyalty management software from legacy IBM hardware to Linux and then automate its configuration. Once the software deployment was automated, the specialty retailer was able to retire its old hardware, and migrate to the cloud. The new cloud-based system replaces a highly manual process with IT automation that allows developers to focus on innovation, rather than building new dev instances and/or managing the request and provisioning process.

The overarching goal for this specialty retailer is to improve the customer experience through an always available loyalty program. Moreover, the second phase of the company’s efforts, an AWS migration, will be designed to bring DevOps automation that allows for faster innovation and time-to-market and reduced maintenance. With greater consistency and flexibility, this retailer can provide a stable production environment that, despite maintenance, upgrades, and other changes, is able to deliver an experience to its customers with little to no disruption. IT transformation is a journey, and this retailer is on its way, beginning with the customer experience.

Whether you are a retailer that is actively reimagining the customer experience, or just getting started on the path to technology transformation, a solid strategy that supports innovation at the speed of the market is critical to ongoing success. At Flux7, we help retailers experiment more, fail cheap, and measure results accurately in the digital world to further the goals of the customer-centric Agile Enterprise. To learn more about leading your agile efforts, Get a Quote today.

Contact Us for a Quote

from Flux7 DevOps Blog

AWS Security Profile: Rustan Leino, Senior Principal Applied Scientist



I recently sat down with Rustan from the Automated Reasoning Group (ARG) at AWS to learn more about the prestigious Computer Aided Verification (CAV) Award that he received, and to understand the work that led to the prize. CAV is a top international conference on formal verification of software and hardware. It brings together experts in this field to discuss groundbreaking research and applications of formal verification in both academia and industry. Rustan received this award as a result of his work developing program-verification technology. Rustan and his team have taken his research and applied it in unique ways to protect AWS core infrastructure on which customers run their most sensitive applications. He shared details about his journey in the formal verification space, the significance of the CAV award, and how he plans to continue scaling formal verification for cloud security at AWS.

Congratulations on your CAV Award! Can you tell us a little bit about the significance of the award and why you received it?

Thanks! I am thrilled to jointly receive this award with Jean-Christophe Filliâtre, who works at the CNRS Research Laboratory in France. The CAV Award recognizes fundamental contributions to program verification, that is, the field of mathematically proving the correctness of software and hardware. Jean-Christophe and I were recognized for the building of intermediate verification languages (IVL), which are a central building block of modern program verifiers.

It’s like this: the world relies on software, and the world relies on that software to function correctly. Software is written by software engineers using some programming language. If the engineers want to check, with mathematical precision, that a piece of software always does what it is intended to do, then they use a program verifier for the programming language at hand. IVLs have accelerated the building of program verifiers for many languages. So, IVLs aid the construction of program verifiers which, in turn, improve software quality that, in turn, makes technology more reliable for all.

What is your role at AWS? How are you applying technologies you’ve been recognized by CAV for at AWS?

I am building and applying proof tools to ensure the correctness and security of various critical components of AWS. This lets us deliver a better and safer experience for our customers. Several tools that we apply are based on IVLs. Among them are the SideTrail verifier for timing-based attacks, the VCC verifier for concurrent systems code, and the verification-aware programming language Dafny, all of which are built on my IVL named Boogie.

What does an automated program verification tool do?

An automated program verifier is a tool that checks if a program behaves as intended. More precisely, the verifier tries to construct a correctness proof that shows that the code meets the given specification. Specifications include things like “data at rest on disk drives is always encrypted,” or “the event-handler always eventually returns control back to the caller,” or “the API method returns a properly formatted buffer encrypted under the current session key.” If the verifier detects a discrepancy (that is, a bug), the developer responds by fixing the code. Sometimes, the verifier can’t determine what the answer is. In this case, the developer can respond by helping the tool with additional information, so-called proof hints, until the tool is able to complete the correctness proof or find another discrepancy.

For example, picture a developer who is writing a program. The program is like a letter written in a word processor, but the letter is written in a language that the computer can understand. For cloud security, say the program manages a set of data keys and takes requests to encrypt data under those keys. The developer writes down the intention that each encryption request must use a different key. This is the specification: the what.

Next, the developer writes code that instructs the computer how to respond to a request. The code separates the keys into two lists. An encryption request takes a key from the “not used” list, encrypts the given data, and then places the key on the “used” list.

To see that the code in this example meets the specification, it is crucial to understand the roles of the two lists. A program verifier might not figure this out by itself and would then indicate the part of the code it can’t verify, much like a spell-checker underlines spelling and grammar mistakes in a letter you write. To help the program verifier along, the developer provides a proof hint that says that the keys on the “not used” list have never been returned. The verifier checks that the proof hint is correct and then, using this hint, is able to construct the proof that the code meets the specification.
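The key-manager example above can be sketched in plain Python (not a verification language like Dafny) to make the specification and the role of the two lists concrete. The class names and the `enc(...)` placeholder are illustrative only; a real verifier would prove the property statically rather than check it at runtime, as the assertion here does.

```python
# Toy illustration of the key-manager example: the *specification* is that
# no key is ever used for two encryption requests. A program verifier would
# prove this statically; here we merely check it at runtime.
class KeyManager:
    def __init__(self, keys):
        self.not_used = list(keys)   # keys never yet returned to a caller
        self.used = []               # keys already consumed by a request

    def encrypt(self, data):
        # Take a fresh key; moving it to the "used" list is the invariant
        # that lets us argue the specification holds.
        key = self.not_used.pop()
        self.used.append(key)
        return (key, f"enc({key},{data})")  # stand-in for real encryption

# The "specification" as a runtime check: every request used a distinct key.
km = KeyManager(["k1", "k2", "k3"])
keys_seen = [km.encrypt(d)[0] for d in ("a", "b", "c")]
assert len(keys_seen) == len(set(keys_seen)), "a key was reused!"
```

The proof hint the developer supplies in the verified version corresponds to the invariant noted in the comment: keys on the "not used" list have never been returned.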

You’ve designed several verification tools in your career. Can you share how you’re using verification tools such as Dafny and Boogie to provide higher assurances for AWS infrastructure?

Dafny is a Java-like programming language that was designed with verification in mind. Whereas most programming languages only allow you to write code, Dafny allows you to write specifications and code at the same time. In addition, Dafny allows you to write proof hints (in fact, you can write entire proofs). Having specifications, code, and proofs in one language sets you up for an integrated verification experience. But this would remain an intellectual exercise without an automated program verifier. The Dafny language was designed alongside its automated program verifier. When you write a Dafny program, the verifier constantly runs in the background and points out mistakes as you go along, very much like the spell-checker underlines I alluded to. Internally, the Dafny verifier is based on the Boogie IVL.

At AWS, we’re currently using Dafny to write and prove a variety of security-critical libraries. For example: encryption libraries. Encryption is vital for keeping customer data safe, so it makes for a great place to focus energy on formal verification.

You spent time in scientific research roles before joining AWS. Has your experience at AWS caused you to see scientific challenges in a different way now?

I began my career in 1989 in the Microsoft Windows LAN Manager team. Based on my experiences helping to network computers together, I became convinced that formally proving the correctness of programs was going to go from a “nice to have” to a “must have” in the future, because of the need for more security in a world where computers are so interconnected. At the time, the tools and techniques for proving programs correct were so rudimentary that the only safe harbor for this type of work was in esoteric research laboratories. Thus, that’s where I could be found. But these days, the tools are increasingly scalable and usable, so finally I made the jump back into development where I’m leading efforts to apply and operationalize this approach, and also to continue my research based on the problems that arise as we do so.

One of the challenges we had in the 1990s and 2000s was that few people knew how to use the tools, even if they did exist. Thus, while in research laboratories, an additional focus of mine has been on making tools that are so easy to use that they can be used in university education. Now, with dozens of universities using my tools and after several eye-opening successes with the Dafny language and verifier, I’m scaling these efforts up with development teams in AWS that can hire the students who are trained with Dafny.

I alluded to continuing research. There are still scientific challenges to make specifications more expressive and more concise, to design programming languages more streamlined for verification, and to make tools more automated, faster, and more predictable. But there’s an equally large challenge in influencing the software engineering process. The two are intimately linked, and cannot be teased apart. Only by changing the process can we hope for larger improvements in software engineering. Our application of formal verification at AWS is teaching us a lot about this challenge. We like to think we’re changing the software engineering world.

What are the next big challenges that we need to tackle in cloud security? How will automated reasoning play a role?

There is a lot of important software to verify. This excites me tremendously. As I see it, the only way we can scale is to distribute the verification effort beyond the verification community, and to get usable verification tools into the hands of software engineers. Tooling can help put the concerns of security engineers into everyday development. To meet this challenge, we need to provide appropriate training and we need to make tools as seamless as possible for engineers to use.

I hear your YouTube channel, Verification Corner, is loved by engineering students. What’s the next video you’ll be creating?

[Rustan laughs] Yes, Verification Corner has been a fun way for me to teach about verification, and I receive appreciation from people around the world who have learned something from these videos. The episodes tend to focus on learning concepts of program verification. These concepts are important to all software engineers, and Verification Corner shows the concepts in the context of small (and sometimes beautiful) programs. Beyond learning the concepts in isolation, it’s also important to see the concepts in use in larger programs, to help engineers apply the concepts. I want to devote some future Verification Corner episodes to showing verification “in the trenches”; that is, the application of verification in larger, real-life (and sometimes not so beautiful) programs for cloud security, as we’re continuing to do at AWS.

Want more AWS Security how-to content, news, and feature announcements? Follow us on Twitter.


Supriya Anand

Supriya is a Senior Digital Strategist at AWS.

from AWS Security Blog

Fighting Financial Crime with Hawk:AI


Money laundering costs the world 2.7% of global GDP annually. Financial institutions spend over $200 billion per year on anti-money laundering compliance, while any misstep will mean painful fines and reputational damage.

hawk:AI, a growing fintech company, was co-founded by Tobias Schweiger and Wolfgang Berner in 2018 with one mission: ending financial crime. Its money laundering detection and investigation platform uses sophisticated machine learning techniques to help financial institutions prevent financial crime. The Munich-based startup also drastically increases process efficiency through its solutions built on AWS.

Founding hawk:AI felt almost imperative to the founding team once they realized the extent of the problem. Schweiger explains, “We simply saw the opportunity to target a huge market while also solving for a critical problem to society.”

hawk:AI differentiates itself by utilizing the AWS Cloud to achieve flexibility and speed in its solution, including using machine learning to reduce time spent per investigation and increase the percentage of money laundering it is able to detect. The hawk:AI team appreciates that AWS services are scalable and available in modular capacities; they rely on several AWS services in their process.

The large quantity of data used in hawk:AI’s process is housed in Amazon S3, and Amazon SageMaker is used to reason over this data. Specifically, the hawk:AI data science team uses SageMaker to quickly achieve analytics capabilities without the infrastructure management that a different solution might require. They appreciate that SageMaker can handle many aspects of their machine learning workflow, from analytics to verification of the trained models. Compared to a manual process, the team estimates that they can mobilize and deploy solutions over 30% faster, which often marks the difference between successfully fighting money laundering and failing.
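To make the transaction-monitoring idea concrete, here is a deliberately toy sketch in Python. hawk:AI’s actual models are proprietary and trained in SageMaker; the rules, country codes, and thresholds below are invented purely to show the overall shape: score each transaction and surface only high-risk ones to an investigator.

```python
# Purely illustrative toy scorer -- hawk:AI's real models are proprietary.
# This only shows the *shape* of transaction monitoring: score each
# transaction, then surface high-risk ones to a human investigator.
def risk_score(txn, history_avg):
    """Return a 0..1 risk score for a transaction (toy heuristics)."""
    score = 0.0
    if txn["amount"] > 10 * history_avg:      # unusually large transfer
        score += 0.5
    if txn["country"] in {"XX", "YY"}:        # hypothetical high-risk codes
        score += 0.3
    if txn["amount"] % 1000 == 0:             # suspiciously round amount
        score += 0.2
    return min(score, 1.0)

def triage(transactions, history_avg, threshold=0.5):
    """Keep only transactions an investigator should review."""
    return [t for t in transactions if risk_score(t, history_avg) >= threshold]

txns = [
    {"id": 1, "amount": 120.50, "country": "DE"},
    {"id": 2, "amount": 50000.0, "country": "XX"},
]
flagged = triage(txns, history_avg=200.0)  # only the second txn is flagged
```

The efficiency gain described above comes from exactly this kind of triage: investigators spend time only on the small flagged subset rather than on every transaction.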

“We chose AWS for multiple reasons, including its security and compliance capabilities, its broad adoption, and hence trust, in the financial services space, and its state-of-the-art machine learning offerings,” says Wolfgang Berner, CTO/CPO at hawk:AI. “Going forward, we’re excited to do more with additional AWS services, as the extensive machine learning, global deployment options, and infrastructure support are unmatched for our needs.”

from AWS Startups Blog

Intuit: Serving Millions of Global Customers with Amazon Connect


Recently, Bill Schuller, Intuit Contact Center Domain Architect, met with AWS’s Simon Elisha to discuss how Intuit manages its customer contact centers with Amazon Connect.

As a 35-year-old company with an international customer base, Intuit is widely known as the maker of QuickBooks and TurboTax, among other software products. Its 50 million customers can access its global contact centers not just for password resets and feature explanations, but for detailed tax interpretation and advice. As you can imagine, this presents a challenge of scale.

Using Amazon Connect, a self-service, cloud-based contact center service, Intuit has been able to provide a seamless call-in experience to Intuit customers from around the globe. When a customer calls in to Amazon Connect, Intuit is able to do a “data dip” through AWS Lambda out to the company’s CRM system (in this case, Salesforce) in order to get more information from the customer. At this point, Intuit can leverage other services like Amazon Lex for natural language feedback and then get the customer to the right person who can help. When the call is over, instead of having that important recording of the call locked up in a proprietary system, the audio is moved into an S3 bucket, where Intuit can do some post-call processing. It can also be sent out to third parties for analysis, or Intuit can use Amazon Transcribe or Amazon Comprehend to get a transcription or sentiment analysis to understand more about what happened during that particular call.
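A “data dip” Lambda like the one described above can be sketched as follows. The event shape (`Details`/`ContactData`/`CustomerEndpoint`) follows Amazon Connect’s Lambda invocation format; the in-memory lookup is a hypothetical stand-in for the real Salesforce query, and the attribute names are examples, not Intuit’s.

```python
# Hedged sketch of a "data dip" Lambda for Amazon Connect. The event shape
# follows Connect's Lambda invocation format; the CRM lookup below is a
# stand-in for a real Salesforce query.
def fake_crm_lookup(phone):
    # Hypothetical in-memory "CRM" used only for this illustration.
    crm = {"+15555550100": {"name": "Pat Doe", "segment": "TurboTax"}}
    return crm.get(phone, {"name": "Unknown", "segment": "General"})

def lambda_handler(event, context):
    # Phone number of the caller, as passed by Amazon Connect.
    phone = event["Details"]["ContactData"]["CustomerEndpoint"]["Address"]

    # A real handler would query the CRM over the network here.
    customer = fake_crm_lookup(phone)

    # Connect expects a flat dict of string attributes usable in the flow,
    # e.g. to route the caller to the right queue.
    return {
        "CustomerName": customer["name"],
        "Segment": customer["segment"],
    }
```

The returned attributes become contact attributes in the Connect flow, which is what lets the flow route the caller to the right person.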

Watch the video below to understand the reasons why Intuit decided on this set of AWS services (hint: it has to do with the ability to experiment with speed and scale but without the cost overhead).

Check out more videos in the This Is My Architecture series.

About the author

Annik Stahl is a Senior Program Manager in AWS, specializing in blog and magazine content as well as customer ratings and satisfaction. Having been the face of Microsoft Office for 10 years as the Crabby Office Lady columnist, she loves getting to know her customers and wants to hear from you.

from AWS Architecture Blog

Hiring for Culture – Using Amazon’s Hiring Process to Build Your Team


Guest post by Richard Howard, Startup Business Development 

Hiring the right people for your startup is one of the most important things that you will do as a founder. Beyond finding product/market fit, it’s probably the most important thing that you’ll do. Before joining AWS, I’d interviewed and hired a bunch of people as the CEO and Co-Founder of the live event startup Shortcut. Amazon, however, is the only employer that has actually taught and trained me how to properly interview and hire. I’d now like to share some of those lessons here because I think they are critical and particularly applicable to startups.


The first stage of hiring the right people for your startup is to define your culture. How can you assess somebody for cultural fit if you don’t really have a culture? At Amazon, our culture is defined by our 14 Leadership Principles. These principles have evolved over time based on Amazon’s growth, needs, and learnings, so don’t feel like you need to copy them word for word. In my view, the Amazon principles that are most applicable to startups are “Customer Obsession,” “Bias for Action,” and “Ownership.”

However you define your culture, whether it’s with leadership principles or something else, you can’t just hire for it and leave it alone. Using those cultural principles is how you should assess potential hires, judge people’s performance, and think about new initiatives. If you hire for culture but then don’t reinforce it, your culture will be defined by your noisiest and most forceful employees.

The Interview

Amazon hires almost exclusively for cultural fit, so the interview is all about assessing for that. We have a question bank that matches questions to a particular leadership principle we’re looking for. For example, if I were checking for “Bias for Action,” I might ask: “Tell me about a time when you worked against tight deadlines and didn’t have time to consider all options before making a decision” or “Describe a situation where you made an important business decision without consulting your manager.”

This type of resource is incredibly valuable for your startup. Define your culture, then build out the interview questions that will correspond to that culture. That way, you’ll know that each interviewee is subject to the same criteria and that you’re judging them fairly.
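A question bank like this is simple to build. The sketch below shows one possible shape in Python; the principles and questions are examples modeled on the ones quoted above, not Amazon’s actual bank.

```python
# A minimal question bank keyed by cultural principle. The principles and
# questions here are illustrative examples, not Amazon's actual bank.
import random

QUESTION_BANK = {
    "Bias for Action": [
        "Tell me about a time when you worked against tight deadlines and "
        "didn't have time to consider all options before making a decision.",
        "Describe a situation where you made an important business decision "
        "without consulting your manager.",
    ],
    "Customer Obsession": [
        "Tell me about a time you went out of your way to fix a "
        "customer's problem.",
    ],
}

def pick_question(principle, rng=random):
    """Pick one question for the principle this interviewer is assessing."""
    return rng.choice(QUESTION_BANK[principle])
```

Because every interviewer draws from the same bank for a given principle, each interviewee is subject to the same criteria, which is the fairness property the text describes.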

You may have noticed that the Amazon questions are not hypotheticals like “What would you do in X situation?” Ask for real-world examples of things that the person has done. That way you’ll get a real sense of their ability rather than their rose-tinted view of themselves.

The Decision

During an in-person interview loop—Amazon’s term for a series of candidate interviews—a candidate will be interviewed by roughly five different Amazonians, all looking for different leadership principles. We’re seeing whether this person ‘raises the bar’ on the current team members. Meeting the bar is not enough. Think of that from your startup’s perspective – your team and your culture are of vital importance, but how are you going to improve if you keep hiring people who are only as good as everyone you already have? You must be constantly trying to hire better and better people.

Your process should reflect the stage and size of your team. If you’re a three-person startup, then it makes sense for everyone to interview the person looking to become the fourth. If you’re a 40-person startup, maybe it’s three or four people that do the interviews.

Whatever your process is, make sure that you’re judging people fairly according to your cultural principles. Otherwise you’ll just end up with an office of people who think and act a lot like you.


Airbnb CEO Brian Chesky famously interviewed the first 300 employees at Airbnb until he became such a bottleneck that they had to remove that step. It makes sense to interview the first 50 – 100 people at your startup. If you’re really going to scale, those people are going to be your cultural bedrock and it makes sense for you to have final say on whether they are or are not a good fit.

Once you get past 100 people, though, you become the bottleneck. That’s when training becomes incredibly important. You’ll want to know that the people interviewing the next 500 employees will have the same high standards as you. That’s when things like really defining your culture and having a question bank that people can refer to are critical. At Amazon, each interview also has an independent bar-raiser who is specially trained to assess whether the candidate is indeed raising the bar for the company.

As the founder, you should probably do the first rounds of training. That way, rather than act as a bottleneck on the hiring process, you’re scaling yourself by training the next generation of hirers.


The way that Amazon interviews and hires is one of the most important things that I’ve learned here. It’s also one of the most important things that I can see is missing from a lot of the startups that I’ve worked with or meet in my current role. Really, it comes down to a few things that are easy to remember: set the culture, hire for it, don’t ask hypotheticals, and constantly raise the bar with each new hire. Do that and you’ll have an incredible team in no time.

from AWS Startups Blog