TechTalks With Tom Smith: Tools and Techniques for Migration

Time to pack it up and move it out. Or, rather, up.

To understand the current state of migrating legacy apps to microservices, we spoke to IT executives from 18 different companies. We asked, “What are the most effective techniques and tools for migrating legacy apps to microservices?” Here’s what we learned:

You may also enjoy: The Best Cloud Migration Approach: Lift-And-Shift, Replatform, Or Refactor?

Kubernetes

  • We did everything with Docker and Kubernetes; the datastore was PostgreSQL, though that depends on the use case. We went through an era of container wars, and Docker/Kubernetes won. It's less about which technology you're going to use and more about how you're going to get there and get your team aligned. It's more about the journey than the tooling; alignment and culture are the hardest part.
  • Spring and Java tend to be the most highly regarded stacks. We're seeing a shift to serverless with Lambda and to K8s in the cloud. CloudWatch can watch all your containers for you. Use best-in-class CI/CD tools.
  • A manual process is required. In addition to decomposing legacy apps into components, there is a challenge in understanding how the legacy app acquires its configuration and how to pass identical configuration to the microservice. If you use K8s as your target platform, it supports two of the three ways of mapping configuration to a legacy app; it requires thinking and design work and is not easily automated (see the configuration sketch after this list).
  • K8s is the deployment vehicle of choice. On the app side, we are seeing features or sub-services move into cloud-native design one by one. People start with the least critical and gain confidence running in a dual, hybrid mode. Start with one or two services and quickly get to 10, 20, and more, gaining scalability, fault tolerance, and geographic distribution. Day-two operations are quite different than with a traditional legacy app.
  • Docker and K8s are the big ones; everything else on the infrastructure side is a derivative.
  • Containerizing our workloads and standardizing all runtime aspects on K8s primitives has been one of the biggest factors in this effort. These aspects include the environment variable override scheme, secrets handling, service discovery, load balancing, automatic failover procedures, network encryption, access control, process scheduling, joining nodes to a cluster, monitoring, and more. Each of them had been handled in a bespoke way in the past for each service. Abstracting the infrastructure further away from the operating system has also contributed to the portability of the product. We're employing a message delivery system to propagate events as the main asynchronous interaction between services. The event payloads are encoded using a Protocol Buffers schema, which gives us a contract model between the services that is easy to maintain and evolve. A nice property of that technique is also the type safety and ease of use that come with the generated model (see the event-consumption sketch after this list). We primarily use Java as our runtime of choice. Adopting Spring Boot has helped standardize how we externalize and consume configuration and allowed us to hook into an existing ecosystem of available integrations (like Micrometer, gRPC, etc.). We adopted Project Reactor as our reactive programming library of choice. The learning curve was steep, but it has helped us apply common, principled solutions to very complex problems and greatly contributes to the resilience of the system.
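
To make the configuration point concrete, here is a minimal sketch of externalized configuration in a Spring Boot service, assuming a hypothetical billing service: the same keys a legacy app might have read from a local properties file can be supplied by K8s as environment variables or a mounted ConfigMap, so the microservice receives identical configuration without code changes.

```java
// Minimal sketch of externalized configuration in a Spring Boot service.
// The "billing" prefix and property names are hypothetical. The key a legacy
// app read from a properties file (billing.url) can be overridden by K8s via
// the environment variable BILLING_URL or a mounted ConfigMap.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;

@SpringBootApplication
@EnableConfigurationProperties(BillingServiceApplication.BillingProperties.class)
public class BillingServiceApplication {

    @ConfigurationProperties(prefix = "billing")
    public static class BillingProperties {
        private String url;       // billing.url  -> env var BILLING_URL
        private int timeout = 30; // billing.timeout, with a sensible default

        public String getUrl() { return url; }
        public void setUrl(String url) { this.url = url; }
        public int getTimeout() { return timeout; }
        public void setTimeout(int timeout) { this.timeout = timeout; }
    }

    public static void main(String[] args) {
        SpringApplication.run(BillingServiceApplication.class, args);
    }
}
```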
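
And here is a minimal sketch of the event-consumption style described above, assuming a hypothetical OrderCreated message generated from a Protocol Buffers schema and a Flux of raw payloads supplied by whatever message delivery system is in use; the generated parser is what provides the type safety mentioned.

```java
// Minimal sketch of consuming Protocol Buffers-encoded events with Project
// Reactor. OrderCreated is a hypothetical class generated from a .proto
// schema; the eventBytes Flux stands in for the message delivery system.
import com.google.protobuf.InvalidProtocolBufferException;
import reactor.core.publisher.Flux;

public class OrderEventConsumer {

    public Flux<OrderCreated> orderEvents(Flux<byte[]> eventBytes) {
        return eventBytes
                .map(this::decode)                      // bytes -> typed, generated model
                .onErrorContinue((err, payload) ->      // skip undecodable events, keep the stream alive
                        System.err.println("Dropping bad event: " + err.getMessage()))
                .doOnNext(e -> System.out.println("Order created: " + e.getOrderId()));
    }

    private OrderCreated decode(byte[] bytes) {
        try {
            return OrderCreated.parseFrom(bytes);       // generated parser gives type safety
        } catch (InvalidProtocolBufferException ex) {
            throw new IllegalArgumentException(ex);
        }
    }
}
```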

Docker

  • Containers (Docker) and container orchestration platforms help a lot in managing the microservices architecture and its dependencies, as do tools for generating microservice client and server APIs. Examples would be Swagger for REST APIs and Google's Protocol Buffers for internal microservice-to-microservice APIs (see the sketch below).
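
As a concrete illustration of the API-documentation tooling mentioned above, here is a minimal sketch of a Swagger/OpenAPI-annotated Spring REST controller, assuming the springdoc-openapi starter is on the classpath so the spec and Swagger UI are generated automatically; the CustomerController and its route are hypothetical.

```java
// Minimal sketch of documenting a REST API with OpenAPI/Swagger annotations.
// CustomerController and its route are hypothetical examples.
import io.swagger.v3.oas.annotations.Operation;
import io.swagger.v3.oas.annotations.tags.Tag;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
@Tag(name = "customers", description = "Customer lookup API")
public class CustomerController {

    @Operation(summary = "Fetch a customer by id",
               description = "Returns a single customer record from the customer microservice.")
    @GetMapping("/customers/{id}")
    public Customer getCustomer(@PathVariable("id") String id) {
        // In a real service this would call the domain layer; here we return a stub.
        return new Customer(id, "Ada Lovelace");
    }

    record Customer(String id, String name) {}
}
```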

Prometheus

  • K8s, plus debugging and tracing tools like Prometheus, controllers, and logging. For getting traffic into K8s, and for efficient communication within K8s, Envoy acts as a sidecar proxy for networking functions (routing, caching). You also need API gateway metering, API management, API monitoring, and a dev portal: a gateway that's lightweight, flexible, and portable, taking east-west traffic into account with micro-gateways.
  • When going to microservices, you need to think about scale: there are so many APIs, and you have to keep up with them and their metrics. You might want to use Prometheus, or CloudWatch if you're going to Lambda. When going to microservices, you have to bring open telemetry, tracing, and logging (see the metrics sketch after this list). On a monolithic application, as a developer I can attach a debugger to a binary; I can't do that with microservices. With microservices, every outage is like a murder mystery. This is a big issue for serverless and microservices.
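
To illustrate the metrics side, here is a minimal sketch of exposing Prometheus metrics with Micrometer; the meter names and the checkout example are hypothetical, and in a Spring Boot service the registry and scrape endpoint would normally be auto-configured rather than built by hand like this.

```java
// Minimal sketch of exposing service metrics to Prometheus with Micrometer.
// Meter names and the checkout example are hypothetical.
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.Timer;
import io.micrometer.prometheus.PrometheusConfig;
import io.micrometer.prometheus.PrometheusMeterRegistry;

public class CheckoutMetrics {

    public static void main(String[] args) {
        PrometheusMeterRegistry registry = new PrometheusMeterRegistry(PrometheusConfig.DEFAULT);

        Counter orders = Counter.builder("orders.completed")
                .description("Number of completed orders")
                .tag("service", "checkout")
                .register(registry);

        Timer latency = Timer.builder("checkout.latency")
                .description("Time spent processing a checkout")
                .register(registry);

        // Record one fake request so the scrape output has data.
        latency.record(() -> orders.increment());

        // This is the text a Prometheus server would scrape from the service.
        System.out.println(registry.scrape());
    }
}
```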

Other

  • There are a couple. It comes down to embracing CI/CD; otherwise, you're going to have trouble. Consider the build and delivery pipeline as part of the microservice itself. In a monolith, everything is in one place; with microservices, you scatter functionality all over the place, more independent and less tightly bound. The pipeline becomes an artifact. You need to provide test coverage across microservices as permutations grow. Automation begins at the developer's desk.
  • There are many tools that help the migration to microservices, such as Spring Boot and Microprofile, both of which simplify the development of standalone apps in a microservices architecture. Stream processing technologies are becoming increasingly popular for building microservices, as there is a lot of overlap between a “stream-oriented” architecture and a “microservices-oriented” architecture. In-memory technology platforms are useful when high-performance microservices are a key part of the architecture.
  • Legacy apps should be migrated to independent, loosely coupled services through gradual decomposition, by splitting off capabilities using a strangler pattern. Select functionality based on a domain with clear boundaries that need to be modified or scaled independently. Make sure your teams are organized to build and support their portions of the application, avoid dependencies and bottlenecks that minimize the benefits of microservice adoption, and take advantage of infrastructure-level tools for monitoring, log management, and other supporting capabilities.
  • There are many tools available, such as Amazon's AWS Application Discovery Service, which will help IT understand your applications and workloads, while others help IT with server and database migration. Microsoft Azure has tools that help you define the business case and understand your current applications and workloads. There is an entire ecosystem of partners who provide similar tools that may fit your specific needs for your migration, helping you map dependencies, optimize workloads, and determine the best cloud computing model, so it may be prudent to look at others if you have specific needs or requirements. We provide the ability to monitor your applications in real time, from both an app-performance and a business-performance perspective, providing the data you need to see your application in action, validate the decisions and money spent on the migration, improve your user experience, and rapidly change your development and release cycles.
  • Have a service mesh in place before you start making the transition, along with an API ingress management platform (service control platform) as an entry point for every connection into the monolith. Implement security and observability. Existing service mesh solutions are hyper-focused on greenfield deployments and K8s. The mesh creates an enclosure around the monolith during the transition.
  • There are a couple of approaches that work well when moving to microservices. Break the application down into logical components that each fit well in a single microservice, and then build it again as component pieces; this will minimize code changes while fully taking advantage of running smaller, autonomous application components that are easier to deploy and manage separately. Build data-access-as-a-service for the application to use for all data request and write calls. This moves data complexity into its own domain, decoupling the data from the application components. It's essential to embrace containers and container orchestration and to use DevOps tools, integrating security into your processes and automating throughout.
  • You need a hybrid integration platform that supports direct legacy to microservice communication so that you can choose the ideal digital environment without compromises based on your original IT debt.
  • Design patterns such as the Strangler pattern are effective in migrating components of legacy apps to the microservices architecture. Saga, Circuit Breaker, Chassis, and Contract are other design patterns that can help (minimal Strangler and Circuit Breaker sketches follow this list). Another technique/practice is to decompose by domain models so that microservices reflect business functionality. A third aspect is to have data segregation within each microservice or for a group of related microservices, supplemented by a more traditional permanent data store for some applications. Non-relational databases, lightweight message queues, API gateways, and serverless platforms are speeding up migrations. Server-side JavaScript and newer languages such as Go are fast becoming the programming platforms of choice for developing self-sufficient services.
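
To make the strangler pattern mentioned in several of the answers above concrete, here is a minimal sketch of the routing decision at the heart of a strangler facade; the path prefixes and service hosts are hypothetical, and in practice this logic typically lives in an API gateway or service mesh rather than hand-written code.

```java
// Minimal sketch of a strangler-facade routing decision: requests for
// capabilities that have already been carved out go to the new microservice,
// everything else still goes to the monolith. Paths and hosts are hypothetical.
import java.net.URI;
import java.util.Map;

public class StranglerRouter {

    // Capabilities migrated so far, mapped to their new service base URLs.
    private static final Map<String, URI> MIGRATED = Map.of(
            "/billing",  URI.create("http://billing-service:8080"),
            "/invoices", URI.create("http://invoice-service:8080"));

    private static final URI LEGACY_MONOLITH = URI.create("http://legacy-app:8080");

    public URI route(String requestPath) {
        return MIGRATED.entrySet().stream()
                .filter(e -> requestPath.startsWith(e.getKey()))
                .map(Map.Entry::getValue)
                .findFirst()
                .orElse(LEGACY_MONOLITH);   // unmigrated paths fall through to the monolith
    }

    public static void main(String[] args) {
        StranglerRouter router = new StranglerRouter();
        System.out.println(router.route("/billing/42"));   // -> new billing-service
        System.out.println(router.route("/reports/2019")); // -> legacy monolith
    }
}
```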
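
Similarly, here is a minimal sketch of the Circuit Breaker pattern using Resilience4j; the inventory-service call is hypothetical. Once the breaker opens after repeated failures, calls fail fast instead of piling up on a struggling dependency.

```java
// Minimal sketch of the Circuit Breaker pattern with Resilience4j.
// The inventory-service call and fallback value are hypothetical.
import io.github.resilience4j.circuitbreaker.CircuitBreaker;

import java.util.function.Supplier;

public class InventoryClient {

    private final CircuitBreaker breaker = CircuitBreaker.ofDefaults("inventory-service");

    public String fetchStockLevel(String sku) {
        Supplier<String> guardedCall = CircuitBreaker.decorateSupplier(
                breaker,
                () -> callInventoryService(sku));   // the remote call being protected

        try {
            return guardedCall.get();
        } catch (Exception e) {
            // Fallback when the call fails or the breaker is open.
            return "unknown";
        }
    }

    private String callInventoryService(String sku) {
        // Placeholder for an HTTP/gRPC call to the inventory microservice.
        return "42 units of " + sku;
    }

    public static void main(String[] args) {
        System.out.println(new InventoryClient().fetchStockLevel("SKU-1001"));
    }
}
```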

Here’s who shared their insights:

Further Reading

TechTalks With Tom Smith: What Devs Need to Know About Kubernetes

A Guide to Cloud Migration
