
The folks at AlphaBravo share their favorite tools from the CNCF.

Hot on the heels of our announcement as a CNCF partner (link here in case you missed it), we wanted to talk about some of our favorite CNCF projects right now. We will be posting a more in-depth blog on each of these projects in the coming weeks, so stay tuned.


Kubernetes

This one is a no-brainer. As well as being one of the fastest-growing and most committed-to projects ever, Kubernetes provides something that no one outside Google had before: a scalable, enterprise-grade software and container deployment engine. And it is open source and free to use. It is so much of a game-changer that the DOD has mandated that CNCF-approved Kubernetes be used for all future projects.

Kubernetes takes what Docker or LXD does for a single machine and makes it massively scalable for anyone to use.
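
To make that concrete, here is a minimal sketch of a Deployment manifest (the image name, port, and probe path are placeholders): you declare the desired state, and Kubernetes continuously works to keep the cluster in that state.

    # Hypothetical example: three replicas of a web service with resource
    # requests (for bin packing) and a liveness probe (for self-healing).
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: registry.example.com/web:1.0   # placeholder image
              ports:
                - containerPort: 8080
              resources:
                requests:
                  cpu: 250m
                  memory: 128Mi
                limits:
                  cpu: 500m
                  memory: 256Mi
              livenessProbe:
                httpGet:
                  path: /healthz
                  port: 8080
                initialDelaySeconds: 10
                periodSeconds: 10

Apply it with kubectl apply and Kubernetes schedules the pods, restarts any that fail their health check, and rolls out changes when the manifest is updated.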

We aren’t saying that Kubernetes is easy. With all this power comes a great amount of complexity.

Key Highlights

  • Service discovery and load balancing: Kubernetes can expose a container using a DNS name or its own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment stays stable.
  • Storage orchestration: Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, public cloud providers, and more.
  • Automated rollouts and rollbacks: You can describe the desired state for your deployed containers using Kubernetes, and it can change the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers and adopt all their resources to the new container.
  • Automatic bin packing: Kubernetes allows you to specify how much CPU and memory (RAM) each container needs. When containers have resource requests specified, Kubernetes can make better decisions to manage the resources for containers.
  • Self-healing: Kubernetes restarts containers that fail, replaces containers, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve.
  • Secret and configuration management: Kubernetes lets you store and manage sensitive information, such as passwords, OAuth tokens, and ssh keys. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
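
That last highlight is worth a quick sketch: a Secret can be created once and injected into pods as environment variables, so credentials never have to be baked into an image (the names and values below are purely illustrative).

    apiVersion: v1
    kind: Secret
    metadata:
      name: db-credentials
    type: Opaque
    stringData:               # plain-text values; Kubernetes stores them base64-encoded
      DB_USER: app
      DB_PASSWORD: change-me
    ---
    # Referenced from a container spec in a Deployment or Pod:
    #   envFrom:
    #     - secretRef:
    #         name: db-credentials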

Why Should You Care?

In this new world of DevSecOps, Agile, microservices, and the like, your organization needs a tool that is battle-tested and ready to scale with your workloads. Kubernetes is exactly that tool.

Helm

So, you are sold on Kubernetes. Now what? Does your team need to learn how to package and deploy all the common software you need just to get it up and running? No, they do not.

Enter Helm, “The package manager for Kubernetes.”

Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application.

Charts are easy to create, version, share, and publish. The latest version of Helm is maintained by the CNCF – in collaboration with Microsoft, Google, Bitnami and the Helm contributor community.
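
As a rough sketch, a chart for one of your own services is little more than a Chart.yaml plus templates. The example below (names and versions are made up) pulls in a PostgreSQL sub-chart as a dependency and installs everything with a single helm install command.

    # Chart.yaml -- a hypothetical chart for an internal service
    apiVersion: v2
    name: my-service
    description: Example chart for an internal web service
    type: application
    version: 0.1.0        # chart version, tracked in SCM
    appVersion: "1.0.0"   # version of the application being packaged
    dependencies:
      - name: postgresql
        version: "12.x.x"
        repository: https://charts.bitnami.com/bitnami
    # Fetch the sub-chart and install everything with:
    #   helm dependency update ./my-service
    #   helm install my-service ./my-service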

Key Highlights

  • Simple to Deploy: Easily deploy complex software solutions to Kubernetes with a single command.
  • High quality, open-source maintained charts: Leverage the well-maintained public Helm chart repositories. You can choose from the “stable” charts or, if you are feeling a bit more adventurous, the “incubator” charts.
  • Package and version control your own apps: You can write Helm charts for your own applications and even reference sub-charts to be deployed as part of the solution. This allows you to write your software infrastructure as code and to track changes to your deployments easily via SCM.

Why Should You Care?

As we stated earlier, Kubernetes is complex. Helm removes some of that complexity by providing an easy deployment method and a massive collection of pre-built charts to give your team a head start in container orchestration.

Prometheus

What good is all this infrastructure if you have no way to monitor it and get metrics about what is going on in your deployments? Prometheus solves that problem.

Prometheus is a systems and service monitoring framework that collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed to be true.

When used in conjunction with Grafana (visualization) and Alertmanager (alerting), Prometheus provides a powerful path to observability for your apps.
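
A minimal sketch of what that looks like in practice (job names, targets, and metric names are assumptions): Prometheus scrapes each target on an interval, evaluates the rule file, and hands any firing alerts to Alertmanager.

    # prometheus.yml
    global:
      scrape_interval: 30s
      evaluation_interval: 30s

    scrape_configs:
      - job_name: node
        static_configs:
          - targets: ["node-exporter:9100"]
      - job_name: my-app
        metrics_path: /metrics
        static_configs:
          - targets: ["my-app:8080"]

    rule_files:
      - alert-rules.yml

    alerting:
      alertmanagers:
        - static_configs:
            - targets: ["alertmanager:9093"]

    # alert-rules.yml -- a PromQL-based alert on the HTTP 5xx rate
    groups:
      - name: availability
        rules:
          - alert: HighErrorRate
            expr: sum(rate(http_requests_total{status=~"5.."}[5m])) > 1
            for: 10m
            labels:
              severity: warning
            annotations:
              summary: Sustained HTTP 5xx error rate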

Key Highlights

  • Dimensional Data: Prometheus implements a highly dimensional data model. Time series are identified by a metric name and a set of key-value pairs.
  • Powerful Queries: PromQL allows slicing and dicing of collected time series data in order to generate ad-hoc graphs, tables, and alerts.
  • Simple Operation: Each server is independent for reliability, relying only on local storage. Written in Go, all binaries are statically linked and easy to deploy.
  • Many Integrations: Existing exporters allow bridging of third-party data into Prometheus. Examples: system statistics, as well as Docker, HAProxy, StatsD, and JMX metrics.

Why Should You Care?

If you are on the path to being cloud-native and prefer using open-source software, there is no better choice for monitoring your Kubernetes cluster and applications than Prometheus.

CRI-O

Many organizations leverage Docker for development, and rightly so. The Docker toolset is great for developers and infrastructure teams to build out and test their containers.

However, when you get to production, you do not necessarily need that additional overhead. This is why Red Hat created CRI-O.

CRI-O is an implementation of the Kubernetes CRI (Container Runtime Interface) to enable using OCI (Open Container Initiative) compatible runtimes. It is a lightweight alternative to using Docker as the runtime for Kubernetes. It allows Kubernetes to use any OCI-compliant runtime as the container runtime for running pods. Today it supports runc and Kata Containers as the container runtimes, but in principle any OCI-conformant runtime can be plugged in.
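
For example, with CRI-O configured to offer both runc and Kata Containers as runtime handlers, a Kubernetes RuntimeClass lets you choose between them per pod. A minimal sketch (the handler name "kata" is an assumption and must match a runtime defined in crio.conf):

    apiVersion: node.k8s.io/v1
    kind: RuntimeClass
    metadata:
      name: kata
    handler: kata              # must match a runtime handler configured in crio.conf
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: sandboxed-app
    spec:
      runtimeClassName: kata   # run this pod with the Kata runtime instead of runc
      containers:
        - name: app
          image: registry.example.com/app:1.0   # placeholder image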

Key Highlights

  • Designed for Kubernetes: CRI-O was designed specifically to run on Kubernetes. This makes it one of the most efficient and compatible solutions to use.
  • Any Container, Any Registry: Provides support for running all OCI compliant container images and can pull from any container registry.
  • Stable and Highly Maintained: CRI-O is committed to being extremely stable and has contributors from major corporations including Red Hat, Intel, SUSE and IBM.

Why Should You Care?

Initially, running Docker as your CRI is fine, but as you begin to scale and look for additional efficiencies in your infrastructure, you can count on CRI-O to provide the lightweight and stable platform you need.

Envoy

The majority of operational problems that arise when moving to a distributed architecture and microservices are ultimately grounded in two areas: networking and observability. It is simply an orders-of-magnitude larger problem to network and debug a set of intertwined distributed services than a single monolithic application.

Envoy is an open source edge and service proxy, designed for cloud-native applications.

Envoy runs alongside every application and abstracts the network by providing common features in a platform-agnostic manner. When all service traffic in an infrastructure flows via an Envoy mesh, it becomes easy to visualize problem areas via consistent observability, tune overall performance, and add substrate features in a single place.
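
As a rough sketch, here is what a minimal static sidecar configuration looks like, assuming a local application listening on port 8080 (ports and names are illustrative): Envoy accepts traffic on one port, routes it to the app, and exposes stats and admin endpoints on another.

    static_resources:
      listeners:
        - name: ingress
          address:
            socket_address: { address: 0.0.0.0, port_value: 10000 }
          filter_chains:
            - filters:
                - name: envoy.filters.network.http_connection_manager
                  typed_config:
                    "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                    stat_prefix: ingress_http
                    route_config:
                      name: local_route
                      virtual_hosts:
                        - name: backend
                          domains: ["*"]
                          routes:
                            - match: { prefix: "/" }
                              route: { cluster: local_service }
                    http_filters:
                      - name: envoy.filters.http.router
                        typed_config:
                          "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
      clusters:
        - name: local_service
          type: STATIC
          connect_timeout: 1s
          load_assignment:
            cluster_name: local_service
            endpoints:
              - lb_endpoints:
                  - endpoint:
                      address:
                        socket_address: { address: 127.0.0.1, port_value: 8080 }
    admin:
      address:
        socket_address: { address: 127.0.0.1, port_value: 9901 }

In a service mesh such as Istio, this kind of configuration is generated and pushed to each Envoy dynamically rather than written by hand.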

Envoy is also the basis for other great tools like the Istio Service Mesh and Ambassador API Gateway.

Key Highlights

  • Out Of Process Architecture: Envoy is a self-contained, high-performance server with a small memory footprint. It runs alongside any application language or framework.
  • Advanced Load Balancing: Envoy supports advanced load balancing features including automatic retries, circuit breaking, global rate limiting, request shadowing, zone local load balancing, etc.
  • Observability: Deep observability of L7 traffic, native support for distributed tracing, and wire-level observability of MongoDB, DynamoDB, and more.

Why Should You Care?

Envoy is a powerful tool that allows observability and tracing to be added to any application you run in Kubernetes. When you are looking for signals that something is going wrong, or that it could be going better, Envoy gives you that visibility.

Closing

Thanks for taking the time to go over this list of our top 5 favorite CNCF projects. There are many other amazing projects hosted by the CNCF that are worthy of their own spotlight, but that is another blog for another day. If you want to see the full list of CNCF projects (be prepared to be overwhelmed), you can check it out here.

Please reach out to us at AlphaBravo if your organization needs DevSecOps or Kubernetes guidance and implementation expertise. Email us at

