
Manage Your Open Distro for Elasticsearch Alerting Monitors With odfe-monitor-cli


When you use Open Distro for Elasticsearch Alerting, you create monitors in Kibana. Setting up monitors with a UI is fast and convenient, making it easy to get started. If monitoring is a major workload for your cluster, though, you may have hundreds or even thousands of monitors to create, update, and tune over time. Setting so many monitors using the Kibana UI would be time-consuming and tedious. Fortunately, the Alerting plugin has a REST API that makes it easier for you to manage your monitors from the command line.

If you’re new to the alerting features in Open Distro for Elasticsearch, take a look at some prior posts, where we covered the basics of setting up a monitor in Kibana and alerting on Open Distro for Elasticsearch Security audit logs.

The Alerting plugin’s REST API lets you perform CRUD and other operations on your monitors. odfe-monitor-cli uses this API for its requests, but lets you save your monitors in YAML files. You can build an automated pipeline to deploy monitors to your cluster and use that pipeline to deploy the same monitors to multiple clusters that support development, testing, and production. You can maintain your monitors in a source control system for sharing, versioning, and review. The CLI helps you guard against drift by reading monitors from your cluster and diffing them against your YAML files.
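To make the request pattern concrete, here is a minimal sketch of the kind of call odfe-monitor-cli makes under the hood. The `_opendistro/_alerting/monitors` path is the Alerting plugin's documented REST endpoint; the helper function name and the hard-coded credentials are illustrative only.

```python
import base64

def monitor_request(host, monitor_id, user, password):
    """Build the URL and HTTP Basic auth header for fetching one monitor.

    Hypothetical helper; odfe-monitor-cli itself is written in Go, this
    just illustrates the Alerting REST API it talks to.
    """
    url = f"https://{host}/_opendistro/_alerting/monitors/{monitor_id}"
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    return url, {"Authorization": f"Basic {creds}"}

# Pass the returned url and headers to any HTTP client to GET the monitor.
url, headers = monitor_request("localhost:9200", "abc123", "admin", "admin")
```

Because the CLI uses HTTP basic authentication, every request it issues carries a header like the one built above.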

This blog post explains how to manage your monitors using YAML files through odfe-monitor-cli, available on GitHub under the Apache 2.0 license.


odfe-monitor-cli currently uses HTTP basic authentication. Make sure basic authentication is enabled on your cluster.

Install odfe-monitor-cli

The install process is a single command:

curl -sfL | bash -s -- -b /usr/local/bin

Note: See the odfe-monitor-cli README for other installation methods and instructions on how to build from source.

Once installation is successful, verify that it works as expected:

$ odfe-monitor-cli
This application will help you to manage the Opendistro alerting monitors using YAML files.

  odfe-monitor-cli [command]

Create and sync destinations

You define destinations in Open Distro for Elasticsearch Alerting to specify where messages (Slack, Chime, or custom) should be sent. odfe-monitor-cli doesn’t support managing destinations yet, so you need to use the Kibana UI to create them.

First, navigate to https://localhost:5601 to access Kibana. Log in, and select the Alerting tab. Select Destinations, and create a destination.


Open Distro for Elasticsearch's Alerting Destination definition pane. Setting up a destination for alerts.

On your computer, create a new directory, odfe-monitor-cli. This directory will hold the monitors you create, and any monitors or destinations you sync from your cluster.

$ mkdir odfe-monitor-cli
$ cd odfe-monitor-cli
$ odfe-monitor-cli sync --destinations #Sync remote destination

The final command in that sequence fetches all remote destinations and writes them to a new file, destinations.yml. The file contains a map of destination names and IDs. You’ll use the destination name later when you create a monitor. If you view the file using cat destinations.yml, it should look like this:

#destinations.yml file content
sample_destination: _6wzIGsBoP5_pydBFBzc
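Because the file is a flat name-to-ID map, you can read it from a script without a YAML library. The following sketch (a hypothetical helper, not part of odfe-monitor-cli) loads the map so you could look up destination IDs programmatically:

```python
def load_destinations(text):
    """Parse the flat name-to-ID map that `sync --destinations` writes."""
    dests = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if ":" in line:
            name, dest_id = line.split(":", 1)
            dests[name.strip()] = dest_id.strip()
    return dests

sample = """#destinations.yml file content
sample_destination: _6wzIGsBoP5_pydBFBzc
"""
print(load_destinations(sample))  # {'sample_destination': '_6wzIGsBoP5_pydBFBzc'}
```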

If you already have existing monitors on your cluster and would like to preserve them, you can sync those, as well. If not, skip this step. This command fetches all remote monitors to monitors.yml:

odfe-monitor-cli sync --monitors #Sync existing remote monitors

You can add additional directories under your root directory and break your monitors into multiple YAML files, organizing them however you see fit. When you use odfe-monitor-cli to send changes to your cluster, it walks the entire directory structure under the current directory, finding all .yml files. Use the --rootDir option to change the root directory to traverse.
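The traversal described above is easy to picture: collect every .yml file anywhere under the root directory. A sketch in Python (the CLI itself is written in Go; this is only an illustration of the behavior):

```python
from pathlib import Path

def find_monitor_files(root_dir):
    """Mimic the CLI's traversal: collect every .yml file under root_dir,
    however deeply nested, ignoring all other file types."""
    return sorted(str(p) for p in Path(root_dir).rglob("*.yml"))
```

Any directory layout works, as long as monitor definitions end in .yml.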

Create a new monitor

Use a text editor to create a new file, error-count-alert.yml. Copy and paste the YAML below into that file, and change destinationId to the name of an existing destination. You can place your file anywhere in or below the odfe-monitor-cli directory.

- name: 'Sample Alerting monitor'
  schedule:
    period:
      interval: 10
      unit: MINUTES
  enabled: true
  inputs:
    - search:
        indices:
          - log* # Change this per monitor; this is just an example
        query: # This block should be a valid Elasticsearch query
          size: 0
          query:
            match_all:
              boost: 1.0
  triggers:
    - name: '500'
      severity: '2'
      condition: | # This is how you can create a multiline condition script
        // Performs some crude custom scoring and returns true if that score exceeds a certain value
        int score = 0;
        for (int i = 0; i < ctx.results[0].hits.hits.length; i++) {
          // Weighs 500 errors 10 times as heavily as 503 errors
          if (ctx.results[0].hits.hits[i]._source.http_status_code == "500") {
            score += 10;
          } else if (ctx.results[0].hits.hits[i]._source.http_status_code == "503") {
            score += 1;
          }
        }
        if (score > 99) {
          return true;
        } else {
          return false;
        }
      actions:
        - name: Sample Action
          destinationId: sample_destination # This destination must exist in the destinations.yml file, otherwise the push throws an error.
          subject: 'There is an error'
          message: |
            Monitor {{ctx.monitor.name}} just entered an alert state. Please investigate the issue.
            - Trigger: {{ctx.trigger.name}}
            - Severity: {{ctx.trigger.severity}}
            - Period start: {{ctx.periodStart}}
            - Period end: {{ctx.periodEnd}}

odfe-monitor-cli provides a diff command that retrieves monitors from your cluster and walks your local directory structure to show you any differences between your cluster’s monitors and your local monitors. You can use the diff command to validate that no one has changed the monitors in your cluster. For now, call the diff command to verify that it finds the new monitor you just created.

$ odfe-monitor-cli diff
 These monitors are currently missing in alerting
name: 'Sample Alerting monitor'
type: 'monitor'
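The diff boils down to comparing two maps of monitors keyed by name. A sketch of that logic (hypothetical helper names; the real CLI also compares full monitor bodies):

```python
def diff_monitors(local, remote):
    """Compare local and remote monitors, each a dict of name -> definition.

    Returns (names missing on the cluster, names whose definitions differ).
    """
    missing = sorted(set(local) - set(remote))
    changed = sorted(n for n in local if n in remote and local[n] != remote[n])
    return missing, changed

local = {"Sample Alerting monitor": {"interval": 10}}
remote = {}
print(diff_monitors(local, remote))  # (['Sample Alerting monitor'], [])
```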

After verifying the diff, you could get any new or changed monitors reviewed by peers, or approved by your management or security department.

You use the push command to send your local changes to your Open Distro for Elasticsearch cluster. When you use push, odfe-monitor-cli calls the Run Monitor API to verify your monitor configurations and ensure that there are no errors. If any errors occur, odfe-monitor-cli displays them with details; fix them and re-run the push command until you get a clean run.

By default, the push command runs in dry run mode, simply diffing and checking the syntax of any additions. Because it doesn't publish anything to the cluster, you can't push accidental changes. Use the --submit option to send your changes to your cluster when you're ready:

$ odfe-monitor-cli push --submit

The push command does the following:

  • Runs and validates modified and new monitors.
  • Creates new monitors and updates existing monitors when the --submit flag is provided.
    Note: Pushing changes with --submit overwrites any changes you have made to existing monitors on your cluster (via Kibana or any other means).
  • Does not delete any monitors. Provide --delete along with --submit to delete all untracked monitors. Be careful! You can’t un-delete monitors.
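The behavior above can be sketched as a pure planning step that decides what would happen before anything is sent (hypothetical helper; the real CLI's logic lives in its Go source):

```python
def plan_push(local, remote, submit=False, delete=False):
    """Dry-run by default: report creates/updates/deletes, act only on --submit.

    local and remote are dicts of monitor name -> definition.
    """
    return {
        "create": sorted(set(local) - set(remote)),
        "update": sorted(n for n in local if n in remote and local[n] != remote[n]),
        # untracked monitors are deleted only with both --submit and --delete
        "delete": sorted(set(remote) - set(local)) if (submit and delete) else [],
        "apply": submit,
    }
```

Note how the delete list stays empty unless both flags are set, mirroring the CLI's safety behavior.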


This post introduced you to odfe-monitor-cli, a command-line interface for managing monitors on your Open Distro for Elasticsearch cluster. odfe-monitor-cli makes it easy to store your monitors in version control and deploy these monitors to your Open Distro for Elasticsearch cluster. You can validate that your monitors work as intended and share monitors between environments.

Have an issue or question? Want to contribute? Check out the Open Distro for Elasticsearch forums. You can file issues here. We welcome your participation on the project! See you on the forums and code repos!

from AWS Open Source Blog

Scale HPC Workloads with Elastic Fabric Adapter and AWS ParallelCluster


In April 2019, AWS announced the general availability of Elastic Fabric Adapter (EFA), an EC2 network device that improves the throughput and scalability of distributed High Performance Computing (HPC) and Machine Learning (ML) workloads. Today, we're excited to announce support for EFA in AWS ParallelCluster.

EFA is a network interface for Amazon EC2 instances that enables you to run HPC applications requiring high levels of inter-instance communications (such as computational fluid dynamics, weather modeling, and reservoir simulation) at scale on AWS. It uses an industry-standard operating system bypass technique, with a new custom Scalable Reliable Datagram (SRD) Protocol to enhance the performance of inter-instance communications, which is critical to scaling HPC applications. For more on EFA and supported instance types, see Elastic Fabric Adapter (EFA) for Tightly-Coupled HPC Workloads.

AWS ParallelCluster takes care of the undifferentiated heavy lifting involved in setting up an HPC cluster with EFA enabled. When you set the enable_efa = compute flag in your cluster section, AWS ParallelCluster adds an EFA device to the compute instances. Under the covers, AWS ParallelCluster performs the following steps:

  1. Sets InterfaceType = efa in the Launch Template.
  2. Ensures that the security group has rules to allow all inbound and outbound traffic to itself. Unlike traditional TCP traffic, EFA requires an inbound rule and an outbound rule that explicitly allow all traffic to its own security group ID sg-xxxxx. See Prepare an EFA-enabled Security Group for more information.
  3. Installs the EFA kernel module, an AWS-specific version of the libfabric network stack, and Open MPI 3.1.4.
  4. Validates the instance type, base OS, and placement group.
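Step 2 is the one most often done by hand when people set up EFA themselves. A sketch of the self-referencing rule it creates, shaped like the parameters boto3's `authorize_security_group_ingress`/`_egress` calls accept (the group ID is a made-up placeholder; actually applying the rule to a real group is omitted here):

```python
def self_referencing_rule(sg_id):
    """Build the all-traffic rule EFA needs: the security group must allow
    all inbound and outbound traffic to its own group ID."""
    return {
        "IpProtocol": "-1",  # -1 means all protocols
        "UserIdGroupPairs": [{"GroupId": sg_id}],
    }

# The same rule dict is used for both the ingress and the egress call.
rule = self_referencing_rule("sg-0123456789abcdef0")
```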

To get started, you'll need AWS ParallelCluster set up; see Getting Started with AWS ParallelCluster. For this tutorial, we'll assume that you have AWS ParallelCluster installed and are familiar with the ~/.parallelcluster/config file.

Modify your ~/.parallelcluster/config file to include a cluster section that minimally includes the following:

[global]
cluster_template = efa
update_check = true
sanity_check = true

[aws]
aws_region_name = [your_aws_region]

[cluster efa]
key_name =               [your_keypair]
vpc_settings =           public
base_os =                alinux
master_instance_type =   c5n.xlarge
compute_instance_type =  c5n.18xlarge
placement_group =        DYNAMIC
enable_efa =             compute

[vpc public]
vpc_id = [your_vpc]
master_subnet_id = [your_subnet]
  • base_os – Currently we support Amazon Linux (alinux), CentOS 7 (centos7), and Ubuntu 16.04 (ubuntu1604) with EFA.
  • master_instance_type – This can be any instance type (it sits outside the placement group formed for the compute nodes and does not have EFA enabled). We chose c5n.xlarge for its lower price while still offering good network performance compared with the c5n.18xlarge.
  • compute_instance_type – EFA is enabled only on the compute nodes; this is where your code runs when submitted as a job through one of the schedulers. These instances must be one of the supported instance types, which at the time of writing include c5n.18xlarge, i3en.24xlarge, and p3dn.24xlarge. See the docs for currently supported instances.
  • placement_group – Places your compute nodes physically adjacent, which lets you benefit fully from EFA's low network latency and high throughput.
  • enable_efa – This is the only new parameter we've added to turn on EFA support for the compute nodes. At this time, the only option is compute, which draws attention to the fact that EFA is enabled only on the compute nodes.

Now you can create the cluster:

$ pcluster create efa
MasterServer: RUNNING
ClusterUser: ec2-user

Once cluster creation is complete, you can SSH into the cluster:

$ pcluster ssh efa -i ~/path/to/ssh_key

You can now see that there’s a module, openmpi/3.1.4, available. When this is loaded, you can confirm that mpirun is correctly set on the PATH to be the EFA-enabled version in /opt/amazon/efa:

[ec2-user@ip-172-31-x-x ~]$ module avail

----------------------------------------------- /usr/share/Modules/modulefiles ------------------------------------------------
dot           module-git    module-info   modules       null          openmpi/3.1.4 use.own
[ec2-user@ip-172-31-x-x ~]$ module load openmpi/3.1.4
[ec2-user@ip-172-31-x-x ~]$ which mpirun
/opt/amazon/efa/bin/mpirun

This version of Open MPI is compiled with support for libfabric, a library that lets us communicate over the EFA device through standard MPI commands. At the time of writing, Open MPI is the only MPI library that supports EFA; Intel MPI support is expected shortly.

Now you’re ready to submit a job. First create a file submit.sge containing the following:

#$ -pe mpi 2

module load openmpi
mpirun -N 1 -np 2 [command here]

CFD++ Example

EFA speeds up common workloads, such as Computational Fluid Dynamics. In the following example, we ran CFD++ on a 24M cell case using EFA-enabled c5n.18xlarge instances. CFD++ is a flow solver developed by Metacomp Technologies. The model is an example of a Mach 3 external flow calculation (it’s a Klingon bird of prey):

example of a Mach 3 external flow calculation.

You can see the two scaling curves below; the blue curve shows scaling with EFA; the purple curve without EFA. EFA offers significantly greater scaling and is many times more performant at higher core counts.

scaling curves, with and without EFA.

New Docs!

Last, but definitely not least, we are also excited to announce new docs for AWS ParallelCluster. These are available in ten languages and improve on the readthedocs version in many ways. Take a look! Of course, you can still submit doc updates by creating a pull request on the AWS Docs GitHub repo.

AWS ParallelCluster is a community-driven project. We encourage submitting a pull request or providing feedback through GitHub issues. User feedback drives our development and pushes us to excel in every way!

from AWS Open Source Blog

Set up Multi-Tenant Kibana Access in Open Distro for Elasticsearch


Elasticsearch has become a default choice for storing and analyzing log data to deliver insights on your application’s performance, your security stance, and your users’ interactions with your application. It’s so useful that many teams adopt Elasticsearch early in their development cycle to support DevOps. This grass-roots adoption often mushrooms into a confusing set of clusters and users across a large organization. At some point, you want to centralize logs so that you can manage your spending and usage more closely.

The flip side of a centralized logging architecture is that you must manage access to the data. You want your payments processing department to keep its data private and invisible to, for example, your front end developers. Open Distro for Elasticsearch Security allows you to manage access to data at document- and field-level granularity. You create Roles, assign Action Groups to those roles, and map Users to the roles to control their access to indices.

Access control for Kibana is harder to achieve. Kibana's visualizations and dashboards normally share a common index, .kibana. If your users have access to that index, then they have access to all of the visualizations in it. The Open Distro for Elasticsearch Security plugin fixes this problem by letting you define Tenants — silos that segregate visualizations and dashboards, for a multi-tenant Kibana experience. In this post, I'll walk through setting up multi-tenancy for two hypothetical departments, payments and front end.


Kibana multi-tenancy is enabled out of the box in Open Distro for Elasticsearch. If you have disabled multi-tenancy, our documentation will guide you in enabling it. You’ll also need a running Open Distro for Elasticsearch cluster. I ran esrally, using the http_logs track to generate indexes in my cluster. I’ll use logs-221998 for the payments department and logs-211998 for the front end department.

Important: You must give your roles, users, and tenants different names! I’ve used the convention of appending -role, -user, and -tenant to ensure that the names are unique.

Set up roles

Roles are the basis for access control in Open Distro for Elasticsearch Security. A role specifies which actions its users can take and which indices those users can access.

I'll create two roles — payments-role and frontend-role — each with access to the appropriate underlying index. To create a role, navigate to https://localhost:5601 to access Kibana. Log in with a user that has administrator rights (the default admin user is my choice). Click Explore on my own to dismiss the splash screen and click the Security tab in Kibana's left rail, then click the Roles button:


Open Distro for Elasticsearch Security plugin main panel, selecting the roles button


Next, click the “+” button to add a new role.


Open Distro for Elasticsearch Security plugin, pane for adding a new role


In the Overview section, name the role payments-role and then click the Index Permissions tab at the top of the page. You can also give the role cluster-level permissions in the Cluster Permissions tab. For the purposes of this post, I’ll limit to index-level access control.


Open Distro for Elasticsearch Security plugin, pane setting index access permission

In the Index Permissions tab, click the Add new index and document Type button. In the resulting page, select logs-221998 from the Index drop-down, then click Save.


Open Distro for Elasticsearch Security plugin restricting a role's access to a particular index


Clicking Save reveals a Permissions: Action Groups drop-down. Select ALL (you can restrict the role further by choosing READ, for example, for read-only access). Don't click Save Role Definition yet; you still need to add the tenant. Select the Tenants tab and click the Add button. Fill in the Tenant field with payments-tenant. You can use any unique value for the field; it's just a name you choose to refer to the tenant.

You are done configuring this role. Click Save Role Definition.


Open Distro for Elasticsearch Security plugin, pane for adding a tenant to a role


Repeat this process to create the frontend-role role, with access to a different index and a different tenant name. I’m using logs-211998, and frontend-tenant in my cluster.
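If you prefer to script this setup rather than click through Kibana, the Security plugin also exposes a REST API for roles (PUT against _opendistro/_security/api/roles/&lt;name&gt;). The sketch below builds a role body granting one index and one tenant; the field names follow the legacy schema and can differ between plugin versions, so treat this as illustrative rather than authoritative:

```python
import json

def role_body(index, tenant):
    """Assemble an illustrative role definition for the Security REST API:
    full access to one index, read/write access to one tenant."""
    return json.dumps({
        "indices": {index: {"*": ["ALL"]}},
        "tenants": {tenant: "RW"},
    })

payments = role_body("logs-221998", "payments-tenant")
frontend = role_body("logs-211998", "frontend-tenant")
```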

Set up users

Users in Open Distro for Elasticsearch are authenticated entities. You add them to roles to grant them the permissions that those roles allow. The Open Distro for Elasticsearch Security plugin has an internal user database. If you authenticate directly, via Kibana login or basic HTTP authentication, you are logged in as that internal user.

You’ll also see the term “backend” in many of the following screens. A backend role is the role supplied by a federated identity provider. The backend role is distinct from, and mapped onto, internal roles and users. It’s a little confusing, but for this post, we can ignore the backend roles.

I’ll create two users — payments-user and frontend-user. Click the Security tab in Kibana’s left rail, then click the Internal User Database button.


Open Distro for Elasticsearch Security plugin, main panel selecting the internal user database


Click the '+' symbol to Add a new internal user:


Open Distro for Elasticsearch Security plugin, pane for adding a new user to the internal user database


Fill in the Username, Password, and Repeat password fields. Click Submit.


Open Distro for Elasticsearch Security plugin, pane for setting a new user's username and password


Repeat this process to create the frontend-user.

Map Users to Roles

The last step is to map the users you created (along with their tenants) to the roles that you created. Click the Security tab and then the Role Mappings button.


Open Distro for Elasticsearch Security plugin, main panel, selecting the role mappings button


Click the “+” button to add a new role mapping. Select payments-role from the Role dropdown. Click the + Add User button, and type payments-user in the text box. Finally, click Submit.


Open Distro for Elasticsearch Security plugin, mapping the user onto the role


Repeat the process for the frontend-role and frontend-user.

In order for your users to be able to use Kibana, you need to add them to the kibana_user role, too. From the role mappings screen, click the edit pencil for kibana_user.


Open Distro for Elasticsearch Security plugin, showing where to click to add users to the kibana_user role


On the next screen, click Add User and type payments-user in the text box. Click Add User again to add the frontend-user. Click Submit to save your changes.

Congratulations, your setup is complete!

Test your tenants

Note: you may run into problems if your browser has cached your identity in cookies. To test in a clean environment, with Firefox, use File > New Private Window, which opens a window with no saved cookies. In Chrome, use File > New Incognito Window.

To test your tenancy and access control, you'll create a visualization as the payments-user, with the payments-tenant, and verify that you cannot access that visualization when you log in as the frontend-user. In your new window, navigate to https://localhost:5601 and log in as the payments-user. Click Explore on my own to dismiss the splash screen. Click the Tenants tab in Kibana's left rail.


Open Distro for Elasticsearch Security plugin selecting the tenant for Kibana visualizations and dashboards


You can see that the Private tenant is currently selected. Every role has a Global tenant and a Private tenant. Work that you do when you select the Global tenant is visible to all other users/tenants. Work that you do in the Private tenant is visible only to the logged-in user (currently, the payments-user). Lastly, you can see the payments-tenant tenant. Only users that have roles with the payments-tenant can see visualizations and dashboards created while that tenant is selected. Click Select next to payments-tenant to choose it.

Now you need to create and save a visualization. First, create an index pattern. Click the Management tab, and then click Index Patterns. Type logs-221998 in the Index Pattern text box. Click Next Step. On the following screen, set your Time filter field name.


Creating an index pattern in Kibana, showing how to set the specific index


Note: Normally, you use a wildcard for your index pattern. Esrally created 6 indices in my cluster, all with the pattern logs-XXXXXX. When you set up your roles, you gave access to a specific index for each role. In this case, the payments-user has access only to the logs-221998 index. When you create a visualization as this user, Kibana will access all indices that match the wildcard in the index pattern you create now, including the other five indices that are prohibited. Kibana fails with an access error. To work around this issue, type the index name exactly. For centralized logging, make sure that each department uses a unique prefix for its indices. Then your index patterns can contain wildcard values for each department.

In the Visualize tab, build a simple Metric with a traffic count. (Note: Rally's http_logs data has timestamps from 1998; you'll need to set your time selector accordingly to see any results.) Save it as payments-traffic.


A simple metric visualization in Kibana, with the value 10,716,760


Log out and log back in as frontend-user in a New Private Window. On the Tenants tab, you will see that you have the frontend-tenant, not the payments-tenant.


Open Distro for Elasticsearch Security plugin, tenant selection pane. The selected tenant does not have access to other tenants


Select the Visualize tab and you will be asked to create an index pattern. Use the logs-211998 index. Select the Visualize tab again. Kibana tells you that you have no visualizations.


In this post, you used Open Distro for Elasticsearch Security to create two users with their own Kibana tenants provided through the roles you assigned to them. Open Distro for Elasticsearch’s tenancy model keeps tenants segregated so that your payments department’s visualizations and dashboards are not visible to users in your front end department. You then further restricted access to the underlying indices so that users in your front end department can’t access the data in your payment department’s indices. You’ve created a silo that allows you to manage your sensitive data!

Join in on GitHub to improve project documentation, add examples, submit feature requests, and file bug reports. Check out the code, build a plugin, and open a pull request – we’re happy to review and figure out steps to integrate. We welcome your participation on the project. If you have any questions, don’t hesitate to ask on the community discussion forums.

from AWS Open Source Blog

Announcing Gluon Time Series, an Open-Source Time Series Modeling Toolkit


Today, we announce the availability of Gluon Time Series (GluonTS), an MXNet-based toolkit for time series analysis using the Gluon API. We are excited to give researchers and practitioners working with time series data access to this toolkit, which we have built for our own needs as applied scientists working on real-world industrial time series problems both at Amazon and on behalf of our customers. GluonTS is available as open source software on GitHub today, under the Apache License, version 2.0.

Time series applications are everywhere

We can find time series data, i.e. collections of data points indexed by time, across many different fields and industries. The time series of item sales in retail, metrics from monitoring devices, applications, or cloud resources, or time series of measurements generated by Internet of Things sensors, are only some of the many examples of time series data. The most common machine learning tasks related to time series are extrapolation (forecasting), interpolation (smoothing), detection (such as outlier, anomaly, or change-point detection), and classification.

Within Amazon, we record and make use of time series data across a variety of domains and applications. Some of these include forecasting the product and labor demand in our supply chain, or making sure that we can elastically scale AWS compute and storage capacity for all AWS customers. Anomaly detection on system and application metrics allows us to automatically detect when cloud-based applications are experiencing operational issues.

With GluonTS, we are open-sourcing a toolkit that we’ve developed internally to build algorithms for these and similar applications. It allows machine learning scientists to build new time series models, in particular deep-learning-based models, and compare them with state-of-the-art models included in GluonTS.

GluonTS highlights

GluonTS enables users to build time series models from pre-built blocks that contain useful abstractions. GluonTS also has reference implementations of popular models assembled from these building blocks, which can be used both as a starting point for model exploration, and for comparison. We’ve included tooling in GluonTS to alleviate researchers’ burden of having to re-implement methods for data processing, backtesting, model comparison, and evaluation. All of these are a time-sink and a source of error — after all, a bug in evaluation code leading to mischaracterization of a model’s actual performance can be much more severe than a bug in an algorithm (which would be detected before it is deployed).

Building blocks for assembling new time series models

We have written GluonTS such that many components can be combined and assembled in different ways, so that we can come up with and test new models quickly. Perhaps the most obvious components to include are neural network architectures, and GluonTS offers a sequence-to-sequence framework, auto-regressive networks, and causal convolutions, to name just a few. We’ve also included finer-grained components. For example, forecasts should typically be probabilistic, to better support optimal decision making. For this, GluonTS offers a number of typical parametric probability distributions, as well as tools for modeling cumulative distribution functions or quantile functions directly, which can be readily included in a neural network architecture. Further probabilistic components such as Gaussian Processes and linear-Gaussian state-space models (including a Kalman filter implementation) are also included, so that combinations of neural network and traditional probabilistic models can easily be created. We’ve also included data transformations such as the venerable Box-Cox transformation, whose parameters can be learned jointly with other model parameters.

Easy comparison with state-of-the-art models

GluonTS contains reference implementations of deep-learning-based time series models from the literature, which showcase how to use the components and can be used as starting points for model exploration. We’ve included models from our own line of research, such as DeepAR and spline quantile function RNNs, but also sequence models from other domains such as WaveNet (originally for speech synthesis, adapted here for the forecasting use case). GluonTS makes it easy to compare against these reference implementations, and also allows easy benchmarking against other models from other open-source libraries, such as Prophet and the R forecast package.


GluonTS includes tooling for loading and transforming input data, so that data in different forms can be used and transformed to meet the requirements of a particular model. We have also included an evaluation component that computes many of the accuracy metrics discussed in the forecasting literature, and we look forward to contributions from the community in adding more metrics. As there are subtleties around how exactly the metrics are computed, having a standardized implementation is invaluable for making meaningful and reproducible comparisons between different models.
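As an example of why standardized metric implementations matter, consider the pinball (quantile) loss, one of the standard metrics used to score probabilistic forecasts. This is a generic textbook implementation, not GluonTS's own code; even a metric this simple has sign conventions that are easy to get subtly wrong:

```python
def quantile_loss(y_true, y_pred, q):
    """Pinball (quantile) loss for a forecast of the q-th quantile.

    Under-predictions are weighted by q, over-predictions by (1 - q),
    so minimizing this loss recovers the true q-th quantile.
    """
    return sum(
        q * (y - p) if y >= p else (q - 1) * (y - p)
        for y, p in zip(y_true, y_pred)
    )

# For the median (q = 0.5) this reduces to half the absolute error.
print(quantile_loss([10], [8], 0.5))  # 1.0
```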

While metrics are, of course, important, the work of exploring, debugging, and continuously improving models often starts with plotting results on controlled data. For plotting, we rely on Matplotlib, and we’ve included a synthetic data set generator that can simulate time series data with various configurable characteristics.

How does GluonTS relate to Amazon Forecast?

GluonTS is targeted towards researchers, i.e. machine learning, time series modeling, and forecasting experts who want to design novel time series models, build their models from scratch, or require custom models for special use cases. For production use cases and users who don't need to build custom models, Amazon offers Amazon Forecast, a fully-managed service that uses machine learning to deliver highly accurate forecasts. With Amazon Forecast, no machine learning expertise is required to build accurate, machine learning-based time series forecasting models, as Amazon Forecast employs AutoML capabilities that take care of the heavy lifting of selecting, building, and optimizing the right models for you.

Getting started with GluonTS

GluonTS is available on GitHub and on PyPI. After you've completed installation, it's easy to arrive at your first forecast using a pre-built forecasting model. Once you have collected your data, training a model and producing the following plot takes about ten lines of Python.


The figure above shows the forecast for the volume of Tweets (every five minutes) mentioning the AMZN ticker symbol. This was obtained by training a model on data from the Numenta Anomaly Benchmark dataset.

It is early days for us and GluonTS. We expect GluonTS to evolve over time, and we will add more applications beyond forecasting. Some more work is needed to reach a 1.0 version. We look forward to feedback and contributions to GluonTS in the form of bug reports, proposals for feature enhancements, pull requests for new and improved functionality, and, of course, implementations of the latest and greatest time series models.

Related literature and upcoming events

We have a paper on GluonTS at the ICML 2019 Time Series workshop and we will be giving tutorials at SIGMOD 2019 and KDD 2019 on forecasting, where we will feature GluonTS.

A sub-selection of publications featuring models in GluonTS:

Also see, on the AWS Machine Learning blog: Creating neural time series models with Gluon Time Series.

Lorenzo Stella, Syama Rangapuram, Konstantinos Benidis, Alexander Alexandrov, David Salinas, Danielle Maddix, Yuyang Wang, Valentin Flunkert, Jasper Schulz, and Michael Bohlke-Schneider also contributed to this post and to GluonTS.

from AWS Open Source Blog

New! Open Distro for Elasticsearch’s Job Scheduler Plugin


Open Distro for Elasticsearch’s Job Scheduler plugin provides a framework for developers to accomplish common, scheduled tasks on their cluster. You can implement Job Scheduler’s Service Provider Interface (SPI) to take snapshots, manage your data’s lifecycle, run periodic jobs, and much more.

When you use Job Scheduler, you build a plugin that implements interfaces provided in the Job Scheduler library. You can schedule jobs by specifying an interval, or using a Unix Cron expression to define a more flexible schedule to execute your job. Job Scheduler has a sweeper that listens for update events on the Elasticsearch cluster, and a scheduler that manages when jobs run.
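The scheduling logic itself lives in the Java plugin; purely as an illustration of the interval style of schedule described above, computing the next execution time from a fixed interval can be sketched in Python (the function names here are hypothetical, not part of Job Scheduler's SPI):

```python
from datetime import datetime, timedelta

def next_run(last_run: datetime, interval_minutes: int) -> datetime:
    """Next execution time for a simple interval schedule."""
    return last_run + timedelta(minutes=interval_minutes)

def missed_runs(last_run: datetime, interval_minutes: int, now: datetime) -> int:
    """How many scheduled executions fall between last_run and now."""
    elapsed_minutes = (now - last_run).total_seconds() / 60
    return max(0, int(elapsed_minutes // interval_minutes))

last = datetime(2019, 6, 1, 12, 0)
print(next_run(last, 15))                                   # 2019-06-01 12:15:00
print(missed_runs(last, 15, datetime(2019, 6, 1, 13, 5)))   # 4
```

A cron expression covers the cases a fixed interval cannot, such as "every weekday at 02:00."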

Build, install, code, run!

You can build and install the Job Scheduler plugin by following the instructions in the Open Distro for Elasticsearch Job Scheduler GitHub repo.

Please take a look at the source code – play with it, build with it! Let us know if it doesn’t support your use case or if you have ideas for how to improve it. The sample-extension-plugin example code in the Job Scheduler source repo provides a complete example of using Job Scheduler.

Join in on GitHub to improve project documentation, add examples, submit feature requests, and file bug reports. Check out the code, build a plugin, and open a pull request – we’re happy to review and figure out steps to integrate. We welcome your participation on the project. If you have any questions, don’t hesitate to ask on the community discussion forums.

from AWS Open Source Blog

Store Open Distro for Elasticsearch’s Performance Analyzer Output in Elasticsearch


Open Distro for Elasticsearch‘s Performance Analyzer plugin exposes a REST API that returns metrics from your Elasticsearch cluster. To get the most out of these metrics, you can store them in Elasticsearch and use Kibana to visualize them. While you can use Open Distro for Elasticsearch’s PerfTop to build visualizations, PerfTop doesn’t retain data and is meant to be lightweight.

In this post, I’ll explore Performance Analyzer’s API through a code sample that reads Performance Analyzer’s metrics and writes them to Elasticsearch. You might wonder why Performance Analyzer doesn’t do that already (we welcome your pull requests!). Performance Analyzer is designed as a lightweight co-process for Elasticsearch. If your Elasticsearch cluster is in trouble, it might not be able to respond to requests, and Kibana might be down. If you adopt the sample code, I recommend that you send the data to a different Open Distro for Elasticsearch cluster to avoid this issue.

You can follow along with the sample code I published in our GitHub Community repository. The code is in the pa-to-es folder when you clone the repository. You can find information about the other code samples in past blog posts.

Code overview

The pa-to-es folder contains three Python files (Python version 3.x required) and an Elasticsearch template that sets the type of the @timestamp field to date. The main application consists of an infinite loop that calls Performance Analyzer, pulling metrics, parsing those metrics, and sending them to Elasticsearch:

    while 1:
        print('Gathering docs')
        docs = MetricGatherer().get_all_metrics()
        print('Sending docs: ', len(docs))
        # (the docs are then handed to MetricWriter to be written to Elasticsearch)

As you can see, the application supplies two classes, MetricGatherer and MetricWriter, to communicate with Elasticsearch. MetricGatherer.get_all_metrics() loops through the working metric descriptions, calling get_metric() for each.

To get the metrics, MetricGatherer generates a URL of the form:


(You can get more details on Performance Analyzer’s API in our documentation.) The metric descriptions are namedtuples, providing metric/dimension/aggregation trios. It would be more efficient to request multiple metrics at once, but I found parsing the results so much more complicated that it outweighed any performance gains. To determine the metric descriptions, I generated all of the possible combinations of metric/dimension/aggregation, tested them, and retained the working descriptions. It would be great to build an API that exposes valid combinations rather than working from a static set of descriptions (did I mention, we welcome all pull requests?).
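As a rough sketch of the URL generation, a request for a single metric/dimension/aggregation trio might be built as below. The parameter names and default port are assumptions drawn from the Performance Analyzer documentation, and this MetricDescription namedtuple is illustrative, not the actual pa-to-es code:

```python
from collections import namedtuple
from urllib.parse import urlencode

# Hypothetical shape for one metric description; the real pa-to-es code
# keeps a static list of working metric/dimension/aggregation trios.
MetricDescription = namedtuple('MetricDescription',
                               ['metric', 'dimension', 'aggregation'])

def metric_url(desc, host='localhost', port=9600):
    """Build a Performance Analyzer metrics URL for one description."""
    params = urlencode({
        'metrics': desc.metric,
        'dim': desc.dimension,
        'agg': desc.aggregation,
        'nodes': 'all',
    })
    return f'http://{host}:{port}/_opendistro/_performanceanalyzer/metrics?{params}'

print(metric_url(MetricDescription('CPU_Utilization', 'Operation', 'avg')))
```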

MetricGatherer uses result_parse.ResultParser to interpret the output of the call to Performance Analyzer. The output JSON consists of one element per node. Within that element, it returns a list of fields, followed by a set of records:

  "XU9kOXBBQbmFSvkGLv4iGw": {
    "timestamp": 1558636900000,
    "data": {
      "fields": [ ... ],
      "records": [ ... ]
    }
  }, ...

ResultParser zips together the separated field names and values and generates a dict, skipping empty values. The records generator function uses this dict as the basis for its return, adding the timestamp from the original return body. records also adds the node name and the aggregation as fields in the dict to facilitate visualizing the data in Kibana.
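Conceptually, that zip-and-skip step can be sketched as follows. This is a simplified, hypothetical stand-in for result_parse.ResultParser, assuming each node element carries a data object with fields (column descriptions) and records (rows), per the Performance Analyzer response format:

```python
def to_docs(node_id, node_data, extra=None):
    """Zip Performance Analyzer 'fields' and 'records' into per-record dicts,
    skipping empty values -- a simplified sketch of what ResultParser does."""
    field_names = [f['name'] for f in node_data['data']['fields']]
    for record in node_data['data']['records']:
        doc = {name: value for name, value in zip(field_names, record)
               if value is not None and value != ''}
        doc['@timestamp'] = node_data['timestamp']  # carried over from the response body
        doc['node'] = node_id                       # added to ease Kibana visualization
        if extra:
            doc.update(extra)                       # e.g., the aggregation name
        yield doc

sample = {
    'timestamp': 1558636900000,
    'data': {
        'fields': [{'name': 'Operation'}, {'name': 'CPU_Utilization'}],
        'records': [['bulk', 0.19], ['search', None]],
    },
}
docs = list(to_docs('XU9kOXBBQbmFSvkGLv4iGw', sample, extra={'agg': 'avg'}))
```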

MetricWriter closes the loop, taking the collection of dicts, each of which will be written as a document to Elasticsearch, building a _bulk body, and POSTing that batch to Elasticsearch. As written, the code is hard-wired to send the _bulk to https://localhost:9200. In practice, you’ll want to change the output to go to a different Elasticsearch cluster. The authentication for the POST request is admin:admin – be sure to change that when you change your passwords for Open Distro for Elasticsearch.
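The _bulk body construction can be sketched like this (a hypothetical simplification of MetricWriter; the index name is made up). The _bulk format alternates an action line with a document line, in newline-delimited JSON, with a trailing newline:

```python
import json

def bulk_body(docs, index='pa-metrics'):
    """Build an NDJSON _bulk request body: an action line, then a source line,
    for each document, terminated by a trailing newline."""
    lines = []
    for doc in docs:
        lines.append(json.dumps({'index': {'_index': index}}))  # action line
        lines.append(json.dumps(doc))                           # document line
    return '\n'.join(lines) + '\n'

body = bulk_body([{'@timestamp': 1558636900000, 'CPU_Utilization': 0.19}])
# POST this body to the target cluster's /_bulk endpoint with
# Content-Type: application/x-ndjson and your (changed!) credentials.
```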

Add the template to your cluster

You can run the code as written, and you will see data flow into your Open Distro for Elasticsearch cluster. However, the timestamp returned by Performance Analyzer is a long int, Elasticsearch will set the mapping as number, and you won’t be able to use Kibana’s time-based functions for the index. I could truncate the timestamp or rewrite it so that the mapping is automatically detected. I chose instead to set a template.
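For reference, the long int is epoch milliseconds; a quick standard-library sketch shows that the sample timestamp from the response above is a perfectly good date once Elasticsearch is told how to interpret it:

```python
from datetime import datetime, timezone

millis = 1558636900000  # as returned by Performance Analyzer
# Without a template, Elasticsearch maps this long as a number. Interpreted
# as epoch milliseconds, it is an ordinary date:
dt = datetime.fromtimestamp(millis / 1000, tz=timezone.utc)
print(dt.isoformat())  # 2019-05-23T18:41:40+00:00
```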

The below template (template.json in the pa-to-es folder) sets the field type for @timestamp to date. You need to send this template to Elasticsearch before you send any data, which auto-creates the index. (If you already ran pa-to-es, don’t worry: just DELETE any indices that it created.) You can use Kibana’s developer pane to send the template to Elasticsearch.

Navigate to https://localhost:5601. Log in, dismiss the splash screen, and select the DevTools tab. Click Get to work. Copy-paste the below text into the interactive pane and click the triangle to the right. (Depending on the version of Elasticsearch you’re running, you may receive a warning about type removal. It’s OK to ignore this warning.)

POST _template/pa
{
    "index_patterns": ["pa-*"],
    "settings": {
        "number_of_shards": 1
    },
    "mappings": {
        "log": {
            "properties": {
                "@timestamp": {
                    "type": "date"
                }
            }
        }
    }
}
Monitoring Elasticsearch

I ran esrally with the http_logs track against my Open Distro for Elasticsearch cluster, and also ran the pa-to-es code to gather metrics. I then used the data to build a Kibana dashboard for monitoring my cluster.

A kibana dashboard with metrics gathered by Open Distro for Elasticsearch's Performance Analyzer plugin


The metrics stored in Elasticsearch documents have a single metric/dimensions/aggregation combination, giving you freedom to build Kibana visualizations at the finest granularity. For example, my dashboard exposes CPU utilization down to the Elasticsearch operation level, the disk wait time on each node, and read and write throughput for each operation. In a future post, I will dive deep on building out dashboards and other visualizations with Performance Analyzer data.

from AWS Open Source Blog

Firecracker Open Source Update May, 2019


Firecracker logo.

It’s been six months since we launched Firecracker at re:Invent, and we’ve been thrilled by the reception that the open source community has given us. Over these six months, we have merged 87 commits from 30+ external contributors into the Firecracker master branch (representing ~24% of all commits in that time span). These contributions covered device model virtio spec compliance, support for CPUs without one-GiB hugepages, and memory model improvements, as well as improvements in documentation, API specification, testing, and bug fixes. Open source users have also filed about a dozen bug reports, and we’ve received community feedback for several RFCs. The Firecracker repository on GitHub now has over 450 GitHub forks, and there are over 900 people on the Firecracker Slack. We’ve also been excited to see several other open source teams working in the containers/serverless compute space integrating Firecracker with their projects, including Kata Containers, UniK, and OSv. Intel also recently launched “Cloud Hypervisor,” leveraging code from rust-vmm, Firecracker, and crosvm.

In this post, we’ll report on what’s going on now in the Firecracker repo and the milestones that we’ve already planned for the next few months, and, most importantly, get your input for future roadmap items.

Current work

So far in 2019, we have worked on defense in depth, ease of use, AMD and Arm CPU support, and vsock. To improve isolation in a high-density, multi-tenant scenario, we added the option to dynamically adjust rate limiters, added a metric for guests spoofing their MAC address, and defaulted Firecracker to the strictest seccomp level. Usability improvements for engineers working with Firecracker include better API documentation, more dev tools, and clearer error/panic information. With Firecracker 0.16.0, we released alpha AMD support (we still need to get automated testing up and running) and Arm is coming along well. We also spent some time thinking about the right way to support vsock in Firecracker (we’ve received a lot of feedback from the open source community, thanks!), and that implementation is now in progress.

Ahead in 2019

Continuing to raise the security bar for Firecracker, we want to apply fuzzing to the device model and support block device encryption as an additional layer of isolation in specific use cases. We’re going to continue iterating on our continuous integration system to make test runs more granular, to make adding new platforms easy, and to improve test run report clarity. We would also like to know whether the GitHub releases (which include code and binaries) fit existing consumers. Should we maintain something like a Snap package? Are there other forms of software distribution that we should look at?

Next, we want to figure out how to support inference acceleration. While GPU passthrough and oversubscribed serverless compute workloads don’t mix well technically, accelerated inference computation in a function or in a container-based microservice seems like a natural use case. We’d love to hear your feedback and ideas on this one.

Finally, as much as we had hoped to one day be a part of the versioning revolution vanguard, once we close on AMD and Arm CPU support, vsock, and the CI, as well as settle on an approach for API stability, we will declare Firecracker v1.

Your ideas and use cases!

At AWS, 90-95% of our roadmap is driven by customers. This is the time of the year when we plan for 2020, and we’re eager to take in proposals from everyone in the open source community. So please tell us what you want to see in Firecracker by adding your ideas, asks, and use cases to this 2020 Roadmap RFC. Of course we’ve already given the existing feature requests some thought, but we’ll be happy if you come up with more! While we encourage everyone to think big, we will ensure that the resulting roadmap aligns with our mission of enabling secure, multi-tenant, minimal-overhead execution of container and function workloads.

Next post

We’ll keep in touch with more posts like this one. Our next blog post will discuss our tenets, talk about how rust-vmm relates to Firecracker’s future, and present the Roadmap we set for 2020.

from AWS Open Source Blog

Use JSON Web Tokens (JWTs) to Authenticate in Open Distro for Elasticsearch and Kibana


Token-based authentication systems are popular in the world of web services. They provide many benefits, including (but not limited to) security, scalability, statelessness, and extensibility. With Amazon’s Open Distro for Elasticsearch, users now have an opportunity to take advantage of the numerous security features included in the Security plugin. One such feature is the ability to authenticate users with JSON Web Tokens (JWT) for a single sign-on experience. In this post, I walk through how to generate valid JWTs, configure the Security plugin to support JWTs, and finally authenticate requests to both Elasticsearch and Kibana using claims presented in the tokens.


To work through this example, clone or download our Community repository. The jwt-tokens directory contains the sample code and configuration for you to follow along with this post. There are two config files – kibana.yml, and config.yml – a docker-compose.yml, and a token-gen directory with java code and a .pom to build it.

Generating JWTs

A JWT is composed of three Base64-encoded parts: a header, a payload, and a signature, concatenated with a period (.). Ideally, JWTs are provided by an authentication server after validating credentials provided by the user. The user sends this token as a part of every request, and the web service allows or denies the request based on the claims presented in the token. For the purposes of this post, you will generate one such token yourself, signed with a shared secret using the HS256 algorithm.
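Before turning to the Java code that follows, the three-part structure itself can be sketched with nothing but the Python standard library. This is illustrative only (it is not the jjwt-based sample this post actually uses, and the secret below is a placeholder):

```python
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWTs require."""
    return base64.urlsafe_b64encode(data).rstrip(b'=').decode()

def make_jwt(claims: dict, secret: bytes) -> str:
    """Assemble header.payload.signature using HMAC-SHA256 (HS256)."""
    header = b64url(json.dumps({'alg': 'HS256', 'typ': 'JWT'}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f'{header}.{payload}'.encode()
    signature = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f'{header}.{payload}.{signature}'

secret = b'a-shared-secret-you-would-generate-randomly'  # placeholder
token = make_jwt({'sub': 'admin', 'roles': 'admin',
                  'exp': int(time.time()) + 960}, secret)
print(token.count('.'))  # 2 -- i.e., three dot-separated parts
```

Anyone holding the same shared secret can recompute the signature over header.payload and compare it to the third part, which is exactly what the Security plugin does with the signing_key you configure below.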

Let’s start by analyzing some sample Java code that generates JWTs valid for 16 minutes. This code uses the jjwt library to generate the tokens and signing keys:

  1 import io.jsonwebtoken.Jwts;
  2 import io.jsonwebtoken.SignatureAlgorithm;
  3 import java.security.Key;
  4 import io.jsonwebtoken.security.Keys;
  5 import java.util.Date;
  6 import java.util.HashMap;
  7 import io.jsonwebtoken.io.Encoders;
  9 public class JWTTest {
 10     public static void main(String[] args) {
 11         Key key = Keys.secretKeyFor(SignatureAlgorithm.HS256);
 12         Date exp = new Date(System.currentTimeMillis() + 1000000);
 13         HashMap<String,Object> hm = new HashMap<>();
 14         hm.put("roles","admin");
 15         String jws = Jwts.builder()
 16                 .setClaims(hm)
 17                 .setIssuer("https://localhost")
 18                 .setSubject("admin")
 19                 .setExpiration(exp)
 20                 .signWith(key).compact();
 21         System.out.println("Token:");
 22         System.out.println(jws); 
 23         if(Jwts.parser().setSigningKey(key).parseClaimsJws(jws).getBody().getSubject().equals("test")) {
 24             System.out.println("test");
 25         }
 26         String encoded = Encoders.BASE64.encode(key.getEncoded());
 27         System.out.println("Shared secret:");
 28         System.out.println(encoded);
 29     }
 30 }
  • Line 11 gives us a random signing key based on the HMAC-SHA256 algorithm. This is the signing_key that the Security plugin uses when verifying JWT tokens; the Security plugin automatically identifies the algorithm. Since we are using a symmetric-key algorithm, this signing key is the Base64-encoded shared secret (Line 28). If we were using an asymmetric algorithm such as RSA or ECDSA, the signing key would be the public key.
  • Line 13-14 creates a claim that maps the key roles to the value admin.
  • Line 12 generates a Date roughly 16 minutes (1,000,000 milliseconds) from the current time. Line 19 uses this date as the token’s expiration in the Jwts.Builder.
  • Line 20 signs the JWT token using the signing_key created on Line 11.

You need Apache Maven to compile and run the sample code. I used Homebrew to install Maven 3.6.1 with the command:

$ brew install maven

From the token-gen directory, build and run the code:

$ cd token-gen
$ mvn clean install
$ java -jar target/jwt-test-tokens-1.0-SNAPSHOT-jar-with-dependencies.jar

Shared Secret:

Make sure to copy the token and shared secret; you’ll need them in a minute. You can also find these commands in the README in the token-gen directory.

Configuring the security plugin to use JWTs

Open Distro for Elasticsearch’s Security Plugin contains a configuration file that specifies authentication type, challenge, and various other configuration keys that must be present in the payload of the JWT for the request to be authenticated. You can also specify an authentication backend if you want further authentication of the request. When you start up the container, you will override the default configuration with the file named config.yml in the jwt-tokens directory.

Open jwt-tokens/config.yml in your favorite editor and change it to read as below:

1   jwt_auth_domain:
2     enabled: true
3     http_enabled: true 
4     transport_enabled: true
5     order: 0
6     http_authenticator:
7       type: jwt
8       challenge: false
9       config:
10        signing_key: "usuxqaUmbbe0VqN+Q90KCk5sXHCfEVookMRyEXAMPLE="
11        jwt_header: "Authorization"
12        jwt_url_parameters: null
13        roles_key: "roles"
14        subject_key: "sub"
15    authentication_backend:
16      type: noop
  • Line 2 enables this domain to use JWT for authentication.
  • Line 7 chooses JWT as the authentication type.
  • Line 8 sets the key challenge to “false.” A challenge is not required here, since the JWT token contains everything we need to validate. Line 15 is set to noop for the same reason.
  • Line 10 sets the signing_key to the Base64-encoded shared secret that we generated in the Java code above. Note: be sure to replace the secret key with the secret key that you generated in the prior section.
  • Line 11 is the HTTP header in which the token is transmitted. You will be using the authorization header with the bearer scheme. The “Authorization” header is used by default, but you could also pass the JWT using a URL parameter.
  • Line 13 specifies the key that stores user roles as a comma-separated list. In our case, we are only specifying admin.
  • Line 14 specifies the key that stores the username. If this key is missing, we just get the registered subject claim. In our code above, we are just setting the subject claim.
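To sanity-check what the plugin will see against the roles_key and subject_key settings above, you can decode a token's payload without verifying it. This small standard-library sketch decodes the payload segment of the sample token used in the test commands later in this post:

```python
import base64, json

def jwt_claims(token: str) -> dict:
    """Decode a JWT's payload (the second dot-separated part) WITHOUT verifying it."""
    payload = token.split('.')[1]
    padded = payload + '=' * (-len(payload) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(padded))

# Payload segment from the sample token in the curl commands below:
payload = ('eyJyb2xlcyI6ImFkbWluIiwiaXNzIjoiaHR0cHM6Ly9sb2NhbGhvc3QiLCJzdWIiOiJh'
           'ZG1pbiIsImV4cCI6MTU1NDc1Nzk3M30')
claims = jwt_claims('x.' + payload + '.y')  # header/signature irrelevant for decoding
print(claims['roles'], claims['sub'])       # admin admin
```

Never trust unverified claims for authorization; the Security plugin verifies the signature with the signing_key before reading them.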

If you are new to the world of Docker and Open Distro for Elasticsearch, I highly recommend getting started with the Open Distro for Elasticsearch documentation. Throughout this tutorial, I use a cluster with one Elasticsearch node and one Kibana node.

Kibana Changes

To simulate how Kibana would work if we used a standard token provider, you just need to add one additional line in kibana.yml. Edit jwt-tokens/kibana.yml and add:

opendistro_security.auth.type: "jwt"

Run Elasticsearch, and Kibana

You will need a running Docker environment to follow along. I use Docker Desktop for Mac. You can find instructions on setting it up and running it in the post on how to download and configure Docker Desktop (use the docker-compose.yml from the jwt-tokens directory instead of the one in that post). From the jwt-tokens directory, run the following command:

$ docker-compose up

After the images download and the cluster starts, run docker ps in a new terminal. You should see something similar to the output below: two containers, with one running the Elasticsearch image and the other running the Kibana image.

$ docker ps
CONTAINER ID        IMAGE                                              COMMAND                  CREATED             STATUS              PORTS                                                      NAMES
63c3e9df19ac        amazon/opendistro-for-elasticsearch-kibana:0.9.0   "/usr/local/bin/kiba…"   8 seconds ago       Up 7 seconds>5601/tcp                                     odfe-kibana
0aa5316ffbc7        amazon/opendistro-for-elasticsearch:0.9.0          "/usr/local/bin/dock…"   8 seconds ago       Up 7 seconds>9200/tcp,>9600/tcp, 9300/tcp   odfe-node1

Reinitializing the security index [Optional]

If you ran docker-compose only after you edited config.yml, you can skip this section. If you ran docker-compose at any time before you edited config.yml, you will need to reinitialize the security index and ensure that requests are being authenticated.

First, find your Elasticsearch container with docker ps:

$ docker ps
533f03ee0fdc        amazon/opendistro-for-elasticsearch:0.9.0          "/usr/local/bin/dock…"   2 days ago          Up 20 seconds>9200/tcp,>9600/tcp, 9300/tcp   odfe-node1
3a2c4a582165        amazon/opendistro-for-elasticsearch-kibana:0.9.0   "/usr/local/bin/kiba…"   2 days ago          Up 20 seconds>5601/tcp                                     odfe-kibana

Copy the CONTAINER ID for the amazon/opendistro-for-elasticsearch container. In my case, the container ID is 533f03ee0fdc. You can get Bash access to that container by running:

$ docker exec -it 533f03ee0fdc /bin/bash

Make sure to use your container ID in the above command. Reinitialize the security index and exit:

$ plugins/opendistro_security/tools/ -f plugins/opendistro_security/securityconfig/config.yml -icl -nhnv -cert config/kirk.pem -cacert config/root-ca.pem -key config/kirk-key.pem -t config
Open Distro Security Admin v6
Will connect to localhost:9300 ... done
Elasticsearch Version: 6.5.4
Open Distro Security Version:
Connected as CN=kirk,OU=client,O=client,L=test,C=de
Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW clusterstate ...
Clustername: odfe-cluster
Clusterstate: YELLOW
Number of nodes: 1
Number of data nodes: 1
.opendistro_security index already exists, so we do not need to create one.
Populate config from /usr/share/elasticsearch
Will update 'security/config' with plugins/opendistro_security/securityconfig/config.yml
SUCC: Configuration for 'config' created or updated
Done with success
$ exit

Test your changes

Now you can test out some basic commands by adding the authorization header. Be sure to replace the token in the below commands with the token you generated above:

$ curl -XGET https://localhost:9200/_cat/nodes -H "Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJyb2xlcyI6ImFkbWluIiwiaXNzIjoiaHR0cHM6Ly9sb2NhbGhvc3QiLCJzdWIiOiJhZG1pbiIsImV4cCI6MTU1NDc1Nzk3M30.KY5gC4yrBXXYYcaEJOl-xyiEr98h9Sw9dIWwEXAMPLE" --insecure
37 38 3 0.03 0.11 0.09 mdi * WTNYA_5
$ curl -XGET https://localhost:9200/_cluster/health\?pretty -H "Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJyb2xlcyI6ImFkbWluIiwiaXNzIjoiaHR0cHM6Ly9sb2NhbGhvc3QiLCJzdWIiOiJhZG1pbiIsImV4cCI6MTU1NDc1Nzk3M30.KY5gC4yrBXXYYcaEJOl-xyiEr98h9Sw9dIWwlzJYpBg" --insecure
{
  "cluster_name" : "odfe-cluster",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 7,
  "active_shards" : 7,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 5,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 58.333333333333336
}
$ curl -XGET https://localhost:9200/_opendistro/_security/authinfo\?pretty -H "Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJyb2xlcyI6ImFkbWluIiwiaXNzIjoiaHR0cHM6Ly9sb2NhbGhvc3QiLCJzdWIiOiJhZG1pbiIsImV4cCI6MTU1NDc1Nzk3M30.KY5gC4yrBXXYYcaEJOl-xyiEr98h9Sw9dIWwlzJYpBg" --insecure
{
  "user" : "User [name=admin, roles=[admin], requestedTenant=null]",
  "user_name" : "admin",
  "user_requested_tenant" : null,
  "remote_address" : "",
  "backend_roles" : [ "admin" ],
  "custom_attribute_names" : [ "attr.jwt.iss", "attr.jwt.sub", "attr.jwt.exp", "attr.jwt.roles" ],
  "roles" : [ "all_access", "own_index" ],
  "tenants" : { "admin_tenant" : true, "admin" : true },
  "principal" : null,
  "peer_certificates" : "0",
  "sso_logout_url" : null
}

Then you can issue a request to Kibana from the terminal and note the successful response:

$ curl -XGET http://localhost:5601 -H "Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJyb2xlcyI6ImFkbWluIiwiaXNzIjoiaHR0cHM6Ly9sb2NhbGhvc3QiLCJzdWIiOiJhZG1pbiIsImV4cCI6MTU1MzY0Mjc1NX0.2RVy0VEObwduF9nNZas498LTJMRLC9luTuebEXAMPLE" -i

HTTP/1.1 302 Found
location: /app/kibana
kbn-name: kibana
set-cookie: security_storage=Fe26.2**a86a495463a9ed2aef99e9499025b000888bc70232d006765c9990f8c9d7412*viOmkphhLLIDeBTxX9_OkQ*lIBpboN6gQ07QvwY7mMp-48IsrvI0qtfaRR8_VmPesYmlqlNizId2smn-kXtIJdsmZBpz7y4WLJzmqP0hKKCBAAJ9Bccj-fVh5QJdHW6mWEhuS870VlB9PUMZAnQ8ju6D8Gs-70A16rodBDSI4b601EhJET4vtMObTFmvYkiavqKvc9CPbwMpHRQdIKwX9AzSjbekMC8CSn1PgzMbtNijYNFd3sLZHrDxrqTSQijm8M**ba624f98f91081024b49264a08c692287b30bca4f185aa8925c1bb238cdf27ef*fc9z6yinUj2Xp920Iy-GoKdVzO5G4aZRsxQWi_bVH-Y; Path=/
set-cookie: security_preferences=Fe26.2**a2791807692cd418aa644804fd0e6e5cd33421a899e0797d8a97ec4e7f2cbf0*guZ5n6zMcCwylCPOazyyew*1n43XcDV1NcGvgl-VwD07njHLkxn-VdgQNVMk5ZQSsw**f25a10407839cc2869b06826eb5459f166baf6fcea11df6b1f4a316152fec3e4*K5wr95D7cVoetpvEFjdzjSN-mgvBEU9tWpx6QiLgEuE; Max-Age=2217100485; Expires=Mon, 27 Jun 2089 17:56:50 GMT; Path=/
set-cookie: security_authentication=Fe26.2**5ca6f12884a00a406f89887bb91f33ee7a68f22c815996a9adbda934698364d*OuII1jATnWfYzaHIv4_HvQ*qoTlwVqRvpDzkWmq-JYZbXpSbEJ6DyG5qhmNenM0GB6vbGEcnkXmpUFvOICkAyRuzmKwl9Uut1GYM98TLwhTZbzFb6Z1d5Sb4MOpk6DJNFjuokIm0u9tqsCwCGMEO_avmosVy4gceAluSX-7vN-vC461jt2B3_DIbyeREjPLtjr91a2I95nGQRir_-4cypkjUaS3Blub1ZC7fNnkBcK5POvo-nKTXJmx5KQx4O_6zVc3vFfoQLJ7_AUrLAID_htMHMv5o7_qn1oMHP-LTr5zvO4iDLlY1UgBJCmikpMatxPg8ophKxWkMRuIdo4UaZEjrzXwQPJtYBmpJxwQtolJQB5jwOnNNVqtUeiI7sWitHM**1c4cf336b71a513045bf0bfe50ff96447c213f70dfd3745d713e57235a7edff9*fLp9DLSMhgKHjOIJ8VDHMbVI9Z7W56Velx4Pi5STK4s; Max-Age=3600; Expires=Tue, 26 Mar 2019 21:42:05 GMT; HttpOnly; Path=/
cache-control: no-cache
content-length: 0
connection: close
Date: Tue, 26 Mar 2019 20:42:05 GMT


Congratulations! You have created JWT tokens for authenticating and controlling access to your Open Distro for Elasticsearch cluster. You modified the Security Plugin’s configuration to accept JWTs. You ran your modified Elasticsearch and Kibana in your containers, and successfully sent queries.

Have an issue or question? Want to contribute? You can get help and discuss Open Distro for Elasticsearch on our forums. You can file issues here.

from AWS Open Source Blog

Build Your Own: Open Distro for Elasticsearch Build Scripts Now Available


Open Distro for Elasticsearch logo with builder tools, suggesting that you can now build Open Distro yourself.

Want to craft your own Docker images using Open Distro for Elasticsearch build scripts? Or build your RPM or Debian packages to customize your own Open Distro for Elasticsearch stack? Our build scripts for Elasticsearch and for Kibana are now available for you to do just that.

As with the rest of Open Distro for Elasticsearch, you can use, modify and contribute to these scripts. Check them out, build your images, and modify and adapt them for your use case!


If you have any questions, join and ask on the project community discussion forums.

Found a bug, need a feature, or want to contribute more examples and documentation?

Join in on GitHub to enhance technical documentation, add example code, request features, and submit bug reports. Raise a pull request to contribute build scripts for other types of packages you’re building. Write a blog post on your use case, and we’d be happy to feature it in our project news section.

We welcome your participation on the project! See you on the forums and code repos!

from AWS Open Source Blog

Running Open Distro for Elasticsearch on Kubernetes


This post is a walk-through on deploying Open Distro for Elasticsearch on Kubernetes as a production-grade deployment.

Ring is an Amazon subsidiary specializing in the production of smart devices for home security. With its signature product, the Ring Video Doorbell and Neighborhood Security feed for many major cities, Ring is pursuing a mission to reduce crime in communities around the world. At Ring, we needed a scalable solution for storing and querying heavy volumes of security log data produced by Ring devices.

We had a few requirements for this log aggregation and querying platform. These included user authentication and Role-based Access Control (RBAC) for accessing logs, and SAML support for integrating authentication with our existing Single Sign-On infrastructure. We also required all communication to and within the platform to be encrypted in transit, as logs may contain sensitive data. Our final requirement was a monitoring system that could be used for security alerting, based on the incoming log data.

Open Distro for Elasticsearch provides several methods of authentication ranging from HTTP Basic authentication to Kerberos ticket-based authentication. Open Distro for Elasticsearch also provides a rich set of role-based access control (RBAC) features that allow locking down access to ingested log data at a very granular level. This makes securing our central logging platform very simple.

In addition, Open Distro for Elasticsearch provides SAML support for Kibana, the open source front-end UI for Elasticsearch. This SAML support allows for integrating the authentication with several Identity Providers such as AWS Single Sign-On or Okta. All communication to, from, and within the platform uses TLS encryption, which fulfills our encryption requirements as well.

Lastly, Open Distro for Elasticsearch offers alerting and monitoring services that allow setting up of custom security alerts and system health monitoring. Open Distro for Elasticsearch answered many of our needs for Ring’s Security Observability infrastructure.

As part of Ring’s Security Operations, we were already using Amazon Elastic Container Service for Kubernetes (Amazon EKS) for deploying and maintaining a Kubernetes cluster responsible for housing our security tooling.

The team decided to deploy Open Distro for Elasticsearch in Kubernetes as a scaled-out deployment. Kubernetes is a very popular container orchestration platform and, as our logging requirements grow, Kubernetes allows us to continue scaling up the platform with ease and agility. It also reduces reliance on a configuration management infrastructure.

In this post, we’ll share some lessons we learned which we hope will help others in solving similar challenges.


This walk-through is focused on a deployment in Amazon EKS, the managed Containers-as-a-Service offering from AWS.

Please ensure that all dependent Kubernetes plugins are deployed in the cluster being used, such as external-dns or KIAM.

Ensure access to the cluster using the kubectl binary and corresponding kubeconfig credentials file.

Annotations for external-dns will not work if the external-dns service is not deployed. You can deploy it using the community-developed Helm chart.

Annotations for pod IAM roles will not work if KIAM is not deployed. You can deploy KIAM using its community-developed Helm chart.

This deployment requires TLS certificates to be bootstrapped, as well as an existing Certificate Authority for issuing said certificates. See our earlier post on how to Add Your Own SSL Certificates to Open Distro for Elasticsearch for more information on generating your own certificates.

Project plan

Based on our previous experience deploying the community-developed version of Elasticsearch on Kubernetes, I decided to follow the same pattern with Open Distro for Elasticsearch.

This is the architecture we aimed for:


architecture for a Ring Security EKS cluster.

We decided to use Amazon EKS for a managed Kubernetes cluster, bearing in mind the following considerations:

  • Ring Security already has a running Kubernetes cluster in Amazon EKS with the ability to scale worker nodes up or down for security tooling, which can easily be used to host this Open Distro for Elasticsearch cluster.
  • The cluster consists of eight m5.2xlarge instances used as worker nodes, making it large enough to host our Elasticsearch cluster.
  • Amazon EKS saves us the hassle of managing our own Kubernetes API server by providing a managed one, so patching and security for Kubernetes is very simple.

We started off with an eight-node test deployment that could eventually be scaled to a production deployment.

We also decided to use the official Docker images provided by the Open Distro team, to save us the trouble of managing our own container images and container registry.

The Elasticsearch cluster we planned would consist of three master nodes, two client/coordinating nodes, and three data nodes.

We chose the Kubernetes resource types for the respective Elasticsearch node types as follows:

  • Deployment for master nodes (stateless)
  • Deployment for client nodes (stateless)
  • StatefulSet for data nodes (stateful)

Our cluster’s Elasticsearch API is fronted by an AWS Network Load Balancer (NLB), deployed using the Kubernetes Service resource type.

We decided to use Kubernetes taints and the pod anti-affinity API spec to ensure that Elasticsearch master, client, and data nodes are scheduled on separate EC2 worker nodes. In conjunction, we used the Kubernetes tolerations API spec so that Elasticsearch master and client nodes can run on worker nodes dedicated to those roles.
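As a sketch, a pod spec combining these scheduling primitives might look like the following. The taint key/value (dedicated=es-master) and the pod labels here are illustrative assumptions, not values taken from our manifests:

    # Illustrative only: taint key/value and label names are assumptions
    spec:
      tolerations:
      - key: dedicated            # worker nodes tainted "dedicated=es-master:NoSchedule"
        operator: Equal
        value: es-master
        effect: NoSchedule
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                role: master      # keep pods carrying this label off the same worker node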

Creating initial resources

Start by cloning the Open Distro for Elasticsearch community repository. This repository contains Kubernetes manifests for a sample deployment of Open Distro for Elasticsearch. The files are named based on the resource types they create, starting with a digit that indicates which file takes precedence upon deployment.

From the root of this repository, navigate to the open-distro-elasticsearch-kubernetes folder:

$ cd open-distro-elasticsearch-kubernetes

Once there, navigate to the elasticsearch subfolder using the command cd elasticsearch. This folder contains our sample Open Distro for Elasticsearch deployment on Kubernetes.

Next, create a Kubernetes namespace to house the Elasticsearch cluster assets, using the 10-es-namespace.yml file:

$ kubectl apply -f 10-es-namespace.yml

Create a discovery service using the Kubernetes Service resource type in the 20-es-svc-discovery.yml file to allow master nodes to be discoverable over broadcast port 9300:

$ kubectl apply -f 20-es-svc-discovery.yml
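The discovery service is typically a headless Service that exposes the transport port; a minimal sketch (the metadata names are assumptions, so check them against the sample manifest) looks like:

    # Sketch of a headless discovery Service; names are assumptions
    apiVersion: v1
    kind: Service
    metadata:
      name: elasticsearch-discovery
      namespace: elasticsearch
    spec:
      clusterIP: None        # headless: DNS resolves directly to the master pod IPs
      ports:
      - port: 9300
        protocol: TCP
      selector:
        role: master         # match the master node pods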

Create a Kubernetes ServiceAccount as a requirement for future StatefulSets using file 20-es-service-account.yml:

$ kubectl apply -f 20-es-service-account.yml

Create a Kubernetes StorageClass resource for AWS Elastic Block Storage drives as gp2 storage (attached to data nodes) using file 25-es-sc-gp2.yml:

$ kubectl apply -f 25-es-sc-gp2.yml

Create a Kubernetes ConfigMap resource type (which will be used to bootstrap the relevant Elasticsearch configs such as elasticsearch.yml and logging.yml onto the containers upon deployment) using file 30-es-configmap.yml:

$ kubectl apply -f 30-es-configmap.yml

This ConfigMap resource contains two configuration files required by Open Distro for Elasticsearch, elasticsearch.yml and logging.yml. These files have been supplied with settings that meet our requirements; you may need to change them depending on your own specific deployment requirements.

API ingress using Kubernetes service and AWS Network Load Balancer

Deploy a Kubernetes Service resource for an ingress point to the Elasticsearch API.
Create the resource using file 35-es-service.yml:

$ kubectl apply -f 35-es-service.yml

This resource type uses the annotations key to create a corresponding internal Network Load Balancer (NLB) in AWS.

The annotations section defines key/value pairs for the configuration settings of the AWS Network Load Balancer that this manifest sets up. The annotation keys below are the standard Kubernetes AWS load balancer annotations; verify them against the sample manifest:

    annotations:
      # Service external-dns has to be deployed for this A record to be created in AWS Route53
      external-dns.alpha.kubernetes.io/hostname: <DNS_HOSTNAME_FOR_ELASTICSEARCH_API>
      # Defined ELB backend protocol as HTTPS to allow connection to Elasticsearch API over https
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: https
      # Load Balancer type that will be launched in AWS, ELB or NLB
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
      # ARN of ACM certificate registered to the deployed ELB for handling connections over TLS
      # ACM certificate should be issued to the DNS hostname defined earlier
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:111222333444:certificate/c69f6022-b24f-43d9-b9c8-dfe288d9443d"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "60"
      # Annotation to create internal only ELB
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"

The external-dns.alpha.kubernetes.io/hostname key is used to set up the AWS Route53 DNS A record entry for this newly created NLB.

The other key/value pairs in the annotations spec define configuration options for the load balancer being spun up, including its backend protocol, the TLS certificate sourced from AWS Certificate Manager (ACM), and whether the NLB is external or internal.

The annotations within the Kubernetes manifest are commented to clarify their respective purposes.

Open Distro for Elasticsearch requires us to open three ports on the ingress point: ports 9200 (for HTTPS/REST access), 9300 (for transport layer access), and 9600 (for accessing metrics using performance analyzer or other services). We open these ports within the same Kubernetes manifest.
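Within the Service spec, the ports section would look something like the following (the port names are assumptions):

    # Ports section of the Service spec; port names are assumptions
      ports:
      - name: https
        port: 9200
      - name: transport
        port: 9300
      - name: metrics
        port: 9600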

Security and TLS configuration

This section deals with bootstrapping TLS certificates using a Kubernetes Secrets object.

Create a Kubernetes Secrets resource which will be used to bootstrap the relevant TLS certificates and private keys onto the containers upon deployment using file 35-es-bootstrap-secrets.yml:

$ kubectl apply -f 35-es-bootstrap-secrets.yml

Required certificates include one certificate chain for the issued certificate defined in the elk-crt.pem portion of the Secrets object, one corresponding private key for the issued certificate in the elk-key.pem portion, and finally your root CA’s certificate to add it to the trusted CA chain in the elk-root-ca.pem portion.

You also need to bootstrap admin certificates for using the cluster security initialization script provided by default, and for setting up Elasticsearch user passwords. These certificates correspond to the same certificate and key types mentioned earlier for configuring TLS.

The portions of the Secrets object dealing with admin certificates are admin-crt.pem for the trusted certificate chain, admin-key.pem for the corresponding private key for this certificate, and admin-root-ca.pem for the root CA(s) certificate.

Our certificate data has, of course, been redacted from the 35-es-bootstrap-secrets.yml for security reasons. Add in your own certificates and private keys issued by your own certificate authority.

Within the ConfigMap that we created earlier, there are two separate parts to the config options that load the relevant certs: one deals with TLS configuration for the REST layer of Elasticsearch, and the other deals with the SSL configuration on the transport layer of Elasticsearch. You can see these in the comments in the elasticsearch.yml portion of the ConfigMap. The passphrases for both the transport layer and REST layer private keys are loaded into the containers using environment variables, which we will go over in a later section.
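For reference, the Open Distro security settings in elasticsearch.yml follow this pattern. The certificate file names below mirror the Secrets keys described above but are illustrative; use the paths where your certificates are actually mounted, and the DN of your own admin certificate:

    # Sketch of the security plugin's TLS settings; file names and the admin DN are illustrative
    opendistro_security.ssl.transport.pemcert_filepath: elk-crt.pem
    opendistro_security.ssl.transport.pemkey_filepath: elk-key.pem
    opendistro_security.ssl.transport.pemtrustedcas_filepath: elk-root-ca.pem
    opendistro_security.ssl.http.enabled: true
    opendistro_security.ssl.http.pemcert_filepath: elk-crt.pem
    opendistro_security.ssl.http.pemkey_filepath: elk-key.pem
    opendistro_security.ssl.http.pemtrustedcas_filepath: elk-root-ca.pem
    opendistro_security.authcz.admin_dn:
      - "CN=admin.example.com"   # DN of the admin certificate (illustrative)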

Node configuration and deployment

Master nodes

Create a Kubernetes Deployment resource for deploying three Elasticsearch master nodes using file 40-es-master-deploy.yml. Some parameters in this deployment need to be configured to suit your own environment. Within spec.template.annotations in file 40-es-master-deploy.yml, supply the name of an AWS IAM role that the pod is able to assume, using the KIAM iam.amazonaws.com/role annotation:

    annotations:
      iam.amazonaws.com/role: <ARN_OF_IAM_ROLE_FOR_CONTAINER>

Change the annotation's value to the desired IAM role's name. This allows the Elasticsearch nodes to access the AWS API securely.

The number of master nodes can be changed by changing the value of spec.replicas to the desired number:

  replicas: 3 # Number of Elasticsearch master nodes to deploy

Elasticsearch best practices recommend three master nodes to avoid data synchronization errors and split-brain scenarios.
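With three master-eligible nodes, Elasticsearch 6.x should also be configured with a quorum of two so that a network partition cannot produce two masters; this is set in elasticsearch.yml:

    # Quorum for three master-eligible nodes: (3 / 2) + 1 = 2
    discovery.zen.minimum_master_nodes: 2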

Environment variables within the containers supply the passphrases for the private keys used for TLS by both the transport socket and the HTTP layer in Open Distro. The env section of the manifest configures both passphrases:


The ConfigMap and Secrets resources that we created earlier are loaded as volumes under the volumes spec:

  - name: config
    configMap:
      name: elasticsearch
  - name: certs
    secret:
      secretName: elasticsearch-tls-data

These volumes are then mounted onto the containers using the volumeMounts spec where the config and certificate files are loaded onto the designated file paths:

    - mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
      name: config
      subPath: elasticsearch.yml
    - mountPath: /usr/share/elasticsearch/config/logging.yml
      name: config
      subPath: logging.yml
    - ... (see the source file for the full text)

An initContainers script is used to increase the vm.max_map_count value to 262144 for the worker node by default (otherwise, virtual memory allocation by the operating system will be too low for Elasticsearch to index data without running into out-of-memory exceptions). This is explained in greater detail in “Virtual Memory” in the Elasticsearch documentation.
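That initContainers entry follows the common sysctl pattern (the container name here is an assumption):

    initContainers:
    - name: init-sysctl
      image: busybox
      command: ['sysctl', '-w', 'vm.max_map_count=262144']
      securityContext:
        privileged: true    # required to change kernel settings on the worker node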

Since we are using node labels to define which worker nodes these pods will be deployed to, the following affinity spec is required, with appropriate node labels:

        - matchExpressions:
          - key: type # Replace this with corresponding worker node label's key
            operator: In
            values:
            - general # Replace this with corresponding worker node label's value

These key/value pairs should be changed depending on the worker node labels being used in the EKS cluster’s setup.

We are also exposing ports 9200 (HTTPS/REST API Access), 9300 (Transport Socket), and 9600 (Metrics Access) on the resulting containers to allow for clustering. This is done using the following section of the Kubernetes Deployment manifest:

    - containerPort: 9300
      name: transport
    - containerPort: 9200
      name: http
    - containerPort: 9600
      name: metrics

Once all the aforementioned configuration is completed, you can deploy the master nodes:

$ kubectl apply -f 40-es-master-deploy.yml

Check for the master node pods to come up:

$ kubectl -n elasticsearch get pods

If the master nodes are up and running correctly, the output should look like:

NAME                         READY     STATUS    RESTARTS   AGE
es-master-78f97f98d9-275sl   1/1       Running   0          1d
es-master-78f97f98d9-kwqxt   1/1       Running   0          1d
es-master-78f97f98d9-lp6bn   1/1       Running   0          1d

You can see whether the master nodes are successfully running by checking the log output of any of these master nodes:

$ kubectl -n elasticsearch logs -f es-master-78f97f98d9-275sl

If the log output contains the following message string, it means that the Elasticsearch master nodes have clustered successfully:

[2019-04-04T06:34:16,816][INFO ][o.e.c.s.ClusterApplierService] [es-master-78f97f98d9-275sl] detected_master {es-master-78f97f98d9-kwqxt}

Client nodes

Create a Kubernetes Deployment resource for deploying two Elasticsearch client/coordinating nodes using file 50-es-client-deploy.yml. There are certain parameters in this file that need to be configured for your deployment.

Within the spec.template.annotations in file 50-es-client-deploy.yml, you need to supply a role name that the pod is able to assume from AWS IAM, just as for the master nodes, by setting the annotation's value to <ARN_OF_IAM_ROLE_FOR_CONTAINER>.

The number of client nodes can be changed by changing the value of spec.replicas to the desired number:

  replicas: 2 # Number of Elasticsearch client nodes to deploy

We have chosen two coordinating/client nodes for this test cluster. This number can be increased based on the volume of incoming traffic.

Replace the values of the following environment variables with the same private key passphrases provided to the master node deployment:


The same ConfigMap and Secrets resources are used by this deployment as with the master node deployment earlier. They have the same configuration and the same method of bootstrapping so we will skip that to avoid repetition in this section.

Virtual memory allocation is performed in the same way as for the master node deployment.
One key difference to note is the Weighted Anti-Affinity applied to this client node deployment, to prevent the client nodes from scheduling on the same worker nodes as the master nodes:

        - weight: 1
          podAffinityTerm:
            topologyKey: kubernetes.io/hostname
            labelSelector:
              matchLabels:
                component: elasticsearch
                role: client

The same port configuration as the master node deployment applies here.

Once all the aforementioned configuration is completed, you can deploy the client nodes:

$ kubectl apply -f 50-es-client-deploy.yml

Check for the client node pods to come up:

$ kubectl -n elasticsearch get pods

If the client nodes are up and running, your output will look like:

NAME                         READY     STATUS    RESTARTS   AGE
es-client-855f48886-75cz8    1/1       Running   0          1d
es-client-855f48886-r4vzn    1/1       Running   0          1d
es-master-78f97f98d9-275sl   1/1       Running   0          1d
es-master-78f97f98d9-kwqxt   1/1       Running   0          1d
es-master-78f97f98d9-lp6bn   1/1       Running   0          1d

You can see if the client nodes are successfully running by checking the log output of any of these client nodes:

$ kubectl -n elasticsearch logs -f es-client-855f48886-75cz8

If the log output contains the following message string, the Elasticsearch client nodes have clustered successfully:

[2019-04-04T06:35:57,180][INFO ][o.e.c.s.ClusterApplierService] [es-client-855f48886-75cz8] detected_master {es-master-78f97f98d9-kwqxt}

Data nodes

Create a Kubernetes Service resource for a Kubernetes internal ingress point to the Elasticsearch data nodes, using file 60-es-data-svc.yml:

$ kubectl apply -f 60-es-data-svc.yml

This will create a local Service resource within the EKS cluster to allow access to the Elasticsearch data nodes.

Create a Kubernetes StatefulSet resource for deploying three Elasticsearch data nodes using file 70-es-data-sts.yml. These are stateful nodes that will be storing the indexed data.

Some parameters need to be configured specifically for your own deployment.

Within the spec.template.annotations in file 70-es-data-sts.yml, you need to supply a role name that the pod is able to assume from AWS IAM, using the same annotation as the master and client deployments:


The number of data nodes can be changed by changing the value of spec.replicas:

  replicas: 3 # Number of Elasticsearch data nodes to deploy

We have chosen three data nodes for this test cluster. This number can be increased based on your own requirements.

Replace the values of the following environment variables with the same private key passphrases provided to the master and client node deployments as shown:


The same ConfigMap and Secrets resources are used by this deployment as with the master and client node deployments earlier. They use the same configuration and the same method of bootstrapping, so we won’t repeat those in this section.

Virtual memory allocation is performed in the same way as for the master node deployment.
The serviceName: elasticsearch-data definition is configured to use the data service we created earlier in the 60-es-data-svc.yml file.

The volumeClaimTemplates section in the 70-es-data-sts.yml file provisions storage volumes for these stateful nodes:

  - metadata:
      name: data
    spec:
      accessModes: [ ReadWriteOnce ]
      storageClassName: elk-gp2
      resources:
        requests:
          storage: 2Ti

This defines the provisioning of EBS storage volumes for every pod in the StatefulSet and attaches the storage volume to the pod as a mount point. The storageClassName key referenced here is the name of the StorageClass resource that we defined initially in file 25-es-sc-gp2.yml.

We also use an extra initContainers section here to allow the Elasticsearch user with UID and GID 1000 read and write permissions to the provisioned EBS volume, using the fixmount script:

  - name: fixmount
    image: busybox
    command: [ 'sh', '-c', 'chown -R 1000:1000 /usr/share/elasticsearch/data' ]
    volumeMounts:
      - mountPath: /usr/share/elasticsearch/data
        name: data

Once all configuration has been completed, we can deploy these Elasticsearch data nodes:

$ kubectl apply -f 70-es-data-sts.yml

Check for the data node pods to come up:

$ kubectl -n elasticsearch get pods

If the data nodes are up and running, your output will look like:

NAME                         READY     STATUS    RESTARTS   AGE
es-client-855f48886-75cz8    1/1       Running   0          1d
es-client-855f48886-r4vzn    1/1       Running   0          1d
es-data-0                    1/1       Running   0          1d
es-data-1                    1/1       Running   0          1d
es-data-2                    1/1       Running   0          1d
es-master-78f97f98d9-275sl   1/1       Running   0          1d
es-master-78f97f98d9-kwqxt   1/1       Running   0          1d
es-master-78f97f98d9-lp6bn   1/1       Running   0          1d

You can see whether the data nodes are successfully running by checking the log output of any of these data nodes:

$ kubectl -n elasticsearch logs -f es-data-0

If the log output contains the following message string, the Elasticsearch data nodes have clustered successfully and are ready to start indexing data:

[2019-04-04T06:37:57,208][INFO ][o.e.c.s.ClusterApplierService] [es-data-0] detected_master {es-master-78f97f98d9-kwqxt}

At this point the cluster has been successfully deployed, but you still need to initialize it with the default users and their passwords. This security initialization is covered in the following section.

Cluster security initialization

As described in the documentation for Open Distro for Elasticsearch, after deployment is complete, a cluster has to be initialized with security before it can be made available for use.

This is done through two files that reside on the containers running Open Distro for Elasticsearch.
To start the initialization process, use the following command to gain shell access to one of the master nodes:

$ kubectl -n elasticsearch exec -it es-master-78f97f98d9-275sl -- bash

Once you have shell access to the running Elasticsearch pod, navigate to the Open Distro tools directory with cd /usr/share/elasticsearch/plugins/opendistro_security/tools and execute:

$ chmod +x hash.sh

This will make the password hashing script executable. Now you can use this script to generate bcrypt hashed passwords for your default users. The default users can be seen in file /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml which by default looks like the following example for the admin user (I’ve omitted the rest of the file for brevity):

# This is the internal user database
# The hash value is a bcrypt hash and can be generated with plugin/tools/hash.sh

admin:
  # Still using default password: admin
  readonly: true
  hash: $2y$12$SFNvhLHf7MPCpRCq00o/BuU8GMdcD.7BymhT80YHNISBHsEXAMPLE
  roles:
    - admin
  attributes:
    #no dots allowed in attribute names
    attribute1: value1
    attribute2: value2
    attribute3: value3

To change the passwords in the internal_users.yml file, start by generating hashed passwords for each user in the file using the script:

$ ./hash.sh -p <password you want to hash>

For example, if I want to change the password of the admin user, I would do the following:

$ ./hash.sh -p ThisIsAStrongPassword9876212

The output string is the bcrypt hashed password. We will now replace the hash for the admin user in the internal_users.yml file with this hash.

This snippet shows an updated internal_users.yml file:

# This is the internal user database
# The hash value is a bcrypt hash and can be generated with plugin/tools/hash.sh

admin:
  # Password changed for user admin
  readonly: true
  hash: $2y$12$yMchvPrjvqbwweYihFiDyePfUj3CEqgps3X1ACciPjtbibEXAMPLE
  roles:
    - admin
  attributes:
    #no dots allowed in attribute names
    attribute1: value1
    attribute2: value2
    attribute3: value3

This step needs to be performed for all users within the internal_users.yml file. Do this for each user defined in this file and store the plaintext version of the password securely, as some of these will be required in the future.

The initialization process is performed using the /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh script.

The initialization command requires certain parameters, and should look like:

$ /usr/share/elasticsearch/plugins/opendistro_security/tools/securityadmin.sh -cacert /usr/share/elasticsearch/config/admin-root-ca.pem -cert /usr/share/elasticsearch/config/admin-crt.pem -key /usr/share/elasticsearch/config/admin-key.pem -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -keypass <replace-with-passphrase-for-admin-private-key> -h <replace-with-IP-of-master-nodes> -nhnv -icl

This command specifies what admin client TLS certificate and private key to use to execute the script successfully. This is the second set of certificates that we loaded earlier as part of the ConfigMap for our cluster deployment.

The -cd flag specifies the directory in which the initialization configs are stored. The -keypass flag must be set to the passphrase chosen when the admin client private key was generated. The -h flag specifies what hostname to use, in this case the internal IP address of the pod we’re shelling into.
If it runs successfully and is able to initialize the cluster, the output will look like:

Open Distro Security Admin v6
Will connect to ... done
Elasticsearch Version: 6.5.4
Open Distro Security Version:
Connected as
Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW clusterstate ...
Clustername: logs
Clusterstate: GREEN
Number of nodes: 8
Number of data nodes: 3
.opendistro_security index already exists, so we do not need to create one.
Populate config from /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/
Will update 'security/config' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/config.yml
   SUCC: Configuration for 'config' created or updated
Will update 'security/roles' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/roles.yml
   SUCC: Configuration for 'roles' created or updated
Will update 'security/rolesmapping' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/roles_mapping.yml
   SUCC: Configuration for 'rolesmapping' created or updated
Will update 'security/internalusers' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/internal_users.yml
   SUCC: Configuration for 'internalusers' created or updated
Will update 'security/actiongroups' with /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/action_groups.yml
   SUCC: Configuration for 'actiongroups' created or updated
Done with success

The Open Distro for Elasticsearch cluster has now been successfully deployed, configured, and initialized! You can now move on to a Kibana deployment.

Kibana Deployment

After Elasticsearch is running successfully, you will need to access it through the Kibana UI.
From the root of the community repository you cloned earlier, navigate to the open-distro-elasticsearch-kubernetes folder:

$ cd open-distro-elasticsearch-kubernetes

Once there, navigate to the kibana subfolder using command cd kibana.
Then create a Kubernetes namespace to house the Kibana assets using file 10-kb-namespace.yml:

$ kubectl apply -f 10-kb-namespace.yml

Create a Kubernetes ConfigMap resource which will be used to bootstrap Kibana’s main config file kibana.yml onto the Kibana container upon deployment using file 20-kb-configmap.yml:

$ kubectl apply -f 20-kb-configmap.yml

Create a Kubernetes Secrets resource for bootstrapping TLS certificates and private keys for TLS configuration on the Kibana pods using file 25-kb-bootstrap-secrets.yml:

$ kubectl apply -f 25-kb-bootstrap-secrets.yml

Within the 25-kb-bootstrap-secrets.yml file, you will replace empty certificate data sections with each of your relevant certificates and private keys.

Replace the elasticsearch.url parameter with the DNS name you chose when deploying the Service for Elasticsearch in file 35-es-service.yml.

Create a Kubernetes Deployment resource for deploying a single Kibana node using file 30-kb-deploy.yml:

$ kubectl apply -f 30-kb-deploy.yml

Within this deployment are several environment variables:

    - name: CLUSTER_NAME
      value: logs
    - name: ELASTICSEARCH_USER        # user Kibana authenticates as
      value: kibanaserver
    - name: ELASTICSEARCH_URL         # Replace with URL of Elasticsearch API
      value: <URL_OF_ELASTICSEARCH_API>
    - name: ELASTICSEARCH_PASSWORD    # Replace with password chosen during cluster initialization
      value: <PASSWORD_CHOSEN_DURING_CLUSTER_INITIALIZATION>
    - name: KEY_PASSPHRASE            # Replace with key passphrase for key used to generate Kibana TLS cert
      value: <PASSPHRASE_FOR_KIBANA_TLS_PRIVATE_KEY>
    - name: COOKIE_PASS               # 32-character random string to be used as cookie password by security plugin
      value: <COOKIE_PASS_FOR_SECURITY_PLUGIN_32CHARS>

Environment variables that require configuration are commented within the 30-kb-deploy.yml file.
Begin by replacing the <URL_OF_ELASTICSEARCH_API> placeholder with the DNS name you chose during the Elasticsearch deployment. This was configured in file 35-es-service.yml under the external-dns.alpha.kubernetes.io/hostname annotation.

Replace the <PASSWORD_CHOSEN_DURING_CLUSTER_INITIALIZATION> with the password set for the kibanaserver user during cluster initialization.

Next, replace <PASSPHRASE_FOR_KIBANA_TLS_PRIVATE_KEY> with the passphrase chosen for the private key of the bootstrapped Kibana TLS certificate.

Lastly, replace <COOKIE_PASS_FOR_SECURITY_PLUGIN_32CHARS> with a 32-character random string which will be used in encrypted session cookies by the security plugin.
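If you need a quick way to produce that 32-character string, any good random source works; for example, with openssl (assumed to be installed on your workstation):

```shell
# 16 random bytes, hex-encoded, yields exactly 32 characters
COOKIE_PASS="$(openssl rand -hex 16)"
echo "$COOKIE_PASS"
```

Paste the resulting string into the COOKIE_PASS value before deploying.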

Create a Kubernetes Service resource type for an ingress point to Kibana’s Web UI, which uses annotations to create a corresponding external-facing Network Load Balancer in AWS. This allows ingress into the cluster using file 40-kb-service.yml:

$ kubectl apply -f 40-kb-service.yml

This Service deployment will create an external-facing Network Load Balancer for UI access to Kibana, and will map port 443 to port 5601 on which the Kibana API runs.

It will also register the Network Load Balancer with an ACM certificate for the chosen DNS hostname, as long as it is provided with a valid ACM certificate ARN under the service.beta.kubernetes.io/aws-load-balancer-ssl-cert annotation.

Kibana will take a few moments to get up and running. Once Kibana is running, you should be able to access the Kibana UI using the DNS address you chose when you deployed the Kibana service using file 40-kb-service.yml.

This hostname was set in the Kibana Service manifest's external-dns.alpha.kubernetes.io/hostname annotation. Once Kibana is available you will see:


Open Distro for Elasticsearch Kibana login.


Log in with your previously configured admin credentials to gain access to the cluster and use Elasticsearch.


Congratulations! You now have a production-grade deployment of Open Distro for Elasticsearch. You deployed three master nodes, two client nodes, and three data nodes. You secured your cluster with internal TLS and role-based access control. You can easily scale or rescale your cluster to fit your workload.

Have an issue or question? Want to contribute? You can get help and discuss Open Distro for Elasticsearch on our forums. You can file issues here.


Thanks to:

  • Zack Doherty (Senior SRE – Tinder Engineering) for all his help with Kubernetes internals.
  • Pires for his work on the Open-source Elasticsearch deployment in Kubernetes.
  • The Open Distro for Elasticsearch team for their support and guidance in writing this post.
  • The Ring Security Team for their support and encouragement.

from AWS Open Source Blog